Virtual Reality Tools for Internet Robotics

Proceedings of the 2001 IEEE International Conference on Robotics & Automation Seoul, Korea • May 21-26, 2001

Igor R. Belousov

Ryad Chellali

Gordon J. Clapworthy

Keldysh Institute of Applied Mathematics, Russian Academy of Sciences, 4 Miusskaya Square, Moscow 125047, Russia. [email protected]

l'Institut de Recherche en Communications et Cybernetique de Nantes (IRCCyN), 1 rue de la Noe, BP 92 101, 44321 Nantes, France. [email protected]

De Montfort University, Dept. of Computer & Information Sciences, Hammerwood Gate, Kents Hill, Milton Keynes MK7 6HP, United Kingdom. [email protected]

Abstract

A virtual control environment for robot teleoperation via the Internet is presented. It comprises a Java3D-based real-time virtual representation of the robot and worksite, and uses a graphic panel, an environment for remote robot programming, and a dataglove with a 6D position tracker as the control interfaces. The use of Virtual Reality (VR) techniques for Internet teleoperation allows: (1) the time delays inherent in IP networks to be suppressed, and (2) the operator's work to be simplified and accelerated, compared to methods that use delayed TV images. The system realisation, with its use of the open technologies Java and Java3D and a 3-tier client/server architecture, provides portability among different computer platforms and types of robots. The efficiency of the VR-based methods developed has been verified at slow communication rates (0.1-0.5 KB/sec), where TV-based control methods are inapplicable. VR systems have been developed for the WWW-based control of the PUMA and CRS industrial robot manipulators. The particulars of these systems, the experiments undertaken, current issues, and directions of future work are presented.

1. Introduction

Robot control via the Internet is a growing and highly promising branch of scientific research, industry and entertainment. Possible applications of Internet robotics include remote education, remote manufacturing [1], virtual visits to places of attraction (museums, parks, etc.) [2], and the remote control of personal robots in the office or the home. Internet media have even been used for controlling objects in space (e.g. operations for the Mars Polar Lander mission [3]). However, progress in this area has been hindered by current Internet limitations on communication bandwidth.

Several systems have been under permanent on-line control over the last few years (a list of active systems providing free access through Web browsers is presented on the NASA Telerobotics Web page [4]). While they give all Internet users the important possibility of controlling the robots, these systems also have some disadvantages; in particular, none of them allows effective control because of the extremely slow replies of the system to the operator's actions. Control within all of these systems is based purely on TV images, and these images are severely delayed (from a few seconds to several minutes).

To avoid this limitation, methods of virtual 3D display of the robot and worksite should be applied. Some current Internet-based systems provide virtual displays of the current state of the robot. However, these are either purely 2D displays (for mobile robot control using 2D active maps [2, 5]), or 3D displays that allow only simulation of the robot motion (e.g. for mission preparation), rather than real-time control [3].

Our goal was to use a dynamic, 3D virtual environment for real-time, on-line robot control. A 3D reconstruction of the robot's working environment was produced, and the robot, and the objects with which the robot interacts, were placed into it. The operator sends the control commands to the robot, and small data parcels containing the current coordinates of the robot and the objects are transmitted to the visualisation module at the operator's site. These are then rendered in real time, allowing the time delays to be suppressed (Fig. 1). This scheme considerably reduces system traffic compared to TV-image transmission (see subsection 2.3 for an estimate of the size of the data parcels). It allows the robots to be controlled successfully even when communication rates are slow (0.1-0.5 KB/sec).

Figure 1. System operation.

This method has been tested for Internet-based control of the PUMA robot manipulator [6-8] and the mobile robot "Diligent" (Nomadic XR4400) [9], and has been shown to provide significantly improved efficiency in the operator's work. In this paper we present the latest results of the system-architecture development, new experiments on the control of the PUMA via the Internet, a new system for Internet-based control of the CRS A465 robot manipulator, and experiments with this robot. The novelty of our system in the Internet robotics domain is also defined by the use of a tool for remote robot programming [6, 7]. This gives the operator the useful possibility of programming complicated robot actions, such as pick-and-place operations and assembly, within the control environment during the current control session. This significantly simplifies the problem of remote robot control.



The paper is organised as follows. Section 2 describes the hardware, the software architecture and the data flows between the system components. Methods for 3D reconstruction of the robot working space are presented in Section 3. Section 4 describes the VR-based robot-control environment: the Java3D visualisation of the robot site, and the VR control interface (a dataglove with a 6D position tracker). Section 5 describes the experiments on Internet-based control of the PUMA and CRS robot manipulators, and the directions of future experiments. Section 6 concludes the paper, reflecting on the key features and novel aspects of the system developed.

2. System architecture

This section presents the system hardware, the software architecture and the organisation of the data exchange (Fig. 2). We emphasise the solutions chosen to provide a fast system response to operator actions, and system portability among different computer platforms and types of robots.

Figure 2. System architecture.

2.1 Hardware architecture

The system for robot control via the Internet contains three main parts: the robot, the server and the client. The robot part consists of the robot manipulator itself and the robot controller. We used the PUMA 560 and the CRS A465; both are 6-DOF articulated manipulators with the same kinematic scheme and similar dimensions. The server part contains the server computer and a TV camera with an image-acquisition board. For control of the PUMA and CRS manipulators, we used PCs as the server computers, with Intel Pentium/166 and Intel Pentium II/500 processors, respectively. The client part contains a computer and, for the CRS manipulator only, a dataglove with a 6D tracker to control the robot.

Since all client software is realised using the open technologies Java and Java3D, any type of computer and operating system can be used as the client (clients on PC/Windows and SUN/Solaris platforms have been tested). The only restriction is that the client computer should be powerful enough to render Java3D scenes at an acceptable frame rate (an OpenGL accelerator and 64 MB RAM are desirable). The robot controllers were connected to the server computer via an RS232 serial interface. Communication between the server and client parts was performed using TCP/IP. The dataglove and position tracker were plugged into the client computer via an RS232 interface.

2.2 Software architecture

The software of the robot part of the system provides communication with the server part (bi-directional data exchange) and control of the robot itself. It has been realised using VAL-type languages. The server part of the system contains the software modules for the data exchange with both the robot and the client parts, and a module for TV-image processing. Two realisations of the server (Java and C++) were created.

The software of the client part consists of modules for communication with the server, a module for robot control (using a graphic control panel or a dataglove with position tracker), and modules for visualisation of the robot and working environment: the 3D graphic representation and TV images. All modules of the client have been realised using Java and Java3D, and can work either as a Java application or as an applet running in a Web page.

Standard software needed for the system operation includes: (1) for the server part, a Web server if the client is to be used as an applet (the WebSite 1.1 Web server has been used); (2) for the client part, Java2 with the Java3D extension. To use the client as an applet, any Web browser with JDK 1.1 support will do (such as MS Internet Explorer or Netscape, versions 4.0 and higher).

2.3 Communication protocol and data flows

The client sends two forms of communication to the server: control commands, and requests for the robot and object states. The control thread contains commands generated by the operator using the graphic control panel or the dataglove; the client sends these asynchronously. The second thread is an infinite loop in which the client requests the robot/object states several times per second, depending on the rate of the IP connection; the average frequency was about 5 data sets per second.
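The client code itself is not reproduced in this paper; the following minimal Java sketch merely illustrates how the two communication threads could be organised, assuming a line-oriented ASCII protocol and hypothetical command codes (10 for a motion command, 20 for a state request).

```java
import java.io.*;
import java.net.*;

/** Minimal sketch of the client's two communication threads.
 *  The command codes (10 = move, 20 = state request) are hypothetical. */
public class RobotClient {
    private final PrintWriter out;
    private final BufferedReader in;

    public RobotClient(String host, int port) throws IOException {
        Socket socket = new Socket(host, port);
        out = new PrintWriter(socket.getOutputStream(), true);
        in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    /** Control thread: invoked asynchronously whenever the operator acts. */
    public synchronized void sendCommand(String asciiParcel) {
        out.println(asciiParcel);                  // e.g. "10 900 -350 420 0 900 0"
    }

    /** State thread: infinite polling loop, roughly 5 requests per second. */
    public void startStatePolling() {
        new Thread(() -> {
            try {
                while (true) {
                    synchronized (this) { out.println("20"); }  // state request
                    String state = in.readLine();   // e.g. "21 <6 joint values>"
                    updateVirtualScene(state);
                    Thread.sleep(200);              // ~5 data sets per second
                }
            } catch (IOException | InterruptedException e) {
                e.printStackTrace();
            }
        }).start();
    }

    private void updateVirtualScene(String state) {
        // Parse the parcel and refresh the Java3D scene graph (not shown).
    }
}
```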


Using these data, the client visualises the current state of the robot and working environment. Note that all kinematic transformations and motion-planning processes are implemented solely in the robot controller. This increases client and server productivity, and allows software previously developed for controlling different types of robots via the Internet to be reused almost without changes.

Connections between the client and server parts were organised using ASCII strings. Each data parcel is a string containing the code of the command (an integer value) and the associated parameters. To minimise the length of the parcels, all parameters were represented as integers. The most critical parcel, the one from the server containing the robot state parameters, has a size of 42 bytes, providing an accuracy of 0.05 mm and 0.05 degrees. The parcel with the object coordinates contains 38 bytes. Thus, 80 bytes must be received to enable the client to visualise a 3D model of the robot and the variable part of the environment in its current state. This allows a frame rate of 12 images/sec on a 1 KB/sec Internet connection, a sufficiently high scene-update rate for successful remote robot control, even on low-bandwidth connections that are unsuitable for methods relying on image transmission.
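The exact field layout of the parcels is not specified here; the sketch below illustrates the kind of integer fixed-point ASCII encoding described above, assuming a hypothetical reply code and a resolution of 0.05 degrees per integer unit.

```java
/** Sketch of a fixed-point ASCII state parcel (layout assumed, not taken from
 *  the paper). One integer unit = 0.05 deg, so 0.05 deg accuracy survives. */
public final class StateParcel {
    private static final double SCALE = 20.0;   // units per degree (1 / 0.05)

    /** Encode a state reply: command code (21 is hypothetical) + joint angles. */
    public static String encode(int code, double[] jointsDeg) {
        StringBuilder sb = new StringBuilder();
        sb.append(code);
        for (double q : jointsDeg) {
            sb.append(' ').append(Math.round(q * SCALE));  // integer fixed-point
        }
        return sb.toString();                   // e.g. "21 1800 -900 0 450 0 900"
    }

    /** Decode a parcel back to joint angles in degrees. */
    public static double[] decode(String parcel) {
        String[] tok = parcel.split(" ");
        double[] joints = new double[tok.length - 1];
        for (int i = 1; i < tok.length; i++) {
            joints[i - 1] = Integer.parseInt(tok[i]) / SCALE;
        }
        return joints;
    }
}
```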

3. 3D scene reconstruction

As indicated above, if one employs a video channel to support visual feedback, the data flow exchanged between the client and the server (the master and the slave systems) is not suitable for real-time Internet-based applications: the available bandwidth will be insufficient, and delays will appear, making the system unstable. Therefore, we use a synthetic representation of the remote scene. Unfortunately, this has two deficiencies:

• it does not include the complete remote scene model;

• it does not take into account the real configurations of the remote objects.

Indeed, knowledge of the remote environment is limited to the robot manipulator and a few specific objects. Likewise, knowledge of the dynamics of these objects is limited to the manipulator (only the manipulator is equipped with sensors). To obtain a closed visual loop, one needs continually to refresh the synthetic scene with respect to the real one. This is achieved in two ways:

• for known objects, we calculate their geometrical configuration;

• for unknown objects, we detect their presence and superimpose their image on the virtual scene.

3.1 Known scene objects - 3D construction

The first step in building the 3D geometry of the remote scene is to generate a virtual camera with the same parameters as the real one. This is achieved using the Tsai calibration method [13]. The second step is to use a developed library of known objects. This is a collection of objects constructed by hand. For instance, for the "puzzle experiment", 5 objects were designed. We integrate the objects into the environment incrementally, i.e. we first superimpose the virtual object on the real image and then adjust its parameters until it matches its real image. This process is repeated for all the known objects.

Figure 3. LEGO scene construction.
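Reference [13] defines the camera model against which we calibrate. As a rough illustration only (not our calibration code, and with all parameter values assumed to come from the calibration step), projecting a known object's world point through a Tsai-style camera looks like this:

```java
import javax.vecmath.Matrix3d;
import javax.vecmath.Point3d;
import javax.vecmath.Vector3d;

/** Sketch of a Tsai-style camera model [13]: rigid world-to-camera transform,
 *  perspective projection, and one radial-distortion coefficient. */
public class TsaiCamera {
    private final Matrix3d R;        // extrinsic rotation, world -> camera
    private final Vector3d t;        // extrinsic translation
    private final double f;          // effective focal length
    private final double k1;         // first radial-distortion coefficient
    private final double sx;         // horizontal scale factor
    private final double dx, dy;     // effective pixel sizes on the sensor
    private final double cx, cy;     // image centre, in pixels

    public TsaiCamera(Matrix3d R, Vector3d t, double f, double k1,
                      double sx, double dx, double dy, double cx, double cy) {
        this.R = R; this.t = t; this.f = f; this.k1 = k1;
        this.sx = sx; this.dx = dx; this.dy = dy; this.cx = cx; this.cy = cy;
    }

    /** Projects a world point to pixel coordinates (u, v). */
    public double[] project(Point3d world) {
        Point3d pc = new Point3d(world);
        R.transform(pc);                     // rotate into the camera frame
        pc.add(t);                           // then translate
        double xu = f * pc.x / pc.z;         // ideal (undistorted) image point
        double yu = f * pc.y / pc.z;
        double r2 = xu * xu + yu * yu;
        double xd = xu / (1.0 + k1 * r2);    // approximate radial distortion;
        double yd = yu / (1.0 + k1 * r2);    // Tsai defines xu = xd(1 + k1*rd^2)
        return new double[] { sx * xd / dx + cx, yd / dy + cy };
    }
}
```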

3.2 Augmented virtual scenes

As stated above, the virtual model must be refreshed; this means that we have to verify that the geometrical parameters of the remote-scene objects are well estimated. Unfortunately, some errors may appear [11, 12]. For instance, the manipulator may be assumed to be gripping an object when, in the real situation, it is not. This situation is detected and signalled to the operator by superimposing the corresponding part of the real image on the synthetic one (Fig. 4).

Figure 4. Robot error detection.

In Fig. 5, the flat plane is assumed to be free of objects, but in the real scene a small box has appeared. This event is displayed to the master system by including the image of the small box in the virtual scene. The converse situation can also be detected: a disappearing object is signalled to the operator. For known objects, our approach enables us to maintain a realistic model by correcting estimation errors. The operator is thus fully informed about the remote environment with a minimal amount of data exchange.

Figure 5. Unexpected object appearing.
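As a sketch of how an unexpected object might be composited into the virtual view (the per-pixel luminance test, the threshold and the assumption of equally-sized, registered images are illustrative, not our implementation):

```java
import java.awt.image.BufferedImage;

/** Sketch of unknown-object handling: compare the real camera image with the
 *  rendered synthetic view, and copy any sufficiently different region into
 *  the synthetic image. Assumes both images are registered and equal in size. */
public class UnknownObjectOverlay {
    private static final int THRESHOLD = 40;   // per-pixel intensity difference

    /** Superimposes unexpected real-image regions onto the synthetic image. */
    public static void overlay(BufferedImage real, BufferedImage synthetic) {
        for (int y = 0; y < real.getHeight(); y++) {
            for (int x = 0; x < real.getWidth(); x++) {
                int diff = Math.abs(luma(real.getRGB(x, y))
                                  - luma(synthetic.getRGB(x, y)));
                if (diff > THRESHOLD) {
                    synthetic.setRGB(x, y, real.getRGB(x, y)); // show real pixels
                }
            }
        }
    }

    private static int luma(int rgb) {
        int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
        return (r * 299 + g * 587 + b * 114) / 1000;   // integer Rec.601 luma
    }
}
```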

4. Virtual control environment

This section describes the virtual reality tools developed to control the robots via the Internet: the virtual working environment, and the virtual control interface using a dataglove with a position tracker.

4.1 Java3D visualisation

Experiments on controlling the PUMA and CRS robots via the Internet have revealed that successful robot control based on TV-image information is impossible at existing Internet communication rates. TV images are substantially delayed, and their size and quality are insufficient for the operator to perform the required task, as the operator has difficulty in estimating the robot position and the distances between objects in the scene.

One way to provide suitable control conditions for the operator is to use a 3D model of the robot and the environment. This reduces the time delays to acceptable proportions and provides a fast response of the system to the operator's actions. Also important are the control possibilities provided by this environment, such as changing the viewpoint, zooming into and out of the scene, and using semitransparent images. 3D virtual representations of the PUMA and CRS robots and their working environments have been created using Java3D (the Web interface for CRS control is presented in Fig. 6).

Figure 6. Web interface for robot CRS control.

One of the techniques used to simplify task performance (e.g. grasping the upper box, Fig. 7) was as follows. We added a semitransparent copy of the goal object to the robot's grip. To perform the grasp, the operator had to bring the semitransparent image into coincidence with the goal object (Fig. 7). This method significantly simplified and accelerated task completion.

Figure 7. Use of transparency.

The open technologies used allow the virtual control environment to run on any type of computer platform. Moreover, any Internet user can employ this environment to control our robots from within a Web page using a standard Web browser. Since Java3D is based on the OpenGL library, it runs fast on computers with OpenGL acceleration boards. For the scene in Fig. 6, we achieved a refresh rate of 25 frames/sec on a medium-range PC. Java3D features such as automatic collision detection and the generation of stereo images also provide attractive possibilities for future stages of the work.
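Java3D makes the semitransparent-copy technique straightforward to express. The fragment below shows one way it might be set up; the geometry, dimensions, offset and transparency value are illustrative rather than taken from our scene.

```java
import javax.media.j3d.*;
import javax.vecmath.Vector3f;
import com.sun.j3d.utils.geometry.Box;

/** Sketch: attach a semitransparent "ghost" copy of the goal object to the
 *  gripper's TransformGroup, so that it moves with the robot's grip. */
public class GhostObject {
    public static void attachGhost(TransformGroup gripperTG) {
        // Blended transparency: 0.0 = opaque, 1.0 = fully transparent.
        Appearance ghostAppearance = new Appearance();
        ghostAppearance.setTransparencyAttributes(
            new TransparencyAttributes(TransparencyAttributes.BLENDED, 0.6f));

        // A box with the same dimensions as the goal object (here 4x2x2 cm).
        Box ghostBox = new Box(0.02f, 0.01f, 0.01f, ghostAppearance);

        // Offset the ghost so that it sits where a grasped object would be.
        Transform3D offset = new Transform3D();
        offset.setTranslation(new Vector3f(0.0f, 0.0f, 0.05f));
        TransformGroup ghostTG = new TransformGroup(offset);
        ghostTG.addChild(ghostBox);
        gripperTG.addChild(ghostTG);
    }
}
```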

4.2 Robot control with a dataglove

To enable transparent control of the robot, we implemented a dataglove interface. The CyberTouch dataglove is equipped with a Polhemus 6D tracker that provides the 6 parameters (position and orientation) of the operator's hand. Transparent control is a concept that enables operators unfamiliar with robotics to use the telerobot in a natural way: for instance, one need not think about the kinematics of the robot to handle an assembly task. We also developed software-based collision detection to provide virtual tactile feedback. This module is used both for manipulation tasks and for verifying the feasibility of trajectories. By combining visual and tactile feedback, we simplified the use of our telerobot and opened it to large-scale use.

Figure 8. The robot transparent control interface.
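The glove-handling software is not listed in this paper; the sketch below merely indicates how one cycle of such a control loop might look, with hypothetical Tracker, Glove and Scene interfaces standing in for the Polhemus and CyberTouch drivers and for our collision-detection module.

```java
import javax.vecmath.Matrix4d;
import javax.vecmath.Point3d;

/** Sketch of the dataglove control loop: map the tracked hand pose to a robot
 *  target and fire vibrotactile feedback on a virtual collision. */
public class GloveControlLoop {
    interface Tracker { Matrix4d handPose(); }          // 6D pose from the tracker
    interface Glove   { void vibrate(double level); }   // vibrotactile actuators
    interface Scene   { boolean collides(Matrix4d gripperPose); } // collision test

    private final Tracker tracker;
    private final Glove glove;
    private final Scene scene;
    private final Matrix4d handToRobot;   // fixed calibration, hand -> robot frame

    GloveControlLoop(Tracker t, Glove g, Scene s, Matrix4d calib) {
        tracker = t; glove = g; scene = s; handToRobot = calib;
    }

    /** One control cycle: pose in, ASCII move parcel out, tactile feedback back. */
    public String step() {
        Matrix4d target = new Matrix4d(handToRobot);
        target.mul(tracker.handPose());          // hand pose in the robot frame

        // Virtual tactile feedback: vibrate if the commanded pose would collide.
        glove.vibrate(scene.collides(target) ? 1.0 : 0.0);

        Point3d p = new Point3d();
        target.transform(p);                     // extract the translation part
        return "10 " + Math.round(p.x * 10) + " " + Math.round(p.y * 10)
                     + " " + Math.round(p.z * 10);   // move code 10 is assumed
    }
}
```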

5. Experiments on Internet robot control

Experiments on the control of the CRS robot manipulator (located in the Ecole des Mines de Nantes, France) and the PUMA (located in the Keldysh Institute of Applied Mathematics, Moscow, Russia) are presented.

5.1 CRS A465 control

The goal of the experiment with the CRS robot was to use the VR control interface to grasp an object (a LEGO box, Fig. 9).

Figure 9. Grasping the box (image from TV camera).

The first experiment with the robot was conducted in 1996 [14]; an ISDN line was used to handle a live-video-based "puzzle assembly" task. Recently, we tested the control of the CRS A465 over both a normal phone line (38,400 bit/s) and a GSM mobile phone line (9,600 bit/s). We found that the communication rates of both channels were insufficient to support video feedback; an inherent delay was introduced. Using the virtual augmented approach, effective control of the robot was possible: only pertinent data were exchanged, namely the robot controls, the robot generalised coordinates, the robot status and the position of the box to grasp. The system was successfully demonstrated during the "Days of Digital Technologies" in Montaigu (France), where the robot was controlled from the exhibition hall over a phone line and a mobile-phone connection, at a distance of over 50 km. The Web interface for this experiment is presented in Fig. 6.

5.2 PUMA 560 control

The PUMA robot, located in Moscow, has been controlled, on different occasions, from Milton Keynes (UK) and from Toulouse and Nantes (France). The goal of the experiments was to grasp a rod suspended on two threads attached to its ends (Fig. 10).

Figure 10. Grasping the rod (image from TV camera).

The communication rate in all of these sessions was extremely low, about 100 bytes/sec on average, so nearly 20 seconds were needed to receive each portion of the TV data. Accomplishing the task using only TV images was impossible. However, by using the virtual environment, the rod was grasped successfully.

Two types of communication scenario were used. In the first, the scene was redrawn when the robot arrived at the desired location; here, the typical delay in receiving the states of the robot and object and in refreshing the virtual environment was about 5 sec. In the second, the desired robot position was calculated at the server side and transmitted simultaneously to the robot controller and to the operator's control environment; here, the delay was about 1 sec. Working in the 3D virtual environment proved to be very comfortable for the operator; the time needed to perform the grasp varied from 1 to 2 minutes.

5.3 Current issues and directions of future experiments

While the experiments described above demonstrated the advantages of using virtual reality tools for Internet-based teleoperation, they also raised some further issues.


The first was temporary blocking of the data-exchange process when controlling the robot over a long distance (England-Russia, France-Russia). The second was the need to adjust the 3D model of the robot's working environment manually; this was necessary because the robot grip, and the parameters of the objects with which the robot was interacting, could not always be measured with the required accuracy. In such cases, corresponding amendments had to be added to the model at the calibration stage.

Future research and experiments will focus on the following goals. The first is the immersion of the operator in a virtual 3D environment using a stereo head-mounted display. The second is the development of methods that will allow interaction via the Internet not only with static objects but also, in real time, with moving objects. To achieve this goal, methods for predicting an object's motion will be applied [10]. Motion prediction will compensate for the delays caused by both TV-image processing and data transmission via the Internet.

6. Conclusion

Virtual reality tools for robot teleoperation via the Internet have been developed. The system comprises a Java3D-based real-time virtual representation of the robot and worksite, a graphic control panel, an environment for remote robot programming, and a dataglove with a 6D position tracker as the control interface. Two systems for the WWW-based control of the PUMA and CRS robot manipulators have been developed using these tools.

Experiments with these systems on robot control over long distances (England-Russia, France-Russia) with a low-speed Internet connection showed that only real-time 3D virtual environments permit efficient robot control. TV images carry intrinsic delays and should only be used, from time to time, to check that the system is operating normally, or that some particular event (grasping or dropping the object) has really occurred. Using a 3D virtual environment simplifies the performance of operations (allowing changes of viewpoint, zooming, the use of semitransparent images, etc.) and, most importantly, suppresses time delays by up to a factor of 20 compared to TV images. Realising the system with the open technologies Java and Java3D and a 3-tier client/server architecture provided portability among different computer platforms and types of robots.

Acknowledgements

The authors would like to thank Prof. Victor Sazonov of the Keldysh Institute of Applied Mathematics for his kind assistance in performing the experiments on the PUMA robot. This work was supported in part by INTAS grant YSF 99-4017.

References

[1] R. Luo, W. Lee et al., "Tele-Control of Rapid Prototyping Machine Via Internet for Automated Tele-Manufacturing", Proc. of the IEEE International Conference on Robotics & Automation ICRA'99, Detroit (USA), May 1999, pp. 2203-2208.

[2] S. Thrun, M. Bennewitz et al., "MINERVA: A Second Generation Museum Tour-Guide Robot", Proc. of the IEEE International Conference on Robotics & Automation ICRA'99, Detroit (USA), May 1999, pp. 1999-2005.

[3] P. Backes, K. Tso et al., "Internet-based Operations for the Mars Polar Lander Mission", Proc. of the IEEE International Conference on Robotics & Automation ICRA'2000, San Francisco (USA), April 2000, pp. 2025-2032.

[4] NASA Space Telerobotics Program, http://rainer.oact.hq.nasa.gov/telerobotics_page/telerobotics.shtm.

[5] S. Grange, T. Fong, C. Baur, "Effective Vehicle Teleoperation on the World Wide Web", Proc. of the IEEE International Conference on Robotics & Automation ICRA'2000, San Francisco (USA), April 2000, pp. 2007-2012.

[6] I. Belousov, J. Tan, G. Clapworthy, "Teleoperation and Java3D Visualization of a Robot Manipulator Over the World Wide Web", Proc. Information Visualisation IV'99, July 1999, pp. 543-548, IEEE Computer Society Press.


[7] J. Tan, I. Belousov, G. Clapworthy, "A Virtual Environment Based User Interface for Teleoperation of a Robot Using the Internet", Proc. Sixth UK VR-SIG Conference, Salford (UK), Sept. 1999, pp. 145-154.

[8] G. Clapworthy, I. Belousov et al., "Medical Visualisation, Biomechanics, Figure Animation and Robot Teleoperation: Themes and Links", NATO Advanced Research Workshop on the Convergence of Computer Vision & Computer Graphics, Kluwer, 2000 (to appear).

[9] R. Alami, I. Belousov, S. Fleury et al., "Diligent: Towards a Human-Friendly Navigation System", Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems IROS'2000, Takamatsu (Japan), Oct. 30 - Nov. 5, 2000.

[10] D. Okhotsimsky, A. Platonov, I. Belousov et al., "Real-Time Hand-Eye System: Interaction with Moving Objects", Proc. of the IEEE International Conference on Robotics & Automation ICRA'98, Leuven (Belgium), May 1998, pp. 1683-1688.

[11] C. Sayers, R. Paul, "An Operator Interface for Teleprogramming Employing Synthetic Fixtures", Presence, 1994.

[12] C. Sayers, M. Stein, A. Lai, R. Paul, "Teleprogramming to Perform Sophisticated Underwater Manipulative Tasks Using Acoustic Communications", Proc. of the IEEE Oceans'94, 1994.

[13] R. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics & Automation, 1987.

[14] A. Kheddar, C. Tzafestas, P. Coiffet, T. Kotoku, S. Kawabata, K. Iwamoto, K. Tanie, I. Mazon, C. Laugier, R. Chellali, "Parallel Multi-Robot Long Distance Teleoperation", Proc. of the International Conference on Advanced Robotics ICAR'97, Monterey (USA), July 1997.
