3D Scene Reconstruction Based on a Moving 2D Laser Range Finder for Service-Robots

Denis Klimentjew, Mehtap Arli, Jianwei Zhang
Dept. of Informatics, University of Hamburg, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany
{klimentjew, 1arli, zhang}@informatik.uni-hamburg.de

Abstract— This paper presents the simulation of a three-dimensional laser range finder based on a two-dimensional laser scanner and different moving units. We examine and describe two methods differing in the way the laser range finder is mounted; a Pan-Tilt-Unit and a robot manipulator are used as moving platforms. The qualities of the laser scanner and the resulting point density, which are very important for the system design, as well as the mathematical grounds for the reconstruction, are introduced and discussed in detail. We then review the resulting transformation matrices and the error compensation. For each moving unit, the registration method and the possible scan strategy are described and discussed. The concurrent application of both systems permits viewing the scene from different perspectives. The surroundings can be reconstructed with the help of mathematical transformations depending on the physical design; the resulting structure consists of unorganised point clouds. The achieved results can be visualised with OpenGL or Java3D and used for surface reconstruction. This way, typical robotic tasks like collision avoidance, grasp calculation, or handling of objects can be realised.

Index Terms— 3D reconstruction, laser range finder, unorganised point clouds, 3D model acquisition, robotics.

I. INTRODUCTION

Since stereo vision systems cost about a tenth of the price of laser range finders, they still constitute the most important interface between a robot and the outside world in the field of autonomous robots. However, as development progresses, the prices of laser range finders decrease, making them available to an increasing number of research groups. Both kinds of sensors offer different advantages for different tasks. The advantages of camera-based image acquisition are a wide field of view, a higher resolution and a shorter data acquisition time; colour information is available and the cost per image is low. A laser range finder, on the other hand, offers the direct acquisition of 2D points without computation overhead, an enormous number of points on surfaces, short acquisition times and independence of lighting conditions. These features render laser range finders ideal for the description of irregular surfaces. As a result, 2D laser scanners are increasingly used for the navigation of mobile robots. However, 3D laser scanners are still too expensive and rare, although their use would have enormous advantages: the depth information they supply permits safe collision avoidance and navigation. This is a real advantage over camera-based methods for depth reconstruction, as the quality of image-based algorithms for

depth reconstruction depends on several characteristics [1], e.g. texture or non-homogeneous regions. While such algorithms yield good results for edges and textured areas, they normally fail for homogeneous regions; there, most camera-based methods reconstruct the depth of continuous surfaces poorly. This is a problem especially for service robotics, where exact environmental models are very important: the handling of objects without an exact model of the environment is not possible in most cases. Combining a 2D laser range finder with a moving unit permits the simulation of a 3D laser range finder. In our experiments we use the URG-04LX [2] from HOKUYO AUTOMATIC CO., LTD. with the SOKUIKI sensor [3] to simulate a 3D laser scanner. The surroundings can be reconstructed with the help of mathematical transformations depending on the physical design, which results in a structure consisting of unorganised point clouds. This approach is quite new and has not been thoroughly examined to date. Recently, some research groups have started attempts to realise this concept. Most of these attempts are limited to a platform rotated to an exact position or to certain surroundings [4], or are too big and heavy for mobile robots [5]. Others are conceived specifically for certain tasks [6] or are still in the planning stage [7]. All of these methods use a stop-scan strategy and mostly allow only a certain number of laser range finder positions and thus a fixed point density, which limits the number of applications of the resulting systems. Different resolutions and angle positions of the laser scanner would make the perception system more flexible. Furthermore, the possibility of still using the laser range finder as a 2D sensor is another advantage. The surface reconstruction of unorganised point clouds has been an important research topic for many years. There are many algorithms and methods for surface reconstruction based on grid reduction of the surfaces, distances, or the shape of the objects or their surfaces [8]. These have permitted the realisation of many robotic tasks like collision avoidance, grasp calculation or the handling of objects. This paper is structured as follows: In section II, we present both systems as well as the single hardware components. In section III, the mathematical and theoretical fundamentals are described and discussed in detail. In section IV, we describe the implementation of our systems designed for 3D environmental reconstruction. We discuss our experimental results in section V, and

demonstrate the efficiency of the developed systems and methods. Finally, we present our conclusions and future work in section VI.

II. SYSTEM DESIGN

In this paper, we examine the simulation of a 3D laser range finder based on a 2D laser scanner mounted on two different platforms: a Pan-Tilt-Unit and a robot manipulator. The Pan-Tilt-Unit is a PTU-D46-17.5 from Directed Perception1, connected through a serial-to-USB cable. Its maximal speed is up to 300°/sec at a resolution of up to 0.05143°. We only use the tilt axis, which covers a range of 77.7° [−46.6457°, 31.0628°], and control the unit through a C/C++ interface. The first platform is shown in Fig. 1. The laser scanner is protected by vibration dampers which connect it to the moving platform.
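A quick consequence of these numbers (our own arithmetic, not a datasheet value): with a resolution of 0.05143°, the 77.7° tilt range offers up to 77.7 / 0.05143 ≈ 1511 distinct tilt positions, which is what makes the vertical point density of the simulated 3D scanner almost freely adjustable.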

Fig. 1. The Hokuyo laser scanner URG-04LX on the DirectedPerception PTU-D46.

The laser range finder mounted on both platforms is a Hokuyo URG-04LX with a SOKUIKI sensor [2][3], a compact (50 × 50 × 70 mm) and light (160 g) laser scanner. We use the SCIP 2.0 communication protocol, which permits a detectable distance range of 20 mm to 5600 mm. The URG delivers 682 distance values per scan through the USB connection and is powered by a 5 V DC supply; it has a 240° angular range and an angular resolution of 0.3515625° (360°/1024). The accuracy of the laser scanner is ±10 mm at distances between 20 mm and 1000 mm, and ±1% of the measured distance beyond that. The laser range finder needs approximately 100 ms for a complete scan. An advantage of both systems is the fact that the laser scanner can still be used in 2D without any physical changes. Depending on the different physical properties of the two platforms, different point densities result, as shown in Fig. 3.

Fig. 3. Point density of both platforms, PTU on the left and the robot arm on the right.

For the second platform (see Fig. 2) we use the Mitsubishi PA10-6C2 robot arm. This manipulator is part of TASER, our institute's service robot. The arm has 6 degrees of freedom and, for scanning, moves along a straight line in Cartesian space. The positioning accuracy of the manipulator lies within ±0.1 mm [9].

We examined three different HOKUYO URG-04LX laser range finders and tested them under different conditions, ascertaining some characteristics which are described in the following; these results are partially confirmed in [10]. The laser scanner needs a long “warm-up” time of up to 200 minutes, and the measurement error during this time is proportional to the distance. During the “warm-up” phase, the laser scanner delivers 20 - 30 % invalid (zero) values at places where the scanner receives no reflection.

Fig. 2. On the left TASER, the service robot of the TAMS Group, and on the right the robot arm as a moving platform over a table scene.

Fig. 4. The green line shows the number of invalid (zero) values in relation to the distance. The blue and red lines are the vertical and horizontal size of the laser beam respectively, depending on the distance.

Both systems permit exact and variable control, so that different scan distances and positions can be realised.

1 Directed Perception, http://www.nowa.com.tw/download/pdf/Directed
2 http://www.mitsubishi-heavy.de/c//content/4.html

After the “warm-up” time, the number of invalid (zero) values decreases to 7 - 9 %, regardless of the distance, see Fig. 4. The distance fluctuation is also reduced, to within 10 to 15 mm. Furthermore, it is not possible to detect any objects at 45°. In addition, dark velvet and reflective surfaces strongly falsify the results.

III. THEORETICAL BACKGROUND

After the introduction of the system design in section II, we describe the theoretical bases in this section.

Tab. 1. Laser beam diameter.

Distance (m)   Width (m)   Height (m)
0.25           0.002       0.002
0.50           0.005       0.004
0.75           0.006       0.005
1.00           0.010       0.008
1.50           0.013       0.010
2.00           0.017       0.014
2.50           0.022       0.018
3.00           0.028       0.022
3.50           0.034       0.026
4.00           0.040       0.031
4.50           0.044       0.037
5.00           0.048       0.042



One of the first tasks was the handling of the invalid (zero) values, i.e. compensating for those places where no measurement was available. To compensate for these errors, we use an interpolation procedure like that of Shepard [11]; its mathematical formulation is given in equation (1). In addition, we weight the horizontal neighbours more strongly than the vertical ones, because the laser beam extends further in the horizontal than in the vertical direction, see Tab. 1.

az_i = \frac{\sum_{j=1}^{n} z_j \, d_j^{-p}}{\sum_{j=1}^{n} d_j^{-p}}    (1)

where az_i is the interpolated average for point i, p is the exponent of the distance, d_j is the Euclidean distance to the j-th of the n nearest neighbours and z_j is the value of the j-th nearest neighbour.
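As an illustration, the following C/C++ sketch shows such an interpolation over the scan grid. The neighbourhood size and the anisotropic weights are our own illustrative assumptions, not the values used in the paper:

#include <cmath>
#include <vector>

// Shepard-style inverse-distance interpolation (eq. 1) for one invalid cell.
// scan[row][col] holds distances in mm; 0 marks an invalid (zero) value.
// Horizontal neighbours are weighted more strongly than vertical ones,
// since the beam extends further horizontally (see Tab. 1).
double interpolateZero(const std::vector<std::vector<double>>& scan,
                       int row, int col, double p = 2.0)
{
    const int    dr[8] = { 0,  0, -1,  1, -1, -1,  1,  1 };
    const int    dc[8] = {-1,  1,  0,  0, -1,  1, -1,  1 };
    const double w [8] = {1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0};

    double num = 0.0, den = 0.0;
    for (int k = 0; k < 8; ++k) {
        const int r = row + dr[k], c = col + dc[k];
        if (r < 0 || c < 0 || r >= (int)scan.size() || c >= (int)scan[r].size())
            continue;
        const double z = scan[r][c];
        if (z <= 0.0) continue;                       // skip other invalid cells
        const double d = std::sqrt((double)(dr[k]*dr[k] + dc[k]*dc[k]));
        num += w[k] * z * std::pow(d, -p);            // z_j * d_j^-p, weighted
        den += w[k] * std::pow(d, -p);
    }
    return den > 0.0 ? num / den : 0.0;               // 0 if no valid neighbour
}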

The resolution of both systems can be changed according to demand; it depends only on the resolution of the respective movement platform, see section II. A plane with 682 data points is scanned in 100 ms by the 2D laser range finder, a rotating-mirror device. Scanning the environment with the laser scanner and the Pan-Tilt-Unit or the robot manipulator follows a stop-scan strategy: to capture the 3D surroundings, the position of the moving platform is changed step by step. To create an exact and consistent model of the environment, the scans have to be merged into one coordinate system. This process is called registration and is described in [12]. The solution to the registration problem depends on the respective movement platform and was realised separately for each system in the present work.

First we introduce the solution for the system that uses the Pan-Tilt-Unit as moving platform. We define the coordinate origin on the ground under the middle of the laser range finder sensor and use a right-handed 3D coordinate system for both systems. This choice of the coordinate origin later permits an easier transfer to the TASER coordinate system. The [x, y, z] positions are calculated through trigonometric functions and a rotation around the y-axis. We use only the pitching direction, so only the tilt axis of the PTU is moved. The resulting homogeneous transformation matrix is

T = \begin{pmatrix}
c(\varphi)c(\theta)  & -s(\theta)c(\varphi) & s(\varphi) & -d_z\,s(\varphi/2) \\
s(\theta)            &  c(\theta)           & 0          & 0 \\
-c(\theta)s(\varphi) &  s(\theta)s(\varphi) & c(\varphi) & -d_z\,s(\varphi/2)\,c(\varphi/2) \\
0                    &  0                   & 0          & 1
\end{pmatrix}

where c and s stand for cos and sin respectively, θ is the in-plane angle of the current laser beam, ϕ is the current tilt angle of the Pan-Tilt-Unit, and d_z is the vertical offset of the sensor from the coordinate origin. After multiplication with the laser scanner data (data_LRF), the following expression results, which can be implemented simply and takes up only little computing time:

\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} =
\begin{pmatrix}
data_{LRF}\,\cos(\varphi)\cos(\theta) - d_z\,\sin(\varphi/2) \\
data_{LRF}\,\sin(\theta) \\
data_{LRF}\,\cos(\theta)\sin(\varphi) - d_z\,\sin(\varphi/2)\cos(\varphi/2) \\
1
\end{pmatrix}
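The following C/C++ sketch implements this transformation for one scan plane. It is a minimal sketch, not the authors' original code; in particular, the mapping of the 682 beam indices onto the 240° fan with a start angle of −120° is our assumption:

#include <cmath>
#include <cstddef>
#include <vector>

struct Point3D { double x, y, z; };

// Transforms one 2D scan plane into 3D points according to the matrix above.
// phi: current tilt angle of the PTU (rad); dz: vertical sensor offset (mm);
// ranges: the 682 distance readings of one scan (mm), invalid values already
// interpolated away.
std::vector<Point3D> scanTo3D(const std::vector<double>& ranges,
                              double phi, double dz)
{
    const double kPi   = 3.14159265358979323846;
    const double step  = 0.3515625 * kPi / 180.0;  // angular resolution (360/1024)
    const double start = -120.0 * kPi / 180.0;     // assumed first beam of the fan

    std::vector<Point3D> cloud;
    cloud.reserve(ranges.size());
    for (std::size_t i = 0; i < ranges.size(); ++i) {
        const double d     = ranges[i];
        const double theta = start + static_cast<double>(i) * step;
        Point3D p;
        p.x = d * std::cos(phi) * std::cos(theta) - dz * std::sin(phi / 2.0);
        p.y = d * std::sin(theta);
        p.z = d * std::cos(theta) * std::sin(phi)
            - dz * std::sin(phi / 2.0) * std::cos(phi / 2.0);
        cloud.push_back(p);
    }
    return cloud;
}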

For the second system, we use the data of the robot arm. During the manipulator movements along a straight line in Cartesian coordinates, the data of the TCP (tool centre point) are permanently available. Extending these to the position of the laser scanner sensor yields the following homogeneous matrix:

\begin{pmatrix}
n_x & s_x & a_x & p_x + d_s n_x \\
n_y & s_y & a_y & p_y + d_s n_y \\
n_z & s_z & a_z & p_z + d_s n_z \\
0   & 0   & 0   & 1
\end{pmatrix}

where n is the normal vector, s the orientation vector and a the approach vector, p is the vector between the coordinate origin and the TCP, and d_s is the distance from the TCP to the sensor of the laser range finder. The advantage of this matrix lies in the easy transformation to the main coordinate system, the coordinate system of TASER.
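A minimal C/C++ sketch of this extension (our own illustration, not the TASER code):

#include <array>

// Homogeneous 4x4 pose, row-major: columns are n, s, a and the position p.
using Mat4 = std::array<std::array<double, 4>, 4>;

// Shifts the TCP pose to the laser sensor pose as in the matrix above:
// the position is moved by ds along the normal vector n (first column),
// the orientation part stays unchanged.
Mat4 sensorPose(const Mat4& tcp, double ds)
{
    Mat4 m = tcp;
    for (int i = 0; i < 3; ++i)
        m[i][3] = tcp[i][3] + ds * tcp[i][0];  // p_i + d_s * n_i
    return m;
}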

IV. IMPLEMENTATION

The workflows of both three-dimensional reconstruction systems developed in this paper are similar and are introduced in Fig. 5 for the Pan-Tilt-Unit solution. In both cases, we use the drivers of the laser scanner and acquire the data in C/C++. In addition, the SCIP 2.0 communication protocol [13] is used, which permits us to use the larger laser scanner range (up to 5600 mm) and the MD commands, which guarantee the synchronisation of the laser range finder data with the remaining system. With these commands, the laser scanner first scans the environment, then sends a signal to the host and after that the newly acquired data. Moreover, we made sure that the laser scanner was connected to power early, so that errors during the “warm-up” phase could be excluded.

Fig. 5. Simplified flow-chart of our 3D reconstruction system (main loop: change position of the PTU, data acquisition from the LRF, error correction, registration of the 2D positions into 3D positions, visualisation).

Because the implementations differ, first the system with the Pan-Tilt-Unit and then the one with the robot arm as moving platform will be introduced and discussed. For all our experiments, we use the same table scene, see Fig. 6.

Changing the necessary system parameters like the angle step, the trajectory of the movable platform or the number and choice of the laser beams can be done comfortably from outside the application; an enhancement with a graphical user interface is planned. The whole application is implemented in C/C++ and runs under Linux. With the subsequent processing of the unorganised point clouds in mind, we used the Delaunay triangulation [14], which is based on the Voronoi diagram [15], for a first evaluation of the final outcome. The algorithm used is implemented in Matlab and is BSD-licensed [16]. The second application, based on the robot arm, receives the data in the same way as the first system; however, Java is used for the later offline processing. The distance data are registered and converted into three-dimensional voxels. As a result, it is possible to show the scene in the form of lines or polygons. The visualisation is effected with the help of Java3D. In the next section, we introduce and discuss our results for the 3D reconstruction.

V. EXPERIMENTAL RESULTS

Both developed systems inherit the maximal errors of their single components. The laser scanner error during the “warm-up” phase amounts to 10 - 15 mm. After the “warm-up” phase, we calculated the errors and performed the error analysis common in the physical sciences [17]. First, the error of the single measurements was calculated with equation (2):

s = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}    (2)

where \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i is the average over n = 100 measurements and x_i is the value of a single measurement.

Fig. 6. Original image of the table scene used for all our experiments.

First, the position of the laser scanner is changed step by step by the Pan-Tilt-Unit and the laser data are acquired. The data are the distances from the laser sensor centre to the obstacles in the surroundings. Then the invalid (zero) values, see section II, are corrected by the described windowing method and the data are saved in a simple data structure. The positions of the single voxels are determined by multiplication with the matrix shown in section III. After this, the data are passed on to the main system as [x1, y1, z1, ..., xn, yn, zn]. The main system visualises the voxels, triangles or polygons with OpenGL. The system adapts itself dynamically, so that the number of points can vary depending on the user demands and the settings of the moving platform. The advantage of the present software architecture lies in its modularity and universality: it is easy to modify by exchanging single modules. Furthermore, each module encapsulates its algorithms, and for the integration within the architecture only the input and output of a module have to be adapted to the interface. Thus, the basic construction process is always preserved. This facilitates the debugging of a single component as well as that of the complete system.
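A condensed sketch of this loop (hypothetical driver interfaces; moveTo/scan are illustrative names, not the real PTU or URG APIs; Point3D and scanTo3D refer to the sketch in section III):

#include <cstddef>
#include <vector>

struct PTU { void moveTo(double tiltRad) { /* send tilt command */ } };
struct URG { std::vector<double> scan() { return std::vector<double>(682, 1000.0); } };

// Stop-scan acquisition loop (cf. Fig. 5): change position, acquire one
// scan plane, correct invalid values, transform to 3D, collect the cloud.
std::vector<Point3D> acquireCloud(PTU& ptu, URG& lrf, double tiltMin,
                                  double tiltMax, double tiltStep, double dz)
{
    std::vector<Point3D> cloud;
    for (double phi = tiltMin; phi <= tiltMax; phi += tiltStep) {
        ptu.moveTo(phi);
        std::vector<double> ranges = lrf.scan();
        // crude stand-in for the Shepard correction of eq. (1)
        for (std::size_t i = 1; i + 1 < ranges.size(); ++i)
            if (ranges[i] <= 0.0)
                ranges[i] = 0.5 * (ranges[i - 1] + ranges[i + 1]);
        const std::vector<Point3D> plane = scanTo3D(ranges, phi, dz);
        cloud.insert(cloud.end(), plane.begin(), plane.end());
    }
    return cloud;  // handed on to the OpenGL visualisation
}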

The average error of a single measurement of the laser scanner amounts to 2.736 mm at an average measured value of 1908.69 mm.

\bar{s} = \frac{s}{\sqrt{n}}    (3)
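For illustration, a small C/C++ sketch of this error analysis (assuming the n = 100 raw measurements are already available):

#include <cmath>
#include <vector>

// Error analysis of section V: mean, standard deviation of a single
// measurement (eq. 2) and standard deviation of the mean (eq. 3).
struct ErrorStats { double mean, s, sMean; };

ErrorStats analyse(const std::vector<double>& x)
{
    const double n = static_cast<double>(x.size());
    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= n;

    double sq = 0.0;
    for (double v : x) sq += (v - mean) * (v - mean);
    const double s = std::sqrt(sq / n);    // eq. (2), 1/n as in the paper
    return { mean, s, s / std::sqrt(n) };  // eq. (3)
}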

The standard deviation from the average was 0.274 mm, calculated with equation (3). The experiments have shown that the capture range of the laser range finder lies between 20 and 5000 mm. Beyond 5000 mm, only well-reflecting surfaces are recognised, and only with a large error. At distances between 20 mm and 1 m, the precision is at best 10 mm; beyond that, the maximum error is 1% of the measured distance. The maximal measurement error of the Pan-Tilt-Unit and the repeat accuracy of the MHI PA10-6C amount to 0.0129° and ±0.1 mm respectively.

For the demonstration of the results, the same table scene was used for both systems, see Fig. 6. The results for the laser scanner and the robot arm as moving platform are presented in Fig. 7. It shows the unorganised point cloud with around 70,000 voxels, a line representation as well as first segmentation attempts from different perspectives. In the reconstruction, the background was removed for a better view, so only the table scene is visible. Because of the limited range of 931 mm of the robot arm [11], only part of the cup can be seen. The viewing range of the laser scanner remains the full 240°; the resolution of the arm movement can be adapted. The resulting algorithm is implemented in Java and visualised with Java3D.

Fig. 7. Reconstructed 3D unorganised point cloud, line representation and the first segmentation attempts. The perspectives are changed to give a better view.

Fig. 9. View of Delaunay triangulations with different numbers of unorganised points and at different scales. On the left the point cloud with 62,303 voxels, on the right the one with 104,277 voxels.

Figure 8 presents the result of the laser scanner and Pan-Tilt-Unit based system: an unorganised point cloud with 213,774 voxels. The perspective was changed for a better view. The viewing range of the laser range finder is limited by its construction and amounts to around 124.1°, while the resolution of the Pan-Tilt-Unit can be adapted. Therefore, the maximum viewing range of the complete system amounts to 124.1° × 77.7°. The resulting algorithm is implemented in C/C++ and visualised with OpenGL.

For a first evaluation, we used the results of this system and applied the Delaunay triangulation described in section IV. The results for 62,303 and 104,277 voxels are shown in Fig. 9. A graphical user interface (GUI) with the possibility of changing the parameters for the resolution of the laser scanner and the visualisation of the results is planned for both systems and will be realised in the near future.

Fig. 8. The reconstructed 3D unorganised point cloud, front view at the top and side view at the bottom of this image. The perspective is changed to give a better view.

VI. CONCLUSIONS AND FUTURE WORK

In this paper, we presented two possibilities of simulating a 3D laser scanner based on a 2D laser scanner and different moving platforms. The results are satisfactory in both cases and offer a great benefit for robotics. The derived mathematical bases are correct and permit an application that adapts dynamically to different situations. Both applications are stable and cause no overflow; the performance depends on the number of voxels and is proportional to the resolution. Scanning ranges and options can be changed or adapted on the fly. The temporal performance of the complete system based on the laser scanner and Pan-Tilt-Unit is presented in Fig. 10. Here, the horizontal viewing range of 124.1° was maintained and the vertical resolution was continuously changed, see the diagram. The results for the other system are similar. The laser scanner needs a constant time for a complete scan, in this case 100 ms. The difference between the two systems lies in the moving platform, the resolution and the time required for the aforementioned parameters. Both resulting systems are an alternative to a real 3D laser range finder. Though their use in airborne applications [18] is not possible because the ranges are too short, there are many other interesting possibilities. For indoor and outdoor areas where only short distances are required, this kind of system can be used, for example in robotics. The systems feature a low error rate, are adaptable and easy to modify. In addition, they need no specialised components, as all components can easily be replaced. The introduced systems are lighter and 5 to 10 times less expensive than commercial products. They offer an absolutely

comparable precision and field of view. One disadvantage lies in the time performance: 3D laser range finders are clearly quicker, though to varying degrees depending on their construction. For example, the RIEGL laser scanner achieves a temporal performance of up to 11,000 pts/sec [19]; the presented systems are slower by a factor of ca. 1.6 to 1.8. Our systems offer many advantages and are very attractive for use in the near future. They can be adapted to different requirements by equipping them with different sensors of varying qualities.
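As a rough plausibility check of this factor (our own arithmetic, not a measurement from the paper): the URG-04LX delivers 682 points per 100 ms scan, i.e. at most 682 / 0.1 s = 6,820 pts/sec, and 11,000 / 6,820 ≈ 1.61; the additional stop-scan movement time pushes the effective factor towards 1.8.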

Fig. 10. The temporal performance of the system. The performance depends on the number of voxels and is proportional to the resolution.

The resulting unorganised point clouds can be processed directly. The topic of surface reconstruction has been widespread in CAD applications for several years. There are surface-oriented procedures [20] and volume-oriented methods, which are mostly based on the quite appealing Delaunay triangulation. In this way, object surfaces can be separated from the background, and objects can be recognised and reconstructed; trajectory calculation and collision avoidance can be realised as well. For the handling of objects, a model of the objects is required. This can be achieved through the parallel use of both introduced systems: the view from two different perspectives greatly increases the likelihood of a successful reconstruction of the model. It is possible to use the resulting model to calculate possible grasps [21], which can later be tried by the real robot arm. Thus not only perception but also interaction with the environment becomes possible. Due to the low error susceptibility and the independence of external parameters like lighting conditions, even the interaction in human surroundings and with people becomes realisable under safety aspects. The advantage of this approach is that both systems are independent of each other and can be used separately or in combination. Subsequently, the robot can learn the handling of objects [22] or be extended with a different kind of memory [23]. In the next step, we will try to fuse the data of the laser scanner and the moving platform with the camera images; this also explains the setup with the two cameras visible in Fig. 6. The sensor fusion of the data from the laser scanner and the cameras would further extend the possibilities of

robotics.

REFERENCES

[1] D. Klimentjew, A. Stroh, S. Jockel, J. Zhang, "Real-Time 3D Environment Perception: An Application for Small Humanoid Robots", In Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics (ROBIO), Bangkok, Thailand, pp. 354-359, February 21-26, 2009.
[2] HOKUYO AUTOMATIC CO. LTD., 2009, http://www.sentekeurope.com/PDFs/URG-04LX.pdf
[3] SOKUIKI Sensor Command, http://www.roboken.esys.tsukuba.ac.jp, last call 1/28, 2009.
[4] C. Colombo, D. Comanducci, A. Del Bimbo, "A desktop 3D scanner exploiting rotation and visual rectification of laser profiles", In Proceedings of the 2006 IEEE International Conference on Vision Systems (ICVS06), New York, January 2006.
[5] R. Reulke, A. Wehr, "Mobile Panoramic Mapping Using CCD-line Camera and Laser Scanner with Integrated Position and Orientation System", In H.-G. Maas, D. Schneider (eds.): Proceedings of the ISPRS Working Group V/1, Panoramic Photogrammetry Workshop, Dresden, Germany, February 19-22, ISSN 1682-1750, 2004.
[6] M. Kurisu, Y. Yokokohji, Y. Shiokawa, T. Samejima, "Development of a laser range finder for three-dimensional map building in rubble", Advanced Robotics 19(3): 273-294, 2005.
[7] J. Bedkowski, M. Kretkiewicz, "3D laser range finder simulation based on rotated LMS SICK 200", RISE 2008.
[8] F. Remondino, "From point cloud to surface", International Workshop on Visualization and Animation of Reality-Based 3D Models, February 24-28, Tarasp-Vulpera, 2003.
[9] M. Higuchi, T. Kawamura, T. Kaikogi, T. Murata, M. Kawaguchi, "Mitsubishi Clean Room Robot", Mitsubishi Heavy Industries, 2003.
[10] K.-H. Lee, R. Ehsani, "Comparison of two 2D laser scanners for sensing object distances, shapes, and surface patterns", Elsevier Science Publishers B.V., 2008.
[11] D. Shepard, "A two-dimensional interpolation function for irregularly-spaced data", Proceedings of the ACM National Conference, pp. 517-524, 1968.
[12] P. Besl, N. McKay, "A Method for Registration of 3-D Shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.
[13] SCIP2 communication protocol, SOKUIKI Sensor Command, http://www.roboken.esys.tsukuba.ac.jp, 2009.
[14] B. Delaunay, "Sur la sphère vide", Izvestia Akademii Nauk SSSR, Otdelenie Matematicheskikh i Estestvennykh Nauk, 7:793-800, 1934.
[15] G. Voronoi, "Nouvelles applications des paramètres continus à la théorie des formes quadratiques", Journal für die reine und angewandte Mathematik, 133:97-178, 1907.
[16] L. Giaccari, "Delaunay triangulator", 2008, http://www.mathworks.com
[17] P. R. Bevington, "Data Reduction and Error Analysis in the Physical Sciences", NY, McGraw-Hill, 1969.
[18] H.-G. Maas, "Planimetric and height accuracy of airborne laserscanner data: User requirements and system performance", University of Stuttgart, 2003.
[19] RIEGL, http://www.riegl.com, 2009.
[20] H. Hoppe, T. DeRose, T. Duchamp, J. A. McDonald, W. Stuetzle, "Surface Reconstruction from Unorganized Points", 1992.
[21] T. Baier, J. Zhang, "Learning to grasp everyday objects using reinforcement-learning with automatic value cut-off", IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, USA, October 29 - November 2, 2007.
[22] M. Hueser, T. Baier, J. Zhang, "Learning of demonstrated grasping skills by stereoscopic tracking of human hand configuration", 2006 IEEE International Conference on Robotics and Automation (ICRA), Orlando, Florida, USA, May 15-19, 2006.
[23] S. Jockel, M. Mendes, J. Zhang, A. P. Coimbra, M. Crisóstomo, "Robot Navigation and Manipulation based on a Predictive Associative Memory", In Proceedings of the 2009 IEEE 8th International Conference on Development and Learning (ICDL), Shanghai, China, June 4-7, 2009.
