Mobile Robot Navigation Based on Artificial Landmarks with Machine Vision System

World Applied Sciences Journal 24 (11): 1467-1472, 2013
ISSN 1818-4952
© IDOSI Publications, 2013
DOI: 10.5829/idosi.wasj.2013.24.11.7010

Mobile Robot Navigation Based on Artificial Landmarks with Machine Vision System

Yudin Dmitriy Aleksandrovich, Postolsky Grigoriy Gennadievich, Kizhuk Alexander Stepanovich and Magergut Valeriy Zalmanovich

Belgorod State Technological University Named After V.G. Shukhov, Russia, 308012, Belgorod, Kostyukova Street, 46

Submitted: Aug 10, 2013; Accepted: Sep 7, 2013; Published: Sep 18, 2013

Abstract: The paper describes a machine vision system (MVS) developed by the authors, which identifies artificial landmarks in images from a video camera mounted on a pan-tilt mechanism and calculates the deviation of the robot from the set course. Lines limiting the robot track and tags in the form of two-dimensional bar codes were selected as artificial landmarks. The designed MVS is a hardware-software complex that includes one camera with a pan-tilt mechanism and an onboard computer running software that searches for artificial landmarks in the environment and identifies them in the image. The authors also developed special software that controls the drive wheels of the robot in order to eliminate the course deviation calculated from the information about the artificial landmarks. The paper shows that the proposed machine vision system for mobile robot navigation is economical, has acceptable performance and can be applied in real-time control systems for warehouse automated guided vehicles, service robots, security robots and mobile robots operating in environments hostile and dangerous to human health.

Key words: Mobile robot, Navigation, Bar code, Software, Machine vision system, Image recognition, Artificial landmark

INTRODUCTION

Today an important direction in the task of automatic positioning and navigation of mobile robots is the use of machine vision systems (MVS) with camera sensors [1-3]. The main causes of MVS success are the growth of computing resources and the increasing resolution of cameras, as well as the advent of standardized machine vision libraries, such as the MATLAB Image Processing Toolbox [4], OpenCV [5], reacTIVision [6], etc. Navigation and positioning of mobile robots with machine vision systems are usually carried out with natural or artificial landmarks. The main problem of navigating with natural landmarks is the detection and comparison of image features from the cameras in noisy environments, under changes in the illumination of the observed scene and under the impact of other adverse factors.

It is much easier to detect artificial landmarks [7, 8], as they are designed with a predetermined contrast, size and shape, for example, using two-dimensional bar code technology [9]. Using the information about a detected landmark, the mobile robot can be driven with a local map that already contains these landmarks. The robot calculates its current position and orientation after finding and matching landmarks on the map and in the observed scene. A model of the environment can serve as the map; the map can also be built and expanded as the robot moves. In this work the authors chose the method of navigating a mobile robot by artificial landmarks using a video camera, which provides flexibility to change the route, is undemanding to the quality of the floor, has low equipment cost, is easy to install and is simpler to implement than orientation by natural landmarks.

Corresponding Author: Yudin Dmitriy Aleksandrovich, Belgorod State Technological University Named After V.G. Shukhov, Russia, 308012, Belgorod, Kostyukova Street, 46. Tel: +79202007395, E-mail: [email protected].


Fig. 1: Examples of artificial landmarks: (a) QR code, (b) Aztec code, (c) DataMatrix code, (d) PDF417 code, (e) "Amoeba" code

Fig. 2: A simplified block diagram of mobile robot control

The use of artificial landmarks based on two-dimensional bar codes (Fig. 1) [9] in workshop and storage facilities allows the mobile robot to track its position in the environment efficiently and reliably [10].

Vision System for Mobile Robot Control: Fig. 2 shows a simplified functional diagram of a mobile robot control system based on the information from the machine vision system. The input parameters of the system are:

M - method of movement:

The movement to a landmark-code;
The movement parallel to a landmark-line;
The movement perpendicular to a landmark-line;

L - track deviation from the landmark;
ID - desired landmark code.

If the desired landmark code is not recognized in the image, the Camera positioning system block generates a control signal to rotate the camera toward another region and the Robot motion control system block produces no control action, as sketched below.
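The following C++ fragment is a minimal sketch of this top-level decision: search for the desired landmark with the camera only, and enable motion control once it is found. The type and function names (LandmarkObservation, controlCycle, etc.) are illustrative assumptions, not the authors' published code.

```cpp
#include <optional>

enum class MoveMethod { ToLandmarkCode, ParallelToLine, PerpendicularToLine }; // the parameter M

struct LandmarkObservation {
    int id;        // decoded landmark code
    double x;      // relative x-coordinate of the landmark in the image, in [-1; 1]
};

struct ControlStep {
    bool rotateCameraOnly;  // true: keep searching, wheel drive receives no control
    double hSet;            // pan-servo master control
    double vSet;            // tilt-servo master control
};

// One iteration of the control cycle: if the desired landmark ID is not seen,
// only the camera positioning system acts; otherwise both subsystems run in parallel.
ControlStep controlCycle(const std::optional<LandmarkObservation>& obs,
                         int desiredId, double searchPanStep)
{
    ControlStep step{};
    if (!obs || obs->id != desiredId) {
        step.rotateCameraOnly = true;
        step.hSet = searchPanStep;   // sweep the camera toward another region
        step.vSet = 0.0;
        return step;                 // robot motion control produces no action
    }
    step.rotateCameraOnly = false;   // landmark found: motion control is enabled
    step.hSet = obs->x;              // pan correction toward centering the landmark
    step.vSet = 0.0;
    return step;
}
```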


If the landmark code is recognized, the Image recognition block generates the landmark coordinates in the image. Then the motion control system and the camera positioning system start working in parallel. The camera positioning system is designed to minimize the deviation of the focal axis of the camera from the beam drawn from the camera lens to the artificial landmark. It generates the control signals listed below (a minimal sketch of this step follows the list):

HSET - master control of the pan-servo;
VSET - master control of the tilt-servo.
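One possible way to form such setpoints is a simple proportional correction that drives the landmark toward the image centre. The gains and the relative-coordinate convention (origin at the image centre, coordinates in [-1; 1], as in Fig. 6) are assumptions for illustration, not the authors' exact scheme.

```cpp
// Illustrative sketch of the camera positioning step.
struct PanTiltSetpoint {
    double hSet;  // pan-servo master control, degrees
    double vSet;  // tilt-servo master control, degrees
};

PanTiltSetpoint updateCameraSetpoint(PanTiltSetpoint current,
                                     double xRel, double yRel,
                                     double kPan = 5.0, double kTilt = 5.0)
{
    // The farther the landmark is from the image centre, the larger the
    // commanded change of the camera orientation angles.
    current.hSet += kPan  * xRel;   // horizontal offset -> pan correction
    current.vSet += kTilt * yRel;   // vertical offset   -> tilt correction
    return current;
}
```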


As a result, the outputs of the Pan-tilt camera mechanism block are the camera orientation angles H and V. In turn this leads to a change of the image I at the output of the Camera block located in the feedback channel of the overall system. The Robot motion control system block converts the input variables M, L and C into the robot deviation angle from the route E, which in turn is input to the PID controller forming the control action U for the wheel drive of the robot [11]. The Wheel drive block generates the output signal W - the current turning angle of the robot wheels. As a result, the Robot block changes the output value Y - the current position of the robot in the environment. This also changes the image I at the output of the Camera block.

The landmark-line recognition algorithm is implemented in C++ using the OpenCV library. It is designed to recognize the boundaries of the route marked by single-color prohibited zones. The main steps of the algorithm, illustrated by the sketch after this list, are:

Obtaining an image from the video camera (Fig. 3a);
Selection of areas of a predetermined color (using a threshold transformation);
Removal of noise arising during the selection;
Application of the morphological dilation operation to the detected regions;
The same actions to highlight the areas with the color of the route: color selection, noise reduction and dilation (Fig. 3b);
Location of the route boundaries by overlaying the dilated areas with the color of the borders on the dilated areas with the color of the route;
Location of the lines using the Hough transform [12] on the resulting image (Fig. 3c).
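A minimal OpenCV sketch of these steps is given below. The HSV ranges, kernel size and Hough transform parameters are illustrative assumptions and are not taken from the authors' implementation.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Sketch of the route-boundary recognition: color thresholding, noise removal,
// dilation, mask overlay and line extraction with the Hough transform.
std::vector<cv::Vec4i> findRouteBoundaries(const cv::Mat& frame)
{
    cv::Mat hsv;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

    // Threshold transformation: select the border color and the route color
    cv::Mat borderMask, routeMask;
    cv::inRange(hsv, cv::Scalar(0, 80, 80),  cv::Scalar(10, 255, 255), borderMask); // e.g. red borders
    cv::inRange(hsv, cv::Scalar(90, 40, 40), cv::Scalar(130, 255, 255), routeMask); // e.g. blue route

    // Noise removal (morphological opening) and dilation of both masks
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::morphologyEx(borderMask, borderMask, cv::MORPH_OPEN, kernel);
    cv::morphologyEx(routeMask,  routeMask,  cv::MORPH_OPEN, kernel);
    cv::dilate(borderMask, borderMask, kernel);
    cv::dilate(routeMask,  routeMask,  kernel);

    // Route boundaries: border-colored regions overlapping the dilated route area
    cv::Mat boundaries;
    cv::bitwise_and(borderMask, routeMask, boundaries);

    // Probabilistic Hough transform to extract the boundary lines
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(boundaries, lines, 1, CV_PI / 180, 50, 40, 10);
    return lines;
}
```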


Fig. 3: Illustration of the robot route boundaries recognition algorithm

The algorithm selects objects by thresholding a fixed range of hue, saturation and brightness for each of the predetermined colors. Such a method speeds up the algorithm, but it is sensitive to changes in the illumination of the observed scene, to glare from bright light sources and to changes in the color reproduction of the camera.

Detection of artificial landmarks based on two-dimensional bar codes may be realized in different ways. Landmarks can be detected with template-matching methods based on singular points and their descriptors, such as the SURF method [13]. However, such techniques are too slow for this task and do not always detect the singular points of the image correctly [14].


Fig. 4: Screen form of the software for detecting and reading a QR code

Fig. 5: Illustration of the robot course deviation from the landmark

In this paper we apply an artificial landmark recognition algorithm based on the analysis of the brightness characteristics of image cross-sections. The recognition algorithm for an artificial landmark in the form of a two-dimensional bar code consists of the following steps (see the sketch after this list):

Receiving an image from the camera and converting it into a grayscale image;
Selection of the black spots of the image;
Analysis of the pixel brightness of horizontal image sections: measurement of the pulse amplitude and period;
Detection of areas with alternating pulses having a predetermined amplitude and period.
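The fragment below sketches the cross-section analysis on a single grayscale row: the row is binarized and the widths (periods) of the alternating dark/light pulses are measured, keeping stretches where the period stays within predetermined limits. The threshold value and the admissible period range are assumptions, not the authors' exact parameters.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

struct PulseRegion { int xStart, xEnd; };  // candidate bar-code span along the row

std::vector<PulseRegion> findBarcodeCandidates(const cv::Mat& gray, int row,
                                               int minPeriod = 4, int maxPeriod = 40,
                                               int minPulses = 5, uchar threshold = 100)
{
    std::vector<PulseRegion> regions;
    const uchar* p = gray.ptr<uchar>(row);

    int runStart = 0, regionStart = 0, pulses = 0;
    bool dark = p[0] < threshold;

    for (int x = 1; x < gray.cols; ++x) {
        bool d = p[x] < threshold;
        if (d == dark) continue;                 // still inside the same pulse

        int period = x - runStart;               // width of the finished pulse
        if (period >= minPeriod && period <= maxPeriod) {
            if (pulses == 0) regionStart = runStart;
            ++pulses;                            // one more acceptable pulse
        } else {
            if (pulses >= minPulses)
                regions.push_back({regionStart, x});
            pulses = 0;                          // period out of range: restart
        }
        runStart = x;
        dark = d;
    }
    if (pulses >= minPulses)
        regions.push_back({regionStart, gray.cols - 1});
    return regions;
}
```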

Fig. 4 shows the result of detecting and reading an artificial landmark in the form of a QR code with the developed software.


Fig. 6: Image coordinate system

When the robot moves directly toward the landmark, the robot deviation angle from the route E is calculated as the angle between the robot axis and the landmark axis drawn from the camera through the center of the landmark (Fig. 5). The robot deviation angle E is calculated as follows:

E = α1 + α2,

where α1 is the current rotation angle of the camera in the horizontal plane and α2 is the angle between the camera's focal axis and the landmark axis.

The image coordinate system is shown in Fig. 6. The origin is at the center of the image. Coordinates are expressed as relative coordinates from Xmin = -1 to Xmax = 1 in order not to bind the algorithm to the current image resolution. The angle α2 is calculated by the following formula:

α2 = arctg( (X / Xmax) · tg(γ / 2) ),

where X is the relative x-coordinate of the landmark and γ is the horizontal viewing angle of the camera.
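As a small worked check of these formulas, the following C++ sketch computes E from the camera pan angle and the landmark's relative image coordinate; the function name and the degree convention are assumptions for illustration.

```cpp
#include <cmath>

// E = alpha1 + alpha2, alpha2 = arctg((X / Xmax) * tg(gamma / 2)), Xmax = 1.
// alpha1Deg: current horizontal rotation angle of the camera, degrees.
// xRel:      relative x-coordinate of the landmark, in [-1; 1].
// gammaDeg:  horizontal viewing angle of the camera, degrees.
double courseDeviationDeg(double alpha1Deg, double xRel, double gammaDeg)
{
    const double kPi = 3.14159265358979323846;
    const double deg2rad = kPi / 180.0;
    double alpha2 = std::atan(xRel * std::tan(0.5 * gammaDeg * deg2rad)) / deg2rad;
    return alpha1Deg + alpha2;
}
```

For example, with the landmark at the right edge of the frame (X = 1) the formula gives α2 = γ/2, i.e. half the horizontal field of view, as expected.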

CONCLUSION

The proposed machine vision system was tested on a mobile robot with a differential drive at the Technical Cybernetics department of BSTU n.a. V.G. Shukhov. The on-board computer was a laptop with an Intel Core 2 Duo processor with a core speed of 1.3 GHz running a 32-bit Windows 7 operating system. The developed software recognizes artificial landmarks in images with a size of 640x320 pixels. The recognition speed is up to 30 frames per second for a landmark of the "Amoeba" type and up to 10 frames per second for a QR code. It can be concluded that the MVS performance is suitable for use in a real-time mobile robot control system.

Summary: The article describes the developed machine vision system for mobile robot navigation. The MVS identifies artificial landmarks in images from a video camera with a pan-tilt mechanism and calculates the deviation of the robot from the set course. The robot track boundary lines and tags in the form of two-dimensional bar codes are considered as landmarks. An algorithm based on the analysis of the brightness characteristics of image cross-sections was used to detect artificial landmarks in the form of two-dimensional bar codes. An algorithm based on threshold image transformations, morphological dilation and the Hough transform was used to recognize the robot track boundary lines. The proposed machine vision system for mobile robot navigation based on artificial landmarks is cheap and can be used as part of control systems of warehouse automated guided vehicles, service robots, security robots and mobile robots operating in environments hostile and dangerous to human health.

ACKNOWLEDGEMENTS

The work was performed under RFBR grant # 12-07-97526-r_tsentr_a "Information and computing intelligent control system of robotic vehicles to solve logistical tasks of industrial and agricultural production" and grant # B-10/13 of the Strategic development program of BSTU n.a. V.G. Shukhov for 2012-2016 (# 2011-PR-146).

REFERENCES

1. Kragic, D. and M. Vincze, 2010. Vision for Robotics. Foundations and Trends in Robotics, Now Publishers, 1(1): 1-78.
2. Tkachenko, A.I., 2008. Variant navigatsii mobilnogo robota s pomoshchyu kamery [Variants of mobile robot navigation with a camera]. Journal of Computer and Systems Sciences International, 4: 139-145.
3. Petukhov, S.V., 2008. Metody avtonomnoy navigatsii pri popyatnom dvizhenii robota po zapomnennym oriyentiram na proydennoy trayektorii [Methods of autonomous navigation for retrograde robot movement using memorized landmarks on the traversed path]. Mekhatronika, Avtomatizatsiya, Upravleniye [Mechatronics, Automation, Control], 8(89): 30-34.
4. Gonzalez, R.C., 2009. Digital Image Processing Using MATLAB (2nd Ed.). Lavoisier, pp: 826.
5. Bradski, G. and A. Kaehler, 2008. Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, pp: 580.
6. reacTIVision - a toolkit for tangible multi-touch surfaces [web site]. Available at: http://reactivision.sourceforge.net/.
7. Pau, L.F. and P.S.P. Wang, 1993. Handbook of Pattern Recognition and Computer Vision. World Scientific, pp: 984.
8. Gutierrez, J. and B. Armstrong, 2008. Precision Landmark Location for Machine Vision and Photogrammetry: Finding and Achieving the Maximum Possible Accuracy. Springer, pp: 162.
9. Akchurin, V.A., 2012. Razrabotka sistemy rasshirennoy realnosti dlya modelirovaniya trekhmernykh stsen [Development of an augmented reality system to simulate three-dimensional scenes]. Available at: http://masters.donntu.edu.ua/2012/iii/akchurin/diss/index.htm.
10. Rubanov, V.G., 2011. Sistemnyy podkhod k proyektirovaniyu upravlyayemykh mobilnykh logisticheskikh sredstv, obladayushchikh svoystvom zhivuchesti [The system approach to the design of controlled mobile logistics vehicles with the survivability property]. Nauchnyye Vedomosti BelGU. Seriya: Istoriya, Politologiya, Ekonomika, Informatika [Scientific Bulletin of BSU. Series: History, Political Science, Economics, Computer Science], 1(96), V. 17/1, pp: 176-187.
11. Denisov, A.Y., S.A. Griban and D.A. Yudin, 2012. Razrabotka sistemy soglasovannogo upravleniya dvigatelyami differentsialnogo privoda mobilnoy platformy [Development of a coordinated control system for the motors of a mobile platform differential drive]. Mezhdunarodnaya nauchno-tekhnicheskaya konferentsiya molodykh uchenykh BGTU im. V.G. Shukhova [International Scientific and Technical Conference of Young Scientists, BSTU n.a. V.G. Shukhov], Belgorod.
12. Vizilter, Y.V., S.Y. Zheltov, A.V. Bondarenko, M.V. Ososkov and A.V. Morzhin, 2010. Obrabotka i analiz izobrazheniy v zadachakh mashinnogo zreniya: Kurs lektsiy i prakticheskikh zanyatiy [Image processing and analysis in machine vision tasks: a course of lectures and workshops]. Moscow, Fizmatkniga, pp: 672.
13. Bay, H., T. Tuytelaars and L. Van Gool, 2006. SURF: Speeded Up Robust Features. European Conference on Computer Vision, Belgium, pp: 404-417.
14. Shapovalov, A.V., A.A. Shevkunov, V.V. Protsenko and D.A. Yudin, 2013. Primeneniye metoda SURF dlya identifikatsii obyekta po shablonu v sisteme raspoznavaniya izobrazheniy [Application of the SURF method for template-based object identification in an image recognition system]. Mezhdunarodnaya nauchno-tekhnicheskaya konferentsiya molodykh uchenykh BGTU im. V.G. Shukhova [International Scientific and Technical Conference of Young Scientists, BSTU n.a. V.G. Shukhov], Belgorod.
