Autonomous Gait Planning for a Hexapod Robot in Unstructured Environments based on 3D Terrain Perception

The 14th IFToMM World Congress, Taipei, Taiwan, October 25-30, 2015 DOI Number: 10.6567/IFToMM.14TH.WC.OS13.050

Xun Chai*, Feng Gao†, Yang Pan‡, Yi-lin Xu§
Shanghai Jiao Tong University, Shanghai, P.R. China
*[email protected]  †[email protected]  ‡[email protected]  §[email protected]

Abstract— Legged robots have extremely important applications in unstructured environments, e.g. accomplishing rescue and detection tasks at earthquake and nuclear-accident sites instead of human beings. In such environments, legged robots must be able to plan gaits autonomously based on the site environment, without detailed instructions from a human operator. In this paper, we present an autonomous gait planning method for the hexapod robot "Octopus", which will be applied in unstructured environments. The method is based on the terrain map perceived by a 3D vision system. Two indices are defined to describe terrain conditions. An important part of the method is a foothold selection function, which considers the influence of the terrain conditions. The detailed walking process of Octopus on rough terrain is also discussed. Finally, we apply our method to Octopus and show that it allows Octopus to pass through uneven terrain autonomously.

Keywords: Autonomous gait planning, Hexapod robot, Unstructured environments, 3D terrain map

I. Introduction
Legged robots have excellent terrain adaptability and are able to accomplish tasks in harsh environments instead of human beings, such as opening valves, clearing obstacles, carrying heavy equipment and detecting dangerous situations. A human operator in a safe place can send simple task instructions to a legged robot in the field, but it is impossible to send detailed data to the robot because of the restrictions of remote transmission. For this reason, the legged robot must be able to adjust its feet and body autonomously to adapt to complex terrains. Some studies have made significant progress on terrain adaption for legged robots. The famous quadruped robot BigDog [1], developed by Boston Dynamics, can pass through rough terrains using the trot gait based on a dynamic balance algorithm. RHex [2], [3] is a compliant-legged hexapod robot that can traverse rough terrains without complex control algorithms; the compliant legs reduce the impact of external forces on RHex and provide it with impressive terrain adaptability. Yasuhiro Fukuoka, Hiroshi Kimura and Avis H. Cohen [4] used a PD controller at the joints of a quadruped robot. The PD controller constructs a virtual spring-damper system as the visco-elasticity model of a muscle, which plays an important role in body stabilization and system energy storage. Apart from bionic mechanism design, some researchers have also developed a novel control algorithm, the central pattern generator (CPG) [5], inspired by the principle of the biological control scheme. The quadruped robot Tekken [6] can walk at a medium speed on irregular terrain based on a neural system model consisting of a CPG and reflexes. Most of the related studies satisfy the requirement of adaption to complex terrains by using balance control algorithms and special leg design concepts. In recent years, thanks to the vast developments of vision sensors and force sensors, control methods combined with these sensors have been developed, which provide legged robots with sensing ability. Once it has a good knowledge of the site environment, a legged robot can select a safe path and a set of appropriate footholds, then plan the feet and body motions effectively in order to traverse rough terrains with high stability and velocity and low energy consumption. The use of sensors, especially vision sensors and force sensors, plays an important role in the robotics field and is a higher-level application of biological engineering. There has been some work concerning motion planning for legged robots based on vision sensors. LittleDog [7], [8] uses a laser scanning system and an external motion capture system to model a high-resolution terrain map around the robot, and the terrain map is then processed extensively. The control strategy of LittleDog includes three parts, a system that learns optimal foothold choices, a body trajectory optimizer and a floating-base inverse dynamics controller, which allows LittleDog to perform fast locomotion over rough terrain. Stephane Bazeille et al. [9] proposed a control scheme for the quadruped robot HyQ, which performs goal-oriented navigation on unknown rough terrain using inertial measurement data and stereo vision. The six-legged walking robot DLR [10], [11], [12] can navigate on rough terrains. The control system of DLR uses the SGM method to build a digital elevation map, then defines a traversal index and plans the path considering the information of leg dimensions, the body posture and the obstacle distances.
In this paper, we propose an autonomous gait planning method for the hexapod robot Octopus, shown in Fig. 1, based on the site terrain detected by a 3D vision system. The structure of the paper is organized as follows. In section 2, we present a description of the robot system, including the mechanism structure of Octopus and the terrain map modeling. The detailed gait planning method is proposed in section 3. Section 4 describes the processes and results of the experiment. The paper is concluded in section 5.

Fig. 1. The Octopus robot

II. Description of the Robot System
A. Prototype
The hexapod robot is a six-DOF parallel-parallel moving platform that integrates walking and manipulating, mainly designed for handling emergency tasks in harsh environments. We call it Octopus, because its movements resemble those of a real octopus. Octopus has a hexagonal body with six symmetrically distributed legs. All the legs have the same mechanism structure and dimensions, so Octopus is isotropic in the six leg directions. Every leg is a parallel mechanism with three chains, as shown in Fig. 2. The U2P2S2 chain and the U3P3S3 chain have the same structure and dimensions; each consists of a universal joint, a prismatic joint and a spherical joint. The third chain, U1P1, consists of a universal joint and a prismatic joint and mainly bears the external force. All the prismatic joints are actuated by motors, and the linear motion of each prismatic joint is achieved by a ball-bearing screw. Every leg of Octopus has 3 DOFs in space. To enable sophisticated terrain adaption, a spherical joint Sf surrounded by a spring connects the foot to the leg and allows the foot to rotate ±35° relative to the leg. This parallel-parallel mechanism design not only fulfills the DOF requirement but also provides Octopus with outstanding walking stability, load capacity and terrain adaption.

Fig. 2. The leg structure (showing universal joints U1, U2, U3; motor-driven prismatic joints P1, P2, P3; spherical joints S2, S3; and the foot spherical joint Sf)

Octopus will be applied in environments inaccessible to human beings to perform rescue and detection tasks. Considering the restrictions of remote transmission, a certain degree of intelligence is necessary for Octopus to choose appropriate footholds and plan the gait trajectory. As a first step towards providing Octopus with visual perception, a 3D vision system is mounted as shown in Fig. 3. In the real application environment, a human operator sends task instructions to the host computer using remote control equipment. The remote control equipment communicates with the host computer via a Wi-Fi network created by the host computer. The 3D vision system (a Microsoft Kinect) is mounted at the top of Octopus and provides the distances of objects within the detection area; it is connected to the host computer via USB. The host computer also sends command instructions via Wi-Fi to the slave computer, which runs a real-time Linux OS. After planning the gait, the slave computer sends the planned data to the drivers via EtherCAT.

Fig. 3. Octopus with the 3D vision system (labeled components: host computer, 3D vision system, slave computer, ball-bearing screw, ankle, spring, foot)

B. Modeling of the Grid Terrain Map
Accurate 3D coordinates of the terrain around Octopus, which can be obtained from the depth image captured by the 3D vision system, are essential for autonomous gait planning. The original 3D coordinates, expressed in the 3D vision sensor coordinate system, need to be transformed into the Octopus coordinate system before planning the gait. For this, the mounting position and orientation of the 3D vision system relative to the robot must be identified. The origins of the Octopus coordinate system and the 3D vision sensor coordinate system are both difficult to measure, so the geometric relationship between the two coordinate systems, represented by an identification matrix R_V T, can hardly be obtained by direct measurement. An indirect method is used instead to solve the identification problem. We use the ground as a reference target, as Fig. 4 shows. The position and orientation of the ground with respect to Octopus can be computed using the kinematics model, and its position and orientation with respect to the 3D vision system can be computed by processing the point cloud. The ground with respect to Octopus can be transformed into the 3D vision sensor coordinate system by the matrix R_V T; the transformed ground is called the theoretical ground, while the ground detected by the 3D vision system is the measured ground. R_V T can be calculated from the condition that the theoretical ground and the measured ground coincide.

Fig. 4. The identification method

The raw terrain map consists of a large amount of point-cloud data (already transformed into the Octopus coordinate system by R_V T), which takes the computer a long time to store and process. For this reason, a grid terrain map is used to decrease the amount of data and increase the speed of map updating and processing. The grid terrain map is structured as a set of regular three-dimensional grids. The underside of each grid is a square, and the grid height denotes the real height of the terrain. Considering the performance of Octopus, we set the dimensions of the square to 0.025 m × 0.025 m, and the whole map to 120 × 120 grids, so the real size of the grid terrain map is 3 m × 3 m. All the point-cloud data whose x coordinates are less than 3 m and whose z coordinates are in the range of -1.5 m to +1.5 m should be mapped into the corresponding grids. The grid height is obtained by averaging the y coordinates of all the points located in the same grid. Two kinds of unreasonable grids may exist and degrade the precision of the gait planning: wrong data caused by environment noise and defects of the 3D vision system itself, and missing portions caused by the inevitable occlusions in the sensor's line of sight. The grid terrain map therefore has to be processed further to exclude unreasonable grids. The refresh rate of the grid map is about 20 FPS. Fig. 5 shows the real terrain around Octopus and the grid terrain map modeled by the above method (pipeline: real terrain → point cloud → grid terrain map building → processing → grid terrain map after processing).

Fig. 5. Modeling of the grid terrain map
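As a concrete illustration, the grid-map construction described above can be sketched as follows. The cell size (0.025 m), map size (120 × 120) and point filter (x < 3 m, z in ±1.5 m) follow the paper; the lower bound x ≥ 0 for valid points and the NaN marking of empty cells are assumptions of this sketch.

```python
import numpy as np

CELL = 0.025   # grid square edge length [m]
SIZE = 120     # grids per side (3 m / 0.025 m)

def build_grid_map(points):
    """points: (N, 3) array of (x, y, z) in the Octopus coordinate system,
    with y the height. Returns a SIZE x SIZE array of cell heights,
    NaN where no point fell into the cell."""
    heights = np.full((SIZE, SIZE), np.nan)
    sums = np.zeros((SIZE, SIZE))
    counts = np.zeros((SIZE, SIZE))
    for x, y, z in points:
        # keep points with x in [0 m, 3 m) and z in [-1.5 m, +1.5 m)
        if not (0.0 <= x < 3.0 and -1.5 <= z < 1.5):
            continue
        i = int(x / CELL)            # row index along x
        j = int((z + 1.5) / CELL)    # column index along z
        sums[i, j] += y
        counts[i, j] += 1
    mask = counts > 0
    heights[mask] = sums[mask] / counts[mask]  # cell height = mean y
    return heights
```

In practice the per-point loop would be vectorized (e.g. with np.add.at) to sustain the roughly 20 FPS refresh rate mentioned above.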

III. Autonomous Gait Planning Method
Octopus uses the tripod gait when it walks, as Fig. 6 shows, which is quite advantageous for hexapod robots. The tripod walking gait keeps three non-adjacent feet touching the ground at all times, so the center of mass (COM) can easily be located inside the support triangle.

Fig. 6. The tripod walking gait

The gait trajectory is very important for walking stability and speed. The rectangular trajectory is typical and quite useful, as shown in Fig. 7. The determination of the rectangular trajectory only needs three parameters, the step length a, the height of lifting foot b and the height of falling foot c, which simplifies the planning process.
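The three-parameter rectangular swing trajectory can be written as a short list of corner waypoints, lift by b, advance by a, descend by c. The waypoint representation below (relative to the current foothold, before any interpolation into joint-space set-points) is an assumption of this sketch.

```python
def rectangular_trajectory(a, b, c):
    """Return the corner waypoints (forward, height) of the rectangular
    swing path of Fig. 7, relative to the current foothold."""
    return [(0.0, 0.0),   # current foothold
            (0.0, b),     # lift foot by the height of lifting foot b
            (a, b),       # move forward by the step length a
            (a, b - c)]   # fall by the height of falling foot c
```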

Fig. 7. The rectangular gait trajectory (showing the leg, the foot, the step length a, the height of lifting foot b and the height of falling foot c)

Fig. 8. Description of the terrain condition (grid terrain map with the current foothold and the next foothold)

The step length a should be as long as possible to increase the walking speed. The maximum step length Smax of Octopus is restricted by the working space of the foot. The index LR in formula (1) is defined to describe the influence of Smax on a:

LR = sqrt[ (x(i,j) − x(i0,j0))^2 + (y(i,j) − y(i0,j0))^2 ] / Smax    (1)

where x(i,j) and y(i,j) are the coordinates of the next foothold, and x(i0,j0) and y(i0,j0) are the coordinates of the current foothold. LR must be less than 1 according to formula (1); otherwise it is an error value. On the other hand, terrain conditions should be considered when selecting the appropriate foothold in order to avoid foot slippage and maintain high stability. Two indices, the terrain inclination θ in formula (2) and the terrain roughness R in formula (3), are employed to describe terrain conditions:

θ = max( |y(i,j) − y(m,n)| / Δ ),  m = i−1, i, i+1;  n = j−1, j, j+1    (2)

where (m,n) is not equal to (i,j). As Fig. 8 shows, y(i,j) is the height of the next foothold grid, y(m,n) is the height of a grid around the next foothold grid, and Δ is the distance between two adjacent grids. There are 8 other grids around the next foothold grid, so 8 inclinations can be computed, and the maximum one is chosen as the terrain inclination of the next foothold grid. As Fig. 8 shows, the terrain roughness R of the next foothold grid can be calculated from formula (3):

R = sqrt( (1/8) Σ [y(i,j) − y(m,n)]^2 ),  m = i−1, i, i+1;  n = j−1, j, j+1    (3)

where again (m,n) is not equal to (i,j).
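The two terrain indices of formulas (2) and (3) can be computed for a candidate cell as follows. This sketch assumes an interior cell (all 8 neighbours exist) and uses the paper's 0.025 m grid spacing as Δ.

```python
import numpy as np

DELTA = 0.025  # distance between adjacent grids [m], the paper's cell edge

def terrain_indices(height, i, j):
    """height: 2-D array of cell heights; (i, j): candidate foothold cell
    (assumed interior). Returns (inclination theta, roughness R)."""
    diffs = [height[m, n] - height[i, j]
             for m in (i - 1, i, i + 1)
             for n in (j - 1, j, j + 1)
             if (m, n) != (i, j)]          # the 8 surrounding cells
    theta = max(abs(d) for d in diffs) / DELTA       # formula (2)
    rough = np.sqrt(np.mean(np.square(diffs)))       # formula (3)
    return theta, rough
```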

The selection function of the next foothold is defined by equation (4). The foot should be placed at the furthest feasible foothold to enable Octopus to walk at high speed, and a bigger LR results in a longer step length. Minimum inclination and roughness are desired for the next foothold, which helps avoid foot slippage. So the grid (i,j) with the maximum value of E(i,j) in formula (4) is chosen as the next foothold:

E(i,j) = LR + θmax/θ + Rmax/R    (4)

where θmax and Rmax are the maximum values of the inclination and roughness of the terrain. Then the step length a can be computed using formula (5):

a = |x(i,j) − x(i0,j0)|    (5)

The height of lifting foot b can be calculated from formula (6), where ymax is the height of the highest grid between the current foothold and the next foothold, as Fig. 9 shows; δs is added to ymax in order to avoid interference with the terrain caused by random errors. The height of falling foot c is determined by formula (7), where y(i,j) is the height of the next foothold:

b = ymax + δs    (6)

c = |ymax − y(i,j)|    (7)
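The foothold selection described above can be sketched as a scoring loop over candidate cells. The candidate list format, the Smax value and the epsilon guard against division by zero in formula (4) are assumptions of this sketch, not from the paper.

```python
import math

S_MAX = 0.4   # assumed maximum step length [m], set by the foot workspace
EPS = 1e-6    # guard against division by zero for perfectly flat cells

def select_foothold(candidates, current, theta_max, r_max):
    """candidates: list of (x, y, theta, R) tuples for reachable grid cells;
    current: (x0, y0) of the current foothold. Returns the (x, y) of the
    candidate maximising E = LR + theta_max/theta + R_max/R."""
    best, best_e = None, -math.inf
    for x, y, theta, r in candidates:
        lr = math.hypot(x - current[0], y - current[1]) / S_MAX  # formula (1)
        if lr >= 1.0:        # outside the foot workspace: error value
            continue
        e = lr + theta_max / max(theta, EPS) + r_max / max(r, EPS)  # formula (4)
        if e > best_e:
            best, best_e = (x, y), e
    return best
```

Note how the score trades off reach (bigger LR) against terrain quality (smaller θ and R), exactly the balance formula (4) expresses.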

After obtaining the gait trajectory, the positions of the leg actuators (prismatic joints) must be computed to make Octopus walk along the planned trajectory. The inverse kinematics is needed, and our previous work [13], [14], [15] established the kinematics model of Octopus. Fig. 10 shows the kinematic model of a single leg; the position of the spherical joint Sf can be calculated from the gait trajectory. l1, l2 and l3 are the lengths of the prismatic joints P1, P2 and P3. The geometric constraint equation of the inverse kinematics is equation (8):

[x y z 1]^T = L_A [Sfx Sfy Sfz 1]^T    (8)

where L_A is the transformation matrix from the ankle coordinate system (A-CS) to the leg coordinate system (L-CS), Sfx, Sfy and Sfz are the coordinates of Sf with respect to A-CS, and (x, y, z) denotes the position of Sf with respect to L-CS.

Fig. 9. Model of the trajectory planning

Fig. 10. Kinematic model of a single leg (joints U1, U2, U3, P1, P2, P3, S2, S3 and Sf; rotation angles α1, β1 of U1; prismatic lengths l1, l2, l3; frames L and A)

Formula (9) gives the detailed expression of L_A, where α1 and β1 are the rotation angles of U1 about two perpendicular axes:

        | cosα1·cosβ1    −cosα1·sinβ1   sinα1     l1·cosα1·cosβ1 |
L_A =   | sinβ1           cosβ1         0         l1·sinβ1       |    (9)
        | −sinα1·cosβ1    sinα1·sinβ1   cosα1    −l1·sinα1·cosβ1 |
        | 0               0             0         1              |

The solutions of the inverse kinematics in formula (10) can be obtained by solving equation (8):

l1 = sqrt( x^2 + y^2 + z^2 − Sfy^2 − Sfz^2 ) − Sfx

β1 = arcsin( y / sqrt((l1 + Sfx)^2 + Sfy^2) ) − arcsin( Sfy / sqrt((l1 + Sfx)^2 + Sfy^2) )    (10)

α1 = arctan( [ Sfz·x − ((l1 + Sfx)·cosβ1 − Sfy·sinβ1)·z ] / [ ((l1 + Sfx)·cosβ1 − Sfy·sinβ1)·x + Sfz·z ] )

L_A can be calculated by substituting α1, β1 and l1 into equation (9). l2 and l3 can then be obtained from the following equations:

l2 = |U2S2| = ‖ L_A · A_S2 − L_U2 ‖
                                           (11)
l3 = |U3S3| = ‖ L_A · A_S3 − L_U3 ‖

where L_U2, L_U3, L_S2 and L_S3 are the positions of joints U2, U3, S2 and S3 with respect to L-CS, and A_S2 and A_S3 are the positions of joints S2 and S3 with respect to A-CS. Equation (12) is used to transform the gait trajectory from the ground coordinate system (G-CS) to the L-CS:

L_Γ = L_R T · R_G T · G_Γ    (12)

where G_Γ represents the gait trajectory in the G-CS and L_Γ represents the gait trajectory in the L-CS. R_G T is the transformation matrix from the G-CS to the robot coordinate system (R-CS), and L_R T is the transformation matrix from the R-CS to the L-CS. R_G T is related to the body trajectory, and L_R T is a constant matrix that can be calculated from the dimensions of Octopus.

As Fig. 6 shows, a whole walking gait of Octopus consists of two stages. In stage one, Octopus moves feet 1, 3, 5, while feet 2, 4, 6 are fixed on the ground. In this stage, the prismatic joints of legs 2, 4, 6 are actuated to move the body of Octopus forward; G_Γ of feet 2, 4, 6 are fixed, R_G T can be computed from the body trajectory, and L_Γ is obtained from equation (12). The positions of the prismatic joints of legs 2, 4, 6 then follow from equations (10) and (11). Also in stage one, G_Γ of the swing feet 1, 3, 5 are determined from formulas (5), (6) and (7), L_Γ is obtained from equation (12), and the positions of the prismatic joints of legs 1, 3, 5 are calculated from equations (10) and (11). In stage two, Octopus moves feet 2, 4, 6 while feet 1, 3, 5 are fixed on the ground, and the positions of the prismatic joints of all the legs are computed in the same way as in stage one.

IV. Experiment
In this section, experimental results of the proposed method are presented. Fig. 11 shows the experimental environment: the obstacle in front of Octopus is constructed from three bricks, whose dimensions are 1800 mm × 300 mm × 100 mm (length × width × height). The grid terrain map is as Fig. 5 shows. Fig. 11 shows the whole process of passing over the obstacle; the body of Octopus is regulated to move forward horizontally. During the whole process, all the feet are placed at the planned footholds calculated by the proposed method, so the body stays nearly horizontal while walking over the obstacle.
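The closed-form solution of formula (10) can be checked numerically against the matrix of formula (9): applying L_A to the foot offset must reproduce the commanded position. The offset values Sfx, Sfy, Sfz below are illustrative, not the real Octopus dimensions, and arctan2 replaces the paper's arctan for quadrant robustness.

```python
import numpy as np

def leg_ik(x, y, z, sfx, sfy, sfz):
    """Inverse kinematics of formula (10): return (l1, alpha1, beta1) for a
    desired foot position (x, y, z) in L-CS, given the Sf offsets in A-CS."""
    l1 = np.sqrt(x**2 + y**2 + z**2 - sfy**2 - sfz**2) - sfx
    r = np.sqrt((l1 + sfx)**2 + sfy**2)
    beta1 = np.arcsin(y / r) - np.arcsin(sfy / r)
    k = (l1 + sfx) * np.cos(beta1) - sfy * np.sin(beta1)
    alpha1 = np.arctan2(sfz * x - k * z, k * x + sfz * z)
    return l1, alpha1, beta1

def leg_fk(l1, alpha1, beta1, sfx, sfy, sfz):
    """Forward check: apply the homogeneous matrix L_A of formula (9)."""
    ca, sa = np.cos(alpha1), np.sin(alpha1)
    cb, sb = np.cos(beta1), np.sin(beta1)
    LA = np.array([
        [ca * cb, -ca * sb, sa,  l1 * ca * cb],
        [sb,       cb,      0.0, l1 * sb],
        [-sa * cb, sa * sb, ca, -l1 * sa * cb],
        [0.0,      0.0,     0.0, 1.0],
    ])
    return (LA @ np.array([sfx, sfy, sfz, 1.0]))[:3]
```

Round-tripping leg_fk through leg_ik recovers (l1, alpha1, beta1), which confirms that the reconstructed expressions for l1, β1 and α1 are consistent with equation (8). The lengths l2 and l3 then follow from equation (11) once the joint positions L_U2, L_U3, A_S2, A_S3 are known.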
The results show a successful application of the method to the autonomous gait planning for Octopus in unstructured environments, which validates our theory.

Fig. 11. Snapshots of Octopus passing through obstacles

V. Conclusion
In this paper we present an autonomous gait planning method for the Octopus hexapod robot, which enables it to walk in unstructured environments with high walking stability. The method is based on the terrain map detected by a 3D vision system, and the terrain map is processed in order to facilitate the gait planning. We also define two indices, the terrain inclination and the terrain roughness, to describe terrain conditions. The foothold selection function, which accounts for the terrain conditions and Octopus' own performance, provides Octopus with appropriate footholds. Experimental results show that the autonomous gait planning method works well in practice: Octopus can steadily pass through complex terrain consisting of obstacles.

References
[1] Raibert M, Blankespoor K, Nelson G, et al. BigDog, the rough-terrain quadruped robot. In Proceedings of the 17th World Congress, pp. 10823–10825, 2008.
[2] Saranli U. RHex: A Simple and Highly Mobile Hexapod Robot. The International Journal of Robotics Research, 20:616–631, 2001.
[3] Bogdan Rusu R, Sundaresan A, Morisset B, et al. Leaving Flatland: Efficient real-time three-dimensional perception and motion planning. Journal of Field Robotics, 26:841–862, 2009.
[4] Fukuoka Y, Kimura H and Cohen A H. Adaptive Dynamic Walking of a Quadruped Robot on Irregular Terrain Based on Biological Concepts. The International Journal of Robotics Research, 22:187–202, 2003.
[5] Ijspeert A J. Central pattern generators for locomotion control in animals and robots: a review. Neural Networks, 21:642–653, 2008.
[6] Gay S, Santos-Victor J and Ijspeert A. Learning robot gait stability using neural networks as sensory feedback function for Central Pattern Generators. In Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, pp. 194–201, 2013.
[7] Kolter J Z, Rodgers M P and Ng A Y. A control architecture for quadruped locomotion over rough terrain. In Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on, pp. 811–818, 2008.
[8] Kolter J Z, Kim Y and Ng A Y. Stereo vision and terrain modeling for quadruped robots. In Robotics and Automation, 2009. ICRA '09. IEEE International Conference on, pp. 1557–1564, 2009.
[9] Bazeille S, Barasuol V, Focchi M, et al. Vision enhanced reactive locomotion control for trotting on rough terrain. In Technologies for Practical Robot Applications (TePRA), 2013 IEEE International Conference on, pp. 1–6, 2013.
[10] Stelzer A, Hirschmuller H and Gorner M. Stereo-vision-based navigation of a six-legged walking robot in unknown rough terrain. The International Journal of Robotics Research, 31:381–402, 2012.
[11] Chilian A and Hirschmuller H. Stereo camera based navigation of mobile robots on rough terrain. In Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on, pp. 4571–4576, 2009.
[12] Chilian A, Hirschmuller H and Gorner M. Multisensor data fusion for robust pose estimation of a six-legged walking robot. In Intelligent Robots and Systems (IROS), 2011 IEEE/RSJ International Conference on, pp. 2497–2504, 2011.
[13] Pan Y and Gao F. A new 6-parallel-legged walking robot for drilling holes on the fuselage. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 228:753–764, 2013.
[14] Yang P and Gao F. Leg kinematic analysis and prototype experiments of walking-operating multifunctional hexapod robot. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 228:2217–2232, 2013.
[15] Pan Y, Gao F and Du H. Fault tolerance criteria and walking capability analysis of a novel parallel hexapod walking robot. Robotica, 2014.
