Visual Servoing with Time Delay

ISSN 0280-5316 ISRN LUTFD2/TFRT--5734--SE

Visual Servoing with Time Delay

Rémi Aguesse

Department of Automatic Control Lund Institute of Technology December 2004

Department of Automatic Control
Lund Institute of Technology
Box 118
SE-221 00 Lund, Sweden

Document name: Master Thesis
Date of issue: December 2004
Document number: ISRN LUTFD2/TFRT--5734--SE
Author: Rémi Aguesse
Supervisors: Rolf Johansson and Tomas Olsson, LTH, Lund; Nicolas Andreff, IFMA, France
Sponsoring organization:

Title and subtitle: Visual Servoing with Time Delay (Robotseende vid tidsfördröjning)

Abstract

The purpose of this thesis is to examine the effects of time delay on a visual servoing system. We analyse the effect of time delays on a linear model of a single robot joint, as well as on an ABB Irb 2400 robotic arm with six degrees of freedom. The thesis is consequently divided into two parts. The first part deals with the design of the visual servoing system: we explain how the image processing works, which issues require attention, and how the process (controller, robot dynamics, etc.) is designed. These are the preliminaries for the experiments, and they are also important for obtaining a good simulation environment, with a model as close as possible to a real robot system. In the second part we analyse the effects of time delay on the system. We start with a simple one-joint model, as a baseline, and then repeat the analysis on the full robot model to see whether the behaviour is the same. The real-time behaviour is simulated using TrueTime, which simulates a real-time kernel with detailed models of the controller timing. All experiments have been carried out in simulation, which simplifies the experimental protocol (no hardware problems); another reason is that a robot simulator was already available.

Keywords: visual servoing, time delay, stability analysis, real-time control
Classification system and/or index terms (if any):
Supplementary bibliographical information:
ISSN and key title: 0280-5316
ISBN:
Language: English
Number of pages: 56
Recipient's notes:
Security classification:

The report may be ordered from the Department of Automatic Control or borrowed through: University Library, Box 3, SE-221 00 Lund, Sweden. Fax +46 46 222 42 43

Contents

Acknowledgments
Introduction
  Robotics and Visual Servoing
  Time Delay
  Thesis Outline
Background
  Digital Camera
  Camera Model
  Image Based Visual Servo
Method
  Problem Formulation
  Tools
  Model and Design of the Robot
Preliminary Experiments
  Model: one degree of freedom, controller, closed-loop system
  Experimental Setup
  Results
Application on an Industrial Robot
  Parameterization
  Experiments
  Results
Conclusion
References
Appendix


Acknowledgments

As an MSc student at Blaise Pascal University and a third-year student at my engineering school, l'Institut Français de Mécanique Avancée (IFMA), France, I was given the chance to carry out my master's thesis at the Department of Automatic Control at Lund University, Sweden. I would like to thank Professor Rolf Johansson from the Department of Automatic Control and Mr Nicolas Andreff from IFMA for giving me such an opportunity; the experience has exceeded my expectations. Through the course of this project I was involved in work that combined different fields of research, such as vision, robotics and time delay, to achieve the goal explained in this report. For this I have mainly worked with Tomas Olsson, currently a PhD student at the Department of Automatic Control, and I would like to express my sincere gratitude for his help and co-operation. I would also like to thank Professor Rolf Johansson, who guided me through the whole project and whose ideas helped me overcome even the hardest problems I encountered. I also appreciate the support that Leif Andersson and Dan Henriksson gave me with the software used in my work, and I would like to thank them for it. Finally, I would like to thank all the other people at the department, with whom I have spent six very pleasant months despite the bad summer weather, and who taught me the rules of innebandy, an exhausting but interesting sport.


1 Introduction

Visual servoing and time delay are both problems in robotics that have been studied for many years. Visual servoing is useful in many applications, such as industrial robot positioning and remote surgery, and time delay is one of the main problems that must be handled to obtain an efficient robot. We will now give an introduction to both domains: visual servoing in robotics, and time delays.

1.1 Robotics and Visual Servoing

The first use of industrial robots together with computer-aided manufacturing systems dates from the 1960s. Since then, the research area of robotics has progressed quickly, and today robots are widely used in industry. The automotive industry in particular has made large investments in this research domain and has therefore played a crucial part in its development. Developments since then have made robots capable of performing more and more tasks, and therefore useful in more applications than before. This is a consequence of developments in computing, which allow robots to increase their performance as well as their range of abilities. However, industry still mostly uses robots with restricted sensor feedback, which limits their field of application. If we visited automotive factories, we would see that a specific robot is used for each task, with its own program and a set of sensors adapted to that task. This is not easy to change if we want the robot to perform another task, because these sensors are most of the time internal sensors, such as incremental encoders in the joints. Traditional robot control techniques use world and joint coordinate systems to determine the position of the robot and the desired positions and trajectories. This works satisfactorily for the static environments met in industrial applications, where the robot performs tasks in a well-known environment. Problems appear when such manipulators are used in dynamic environments. That is why industry is now becoming interested in more adaptable robots, and computer vision is an efficient sensor for such robots: with visual information we can obtain a good picture of the robot's environment at any moment and react accordingly. A large part of robotics research is therefore devoted to this field, if only for humanoid robot applications, but it is also useful in industry for the control of robotic arms, tracking of objects, and so on.


In our project we focus only on a robotic manipulator (robotic arm) and the task of following a line drawn on a board. To do this we use visual feedback from a vision system consisting of a camera mounted on the end-effector. In such an application the control needs feedback to correct the position of the robotic arm dynamically, because many external factors can modify this position or make it obsolete. In a visual servo control loop this feedback is provided by a digital camera, which implies complex processing of the images to extract the needed information. If we make a comparison with the human visual system, the camera is only the eye; we also need a "brain" to interpret the signal sent by the camera. This part of visual servoing is the subject of numerous publications (for instance [1]) about different methods to detect image features: points, lines, objects, gradients, etc. We will therefore describe the processing used in our case; such processing is closely linked to the type of information needed, since detecting a line is not the same as finding points or other features.

1.2 Time Delay

The problem of time delays has always been important in control. It is also closely linked to robotics, especially when the robot is used in a control loop, since such a loop needs sensors to measure the effect of the control. In the process, time delay appears because of many factors such as communication time, processing time, etc. The more a system depends on real-time processing, the more its efficiency is affected by time delay. We distinguish two types of real-time control: hard and soft. Soft real-time processes require less precision; such processes, which are currently the most used in industry, are not very sensitive to time delay and can work even if the process is not strictly periodic or the control is based on obsolete information. Hard real-time systems, on the other hand, are highly sensitive to any problem in the control loop. That is why such systems require detailed attention to disturbances, and especially to time delay, which is inevitable. We can study how the system works in the presence of a known time delay, which is useful to determine the behaviour of the system. In this thesis we will see how our robotic arm system works under the influence of time delay. For more information about control systems with time delay, see [2], [15] and [3].


1.3 Thesis Outline

This master thesis report is organized as follows. In the next chapter we introduce the background in visual servoing needed to understand the rest of the thesis: we explain how a camera is modelled, how a visual servo system is designed, and how the image Jacobian is used. The third chapter deals with the formulation of the problem and its proposed solution; we present the problems related to the subject and then explain how the simulation model used for the experiments was designed. The fourth chapter contains everything related to the experiments with one joint: we see what happens when there are time delays in the system and when noise is added. The fifth chapter is similar, presenting the results of the experiments with the full robot model, in order to check whether the behaviour of the system is the same as with one joint. The last chapter presents the conclusions of the thesis and future work.


2 Background

2.1 Digital Camera

There are two types of digital cameras frequently used in computer vision: those that deliver intensity images (encoding light intensity) and those that deliver range images (encoding shape and distance), acquired for example by sonar or laser scanners. Both have the same structure, as seen below: they consist of optics, which focus the light on a photosensitive device (a CCD array), and a frame grabber.

Figure 2.1: Essential components of a digital image acquisition system (optics, CCD array, frame grabber, output).

2.1.2 CCD Array

In a CCD (Charge-Coupled Device) camera, the physical image plane is a CCD array, a rectangular grid of photo sensors. For more information about digital cameras, see [1]. Here we use an intensity (grey-level) camera, which means the sensor is sensitive to light intensity, so each photo sensor stores an energy level corresponding to the amount of light impinging on it. The output of the CCD array is usually a continuous electric signal, which we can regard as generated by scanning the photo sensors in the CCD array in a given order (line by line) and reading out their voltages.


2.1.3 Frame Grabber

This signal is sent to the frame grabber, which digitizes it and translates the amount of energy received by each photo sensor into an integer in the range [0, 255], where typically 0 is black and 255 is white. The frame grabber then stores the digitized signal in a memory buffer; the frame rate of the camera determines the refresh rate of this buffer. A digital camera thus delivers a matrix of integers corresponding to the viewed scene (figure 2.2 illustrates this). Each entry of this matrix is called a pixel (a contraction of "picture element"). The matrix is arranged in two dimensions u and v, where u goes from top to bottom and v goes from left to right, so pixel (1, 1) is the top-left pixel.

Figure 2.2: Digital image and corresponding 2D array of numbers (e.g. pixels (X=205, Y=92) and (X=209, Y=96)).

2.2 Camera Model

The aim of such a model is to relate the positions of scene points to those of their corresponding points in the image. A camera transforms a point in 3D space into a point in 2D space by a geometric projection, so we need to know this transformation in order to work with the information sent by the camera. Many models exist; they are more or less complex and consequently more or less realistic. For more information about other camera models, see [4].


Figure 2.3: The pinhole camera model

In our case we use the perspective or pinhole model, which is the most common geometric model of an intensity camera. The perspective camera model consists of a point O, called the centre of projection, and a plane Π, the image plane (which is the CCD array in a digital camera). The origin of the camera-centred coordinate system is in O, see Figure 2.3. The distance between O and Π is the focal length f. The line perpendicular to Π that goes through O is the optical axis, and the intersection of this line with Π is called the principal point o (which is most often the centre of the CCD device). The projection of a point [X Y Z] in Cartesian space onto coordinates [x y] in the perspective camera is given by:

$$x = f \cdot \frac{X}{Z} \qquad (2.1)$$

$$y = f \cdot \frac{Y}{Z} \qquad (2.2)$$


This can be written using homogeneous coordinates as:

$$\lambda \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \qquad (2.3)$$

where λ = Z is the depth of the imaged point in the camera frame. In this thesis we assume that this model of the camera transformation is sufficiently realistic. Real cameras exhibit further effects (such as rectangular pixels, a non-orthogonal array, etc.) that we do not model here, since they would make the model more complex. With the focal length normalized to f = 1 we obtain the normalized perspective projection below:

$$\begin{pmatrix} u \\ v \end{pmatrix} = \frac{1}{Z} \begin{pmatrix} X \\ Y \end{pmatrix} \qquad (2.4)$$
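As a small illustration of these projection equations, the following Matlab sketch projects a single 3D point, given in the camera frame, onto metric and normalized image coordinates; the point and the focal length are arbitrary example values, not parameters of the thesis setup.

```matlab
% Pinhole projection of a point P = [X; Y; Z] given in the camera frame.
% All values are examples; f is an assumed focal length in metres.
f = 0.008;                 % assumed 8 mm focal length
P = [0.10; -0.05; 0.80];   % example 3D point, 0.8 m in front of the camera

x = f * P(1) / P(3);       % metric image coordinates, equations (2.1)-(2.2)
y = f * P(2) / P(3);

u = P(1) / P(3);           % normalized coordinates (f = 1), equation (2.4)
v = P(2) / P(3);

fprintf('metric: (%.4f, %.4f) m, normalized: (%.3f, %.3f)\n', x, y, u, v);
```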

2.3 Image Based Visual Servo

Figure 2.4: Image based visual servo

2.3.1 The Control Law

In an image based visual servo, see Figure 2.4, the control is defined directly in image space quantities. That is why we define the control error as:


$$e = y_r - y \qquad (2.5)$$

where $y_r$ and $y$ are vectors in image space coordinates: $y_r$ is the desired (final) position of the end-effector in the image and $y$ is the measured current position. A simple control law that drives the error $e$ to zero is:

$$\dot{y} = K \cdot e \qquad (2.6)$$

in which $K$ is a constant gain. The output $\dot{y}$ of this controller is an image space velocity vector, containing the desired velocity of the end-effector in the image. The control signal sent to the robot, however, is defined in task space, where the velocity screw is defined as a 6-vector of translational and angular velocities. It is therefore necessary to relate differential changes in the image feature parameters to differential changes in the position of the end-effector. For this purpose we introduce the image Jacobian $J_v$.

2.3.2 The Image Jacobian

Let $r$ be a vector representing the coordinates of the end-effector in some parameterization of the task space, and let $\dot{r}$ be the corresponding end-effector velocity. Let $y$ be a vector of image feature parameters and $\dot{y}$ the corresponding vector of image feature parameter rates of change; if the image feature parameters are point coordinates, these rates are image plane point velocities. The image Jacobian is a linear transformation that maps end-effector velocity to image feature velocity:

$$\dot{y} = J_v(r) \cdot \dot{r} \qquad (2.7)$$

where $J_v$ is the matrix of partial derivatives:

$$J_v(r) = \left[ \frac{\partial f}{\partial r} \right] = \begin{pmatrix} \dfrac{\partial f_1(r)}{\partial r_1} & \cdots & \dfrac{\partial f_1(r)}{\partial r_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f_k(r)}{\partial r_1} & \cdots & \dfrac{\partial f_k(r)}{\partial r_m} \end{pmatrix} \qquad (2.8)$$

Equation (2.7) can be solved using the least squares method, assuming that $J_v(r)$ has full rank. This gives the velocity screw $\dot{r}$ that minimizes $\| J_v(r)\dot{r} - \dot{y} \|_2$. The velocity screw is usually expressed in the camera coordinate system; in order to generate the correct trajectories for the robot we also need to transform the velocity screws to the robot base coordinate system. For this we need estimates of the Cartesian camera-robot transformations, obtained from the calibration of the system. The image Jacobian is also called the interaction matrix.
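As a minimal sketch of how the control law (2.6) and the least-squares solution of (2.7) could be combined, the Matlab fragment below scales an image error by the gain K and maps it to a camera-frame velocity screw through the pseudo-inverse of the image Jacobian; all numerical values, including the Jacobian itself, are placeholders chosen for illustration.

```matlab
% Map an image-space error to a velocity screw in the camera frame.
% Jv, y_r and y hold placeholder values for illustration only.
K   = 0.5;                          % constant controller gain
y_r = [0.0; 0.0];                   % desired feature position (image)
y   = [0.1; -0.2];                  % measured feature position (image)
Jv  = [1.25 0 -0.10 -0.02 1.01 -0.20;
       0 1.25 -0.25 -1.04 0.02  0.10];   % 2x6 image Jacobian (example)

e     = y_r - y;                    % control error, equation (2.5)
y_dot = K * e;                      % desired image velocity, equation (2.6)
r_dot = pinv(Jv) * y_dot;           % least-squares velocity screw from (2.7)
```

In practice the Jacobian would be built from the tracked image features as described in the next subsection, and the resulting screw would still have to be transformed to the robot base frame before being sent to the robot.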

2.3.3 The Image Jacobian of a Point

The most common image Jacobian is based on the motion of points in the image. Suppose that the end-effector is moving with angular velocity $\Omega(t) = (\omega_x, \omega_y, \omega_z)$ and translational velocity $T = (T_x, T_y, T_z)$, so that:

$$\dot{r} = \begin{pmatrix} T_x \\ T_y \\ T_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix} \qquad (2.9)$$

Let p be a point rigidly attached to the end-effector. The velocity of the point p, expressed relative to the camera frame, is given by:

$$\dot{p} = \Omega \times p + T \qquad (2.10)$$

To simplify notation, let $p = [X, Y, Z]^T$, so the time derivative of the coordinates of a point on the end-effector, expressed in the camera frame, can be written as:

$$\dot{X} = Z\omega_y - Y\omega_z + T_x \qquad (2.11)$$

$$\dot{Y} = X\omega_z - Z\omega_x + T_y \qquad (2.12)$$

$$\dot{Z} = Y\omega_x - X\omega_y + T_z \qquad (2.13)$$

Differentiating the projection equations (2.4) with respect to time and using these expressions we get:

$$\dot{u} = \frac{T_x}{Z} - \frac{u\,T_z}{Z} - u v\,\omega_x + (1 + u^2)\,\omega_y - v\,\omega_z \qquad (2.14)$$

$$\dot{v} = \frac{T_y}{Z} - \frac{v\,T_z}{Z} - (1 + v^2)\,\omega_x + u v\,\omega_y + u\,\omega_z \qquad (2.15)$$

These equations can be written in matrix form:

$$\begin{pmatrix} \dot{u} \\ \dot{v} \end{pmatrix} = \begin{pmatrix} \dfrac{1}{Z} & 0 & -\dfrac{u}{Z} & -uv & 1+u^2 & -v \\ 0 & \dfrac{1}{Z} & -\dfrac{v}{Z} & -(1+v^2) & uv & u \end{pmatrix} \begin{pmatrix} T_x \\ T_y \\ T_z \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix} \qquad (2.16)$$

The first matrix is the image Jacobian of a point expressed in the image frame. If the features are a group of points or other objects, we can stack the individual Jacobians to build the Jacobian of all features.
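The 2x6 matrix in (2.16) depends only on the normalized coordinates (u, v) and the depth Z of the point, so it can be computed by a small helper function. The Matlab sketch below is one possible implementation (the function name is chosen here for illustration), with a commented example of stacking two point Jacobians.

```matlab
function L = point_image_jacobian(u, v, Z)
% POINT_IMAGE_JACOBIAN  Interaction matrix of one image point, eq. (2.16).
%   u, v are normalized image coordinates and Z is the depth of the point.
L = [ 1/Z,   0, -u/Z,       -u*v, 1 + u^2,  -v;
        0, 1/Z, -v/Z, -(1 + v^2),     u*v,   u ];
end

% Example: stacking the Jacobians of two feature points
%   L_all = [point_image_jacobian(u1, v1, Z1);
%            point_image_jacobian(u2, v2, Z2)];
```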


3 Method

The main goal of this chapter is to determine how we will proceed with the experiments on the subject of the thesis. We first present the problem formulation, to see what the subject implies. We then describe the material and tools used to carry out the experiments. Finally, I show how I have proceeded to prepare the experiments.

3.1 Problem Formulation

3.1.1 Time Delay

Real-time control systems are inevitably affected by time delays. When we take time delays into account, we can represent a real-time control system by a closed-loop system like the one shown in figure 3.1 below, according to the publication [2] about real-time control with delays.

Figure 3.1: Time delays in the system

With such a representation we can easily see that the time delays that occur are the following:
- the communication delay between the sensor and the controller,
- the computational delay within the controller,
- the communication delay between the controller and the actuator.


In a visual servoing system where the sensor is a digital camera, we can identify the first of these delays, between sensor and controller, as the time it takes for a captured image to become available for processing, whereas the computational delay is the time needed to run the image processing algorithm. The last delay can be identified as the time taken by the computer to send the controller output to the robot. The control delay of the system, i.e. the time from when a measurement signal (here an image) is sampled to when it is used in the actuator (the robot), is the sum of these three delays. The problem is that this delay is not constant; it varies in a random fashion. The communication time depends on the quantity of data to send and also on the type of network used. Ethernet is typically non-deterministic, so we cannot be sure of the moment when the data becomes usable, and even on a clock-synchronized network the length of the transmitted data modifies the delay. We will therefore run experiments to see the influence of time delay on the control, and study the link between time delays and the robustness of our system. We will experiment with several representations of the delay.
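To give a feeling for such an experiment, the following Matlab sketch simulates proportional position control of one joint modelled as a velocity-controlled integrator, with the control delay represented as a fixed number of sampling periods; the model, gain and delay values are illustrative assumptions, not the parameters used in the later experiments.

```matlab
% Effect of a constant control delay on a P-controlled joint modelled
% as a velocity-controlled integrator (illustrative model and values).
h     = 0.01;               % sampling period [s]
Kp    = 8;                  % proportional gain (example value)
d     = 5;                  % control delay in samples (here 50 ms)
N     = 500;                % number of simulation steps
q_ref = 1;                  % joint position reference

q     = 0;
u_buf = zeros(1, d + 1);    % buffer holding the delayed control signals
q_log = zeros(1, N);

for k = 1:N
    u_new = Kp * (q_ref - q);        % controller acts on the current measurement
    u_buf = [u_buf(2:end), u_new];   % shift buffer: oldest value leaves first
    u_del = u_buf(1);                % control actually applied, d samples old
    q     = q + h * u_del;           % integrator model of the joint
    q_log(k) = q;
end

plot((1:N) * h, q_log), xlabel('time [s]'), ylabel('joint position');
```

Increasing d in this sketch gradually turns the well-damped step response into an oscillatory one, which is the kind of behaviour studied in the experiments.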

3.1.2 Visual Servo

A simple representation of a visual servo control loop is shown in figure 3.2: a closed control loop where the sensor is a digital camera.

Figure 3.2: Visual controller structure (controller C(s), process G0(s), with input noise and sensor noise).

In this loop, the block G0 is the transfer function of the process, in other words an equation that represents the behaviour of the robot when it is excited by a command. The block C is the transfer function of the controller. If we want the simulation to be more realistic, we can modify this closed loop to add noise to the data sent by the sensor or the controller.


In a real system there are always noises that disturb the signals. These noises are the consequence of many factors, such as:
- errors in the calibration of the camera,
- the precision of the camera,
- lack of precision in the image processing,
- friction in the joints of the robot.
In our experiments we have to represent these noises if we want the simulation to be as close as possible to the real system. We will start our experiments with the setup of a one-joint control loop. After these first experiments we will run simulations with a full model of the robot and see if the results are as expected.

3.1.3 Image Processing

As we have seen above, a digital camera needs image processing to extract information from the data, which is just a matrix of values. We need to interpret the arrangement of these values in the matrix in order to know where the sought features (points, lines, etc.) are in the image. We therefore need an algorithm that uses these values to determine whether there are features in the image. This algorithm depends on which feature is used and on the task performed by the robot, so a new algorithm is needed for each robot application. The problem with such algorithms is that they take time, so the global time delay depends on them. That is why I have paid particular attention to the speed of this algorithm: it has to be fast enough to allow the system to work correctly.
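One simple way to check this constraint in Matlab is to time the processing of many frames and compare the worst case with the camera period (10 ms at 100 Hz). In the sketch below a plain thresholding operation on a random 320 x 240 image stands in for the real feature-extraction algorithm, so the printed numbers are only indicative.

```matlab
% Rough timing check of the image-processing step (illustrative only).
% A plain threshold stands in for the real feature-extraction algorithm.
n_frames = 200;
t = zeros(1, n_frames);
for k = 1:n_frames
    img = 255 * rand(240, 320);       % dummy 320 x 240 grey-level frame
    tic;
    A = (img > 50) & (img < 200);     % stand-in for the real image processing
    t(k) = toc;
end
fprintf('mean %.3f ms, worst case %.3f ms (budget: 10 ms at 100 Hz)\n', ...
        1e3 * mean(t), 1e3 * max(t));
```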

3.2 Tools

3.2.1 Matlab/Simulink

To design and simulate the system I have used as main tool the MathWorks product Matlab/Simulink. This program allows functions to be programmed through a really simple interface and an intuitive programming language. With Simulink we can create many kinds of simulation systems and run them: this Matlab module provides a graphical environment in which we build the system from blocks sorted by their properties.


3.2.2 TrueTime

TrueTime is a Matlab/Simulink-based toolbox which facilitates simulation of time delays and network characteristics. The latest version of this toolbox was developed at the Department of Automatic Control in Lund by Dan Henriksson and Anton Cervin in 2003, based on an earlier version from 1998. For a detailed description of how to use the simulator, see [5]. The simulator consists of a library of two Simulink blocks, as shown in figure 3.3.

Figure 3.3: The TrueTime block library

These two blocks make it easy to build Simulink models containing time delays. The first, the TrueTime kernel block, executes user-defined tasks and interrupt handlers representing I/O tasks, control algorithms and network interfaces. This block takes into account user-defined execution times of tasks and transmission times of messages. Furthermore, TrueTime allows simulation of context switching and task synchronization using events, and thus supports both periodic and event-triggered tasks. All inputs and outputs are assumed to be discrete-time signals, except the signals connected to the A/D converters, which are the data used by the user-defined tasks. The Display and Monitor outputs show the allocation of common resources (CPU, monitors, network) during the simulation. In fact the kernel block simulates a computer with a real-time kernel, A/D and D/A converters, a network interface and external interrupt channels. The kernel maintains the characteristics that define a real-time system, such as interrupt handlers, monitors, timers and events, so-called jobs. For definitions of the functions used in this kernel, see the user manual [6]. The second block simulates a local area network with all its characteristics. Six network models are currently supported: CSMA/CD (Ethernet), CSMA/AMP (CAN), Token Bus, FDMA, TDMA, and Switched Ethernet. In this simulation block the propagation delay is ignored, since we are in a local area network. The block is event-driven and executes when messages sent by the kernel block enter or leave the network. When a node tries to transmit a message, a triggering signal is sent to the network block; when the simulated transmission of the message is finished, the network block sends a new triggering signal on the output channel. These messages contain the data transmitted by the network between the nodes.

3.2.3 Robot

Figure 3.4: The ABB Irb 2400 robot with 6 degrees of freedom

At the Department of Automatic Control in Lund there are two robots. The one used for my experiments is the Irb 2400, an industrial robotic arm from ABB. This robot is combined with a new grey-level camera with a high frame rate (100 Hz), the Basler A602f digital camera. The camera is connected to a computer by a FireWire link (IEEE 1394-2000), which is a high-performance serial bus. The picture format is 320 x 240 pixels. We assume the camera is held by the end-effector of the robot. For further information see references [7] and [8].

3.2.4 Winrobot

Winrobot is a graphical simulator of a robot and its environment; an example environment is shown in figure 3.5. This program was developed here at the Department of Automatic Control in Lund by Tomas Olsson, and it has greatly simplified the simulation of my system. It is written in C++ and can be used from the Matlab environment thanks to functions written for both. These functions provide everything needed for a robot simulation:
- definition of the robot,
- definition of objects,
- moving the robot (in its joint frame),
- placement of objects and the robot in space,
- definition and positioning of cameras,
- attaching a camera to the robot or to an object.
The program is based on the OpenGL library, which allows the creation of a 3D environment and the creation of objects in it from files. The simulator environment is shown below.

Figure 3.5: Winrobot environment example with two cameras. The first, on the left, is attached to the end-effector; the second is fixed in space to view the environment.


3.3 Model and Design of the Robot

Once the tools were chosen, I created my simulation system. This took time because of my inexperience with image processing and time delay; I had to read several publications to understand what these two fields of study involve. Thereafter I could start the implementation of my system, which I explain in this chapter.

3.3.1 Image Processing

As I said before, the image processing is an algorithm which depends closely on the feature sought in the image and on the task we want the robot to be able to do. Recall the task defined in the problem formulation: the robot has to follow a line drawn on a board; more precisely, the end-effector has to follow the line as accurately as possible. First of all we have to know what the nature of the sought feature is and what its characteristics in an image are. To follow a line we have to know where the robot is in relation to the target line, which is the same problem as finding the line position in relation to the end-effector. Therefore, I decided to find the point of the line closest to the centre of the camera, which is taken as the reference of the end-effector. This problem, which is really simple for a human being, is much harder for a computer: for a computer all pixels of an image are alike (only their values differ), and it cannot associate a pixel with a feature without considering its neighbourhood. When I tried my first algorithm it failed, because I had not taken this fact into account: I had classified a pixel as a member of the line based only on its value, defining two threshold limits to reject the pixels that did not have the same "colour" as the line:

A(i, j) = 1 if T_low < matrix(i, j) < T_high, and A(i, j) = 0 otherwise,

where T_low and T_high are the two threshold limits.
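For reference, a minimal Matlab sketch of this thresholding idea, combined with the search for the line pixel closest to the image centre described above, could look as follows; the threshold values and the random test image are assumptions for illustration, not the values used in the thesis.

```matlab
% Sketch: threshold a grey-level image and find the candidate line pixel
% closest to the image centre. Thresholds and test image are assumptions.
T_low  = 25;                          % assumed lower threshold
T_high = 225;                         % assumed upper threshold
img    = 255 * rand(240, 320);        % stand-in for a captured 320 x 240 frame

A = (img > T_low) & (img < T_high);   % binary mask of candidate line pixels

[rows, cols] = find(A);               % coordinates of all candidate pixels
centre = [size(img, 1), size(img, 2)] / 2;
d2 = (rows - centre(1)).^2 + (cols - centre(2)).^2;
[~, idx] = min(d2);                   % candidate closest to the image centre
closest_pixel = [rows(idx), cols(idx)];
```

As explained above, such a purely value-based classification is not sufficient on its own, since it ignores the neighbourhood of each pixel.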
