Jaimin Robotics System
Capstone Interim
Maureen Lim-Chua
Robotics and Computer Vision
Professor Kristin Dana
5/4/2011

1 Abstract

Robotics and Computer Vision is a relatively young field with seemingly limitless potential, especially as technology continues to improve. Developing practical and reliable applications from the basic principles of Robotics and Computer Vision remains a great challenge. Tracking and motion capture, particularly when incorporated with augmented reality, are among the most difficult capabilities to realize properly. This project integrates vision-based controls for a LEGO robot, in which the robot coexists with an augmented object.

Contents

1 Abstract
2 Objective
  2.1 Overview
  2.2 Computer Vision
  2.3 Augmented Reality
3 Procedure
  3.1 Overview
  3.2 Checkerboard Detection
  3.3 Finding the Black Marker
  3.4 Tracking the Checkerboard for Augmented Reality
4 Experimental Results
  4.1 Forward Movement
  4.2 Reverse Movement
  4.3 Left Turning
  4.4 Right Turning
  4.5 Stopped
  4.6 Augmented Reality
5 Error Analysis
  5.1 Overall Speed and Individual Motor Speeds
  5.2 Instances of "Swapping" Degrees along the Horizontal
  5.3 Bluetooth Communication Issues/Lag
  5.4 Program Crashing Unexpectedly
  5.5 Augmented Reality Flaws
  5.6 Augmented Reality Software Incompatibility
6 Current Trends in Robotics

2 Objective

2.1 Overview

This project integrates vision-based recognition with augmented reality. Knowledge of both Computer Vision and augmentation techniques is integral to achieving these goals.

2.2 Computer Vision

Computer Vision theories and concepts are modeled after biological vision. Mimicking organic functions, such as human eyesight, proves to be quite a challenging task. The human eye is able to recognize and trace certain objects despite various simultaneous transformations of those objects. Because machine vision is based on 2D images, each of which is simply a matrix of varying pixel values, subtle changes in lighting, background, and the positioning of the object pose serious problems for recognition.

2.3 Augmented Reality

The realm of Augmented Reality adds further complexity. Real-world data is combined with computer-generated additions to alter or enhance the user's perception of reality. The most widely researched form of augmented reality is that in which real-time video input is processed by machine and then augmented with computer graphics. Augmented Reality in this form is heavily reliant upon the stability and accuracy of Computer Vision data.


3 Procedure

3.1 Overview

To achieve the aforementioned goals of augmented reality and gesture controls, the project was divided into two distinct parts. A vision-based control system was developed first, followed by the addition of an augmented object. The movement of the robot corresponds to the rotations and placement of a checkerboard-patterned paper bearing a black marker. These transformations of the checkerboard paper are captured by a camera, processed, and then sent as instructions via Bluetooth to the LEGO bot. A two-dimensional augmented object, or image, is superimposed on a separate checkerboard and follows the transformations and movement of that checkerboard. Using basic OpenCV functions and a purpose-built algorithm to extract data from the motions of the checkerboard control, a usable remote-control system is achieved. Additionally, with the use of multithreading, an augmented object is also integrated into the overall program.

Figure 1: A picture of the LEGO NXT robot controlled remotely by the vision based system.


3.2 Checkerboard Detection

The chessboard detection function included in OpenCV was essential for the control of the robot. This function reports the exact positions of the vertices of the chessboard in the image, from which the rotation of the chessboard can be determined. The rotation value sets the speed of the motors for turning: the left and right motor values are individually incremented, decremented, or left unchanged, depending upon the angle. For example, if the user tilts the checkerboard paper to the left, the right motor is incremented and the left motor decremented, resulting in a left turn. If the orientation of the black marker is at 90 degrees, the robot's motors are set to equal speeds, producing straight forward motion. Moving the center of the chessboard above or below the center of the image causes the robot to move forward or backward, respectively; the height of the chessboard center determines the forward or backward speed. A variable called speedsense defines a margin along the middle of the window: if the checkerboard paper is recognized to have dipped below the lower margin, the robot's speed is increased in the reverse direction, and if it is detected above the upper margin, the robot speeds up in the forward direction. A concrete sketch of this steering logic follows. Determining the absolute rotation of the chessboard poses one minor challenge: the rotation cannot be determined without some form of marker. The chessboard detection algorithm locates the pattern and numbers the vertices from right to left. Since the chessboard pattern is symmetric, rotating it 90, 180, or 270 degrees from its current position yields exactly the same numbering. This behavior is made apparent in Figure 2.
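The fragment below is a minimal sketch of the angle-and-height mapping described above, not the project's exact code: the function name steerFromBoard and the 10-percent dead-band value are illustrative assumptions.

    // Illustrative sketch of the steering logic. 'absAngle' is the absolute
    // marker angle in degrees; 'corners' holds the detected board vertices.
    #include <opencv2/opencv.hpp>
    #include <cmath>
    #include <vector>

    void steerFromBoard(const cv::Mat& frame, double absAngle,
                        const std::vector<cv::Point2f>& corners,
                        double& lw, double& rw)
    {
        // Deviation from 90 degrees splits the motor speeds: a left tilt
        // increments the right motor and decrements the left, and vice versa.
        double turn = absAngle - 90.0;

        // The vertical position of the board center sets the base speed.
        cv::Point2f center(0, 0);
        for (size_t i = 0; i < corners.size(); ++i) center += corners[i];
        center.x /= corners.size();
        center.y /= corners.size();

        // 'speedsense' margin around the image middle; inside it the base
        // speed stays zero. The 10-percent width is an assumed value.
        double speedsense = 0.1 * frame.rows;
        double offset = frame.rows / 2.0 - center.y;  // above center => forward
        double base = (std::abs(offset) > speedsense) ? offset : 0.0;

        lw = base - turn;   // equal speeds => straight; unequal => turn
        rw = base + turn;
    }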


Figure 2: The picture above shows the functionality of the chessboard detection algorithm in OpenCV. One can easily see that the vertices of the checkerboard pattern are numbered vertically, from top to bottom, starting from right to left. Because the board is symmetric, the top to bottom numbering style of the board remains the same even when the board is rotated. Thus, the necessity of the black marker to determine rotation becomes apparent during this trial.

3.3 Finding the Black Marker

To find the absolute rotation of the chessboard, a marker must be placed at a known position on the board; this is why the black marker is so significant. It is important to note that this known marker is placed outside the bounds of the chessboard. The placement is intentional, so as not to hinder the operation of the built-in chessboard detection function in OpenCV. To detect the presence of the marker along the horizontal axis, the center vertices are collected. The differences between the center vertices are computed and averaged to acquire a more accurate spacing value. This spacing is needed to compute the areas of interest when searching for the marker along the horizontal axis: if the distance between any two adjacent vertices is X, then the distance between the outer detected vertex and the expected marker position is 2*X. This process is essentially repeated to detect the possible presence of the marker along the vertical axis, as sketched below.
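A hedged reconstruction of the horizontal probe, assuming the 7x7 corner grid is stored row-major; the helper name probeBeyond is hypothetical:

    // Estimate the spacing X along the middle row, then probe the point
    // 2*X beyond the outer detected vertex, where the marker may sit.
    #include <opencv2/opencv.hpp>
    #include <vector>

    cv::Point2f probeBeyond(const std::vector<cv::Point2f>& corners, int gridW)
    {
        int mid = gridW / 2;                 // middle row of the grid
        cv::Point2f step(0, 0);
        for (int c = 1; c < gridW; ++c)
            step += corners[mid * gridW + c] - corners[mid * gridW + c - 1];
        step.x /= (gridW - 1);               // average spacing X, as a vector
        step.y /= (gridW - 1);

        cv::Point2f outer = corners[mid * gridW + gridW - 1];
        return outer + step * 2.0f;          // expected marker position
    }

Sampling the image intensity at the returned point (a dark patch indicating the marker) completes the test.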

Once the absolute position of the marker is known, the absolute angle of the chessboard can be computed. The relative angle is computed with a simple call to sin(). The absolute angle is then obtained from the detected position of the marker: if the marker lies in the first, second, third, or fourth quadrant, the relative angle is offset by 0, 90, 180, or 270 degrees, respectively.
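A minimal sketch of that quadrant correction, with the quadrant numbering assumed to match the description above:

    // Convert the relative angle (0-90 degrees) to an absolute angle using
    // the quadrant in which the black marker was detected.
    double absoluteAngle(double relAngle, int quadrant)
    {
        switch (quadrant) {
            case 1:  return relAngle;          // first quadrant: no offset
            case 2:  return relAngle + 90.0;
            case 3:  return relAngle + 180.0;
            case 4:  return relAngle + 270.0;
            default: return relAngle;          // marker not found: unchanged
        }
    }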

Figure 3: The image above illustrates the process of detecting the marker. Using an iterative method, only the major axes of the checkerboard are displayed, as denoted by the blue and green dots. The algorithm repetitively polls four outer points along the major axes, as shown by the red, white, green and blue circles outside the checkerboard. The location and position of the black marker is then found.


3.4 Tracking the Checkerboard for Augmented Reality

To accomplish the task of creating a 2D augmented image, functions similar to those in the vision-based movement system are employed. The checkerboard pattern is used once again, now acting as a marker. After the checkerboard corners are detected, uncalibrated stereo images are taken, and a calibrated stereo system corresponding to the images creates a depth map. Using the pipeline technique taught in class and described in detail in Trucco and Verri's Introductory Techniques for 3-D Computer Vision, the steps are as follows. First, the correspondences are found. The fundamental matrix F is then determined using SVD. This fundamental matrix is used to rectify the images, and can also be used to find the camera parameters. The rectified image pair is passed to the calibrated stereo correspondence implementation built into OpenCV. Based on the algorithm developed by Birchfield and Tomasi, OpenCV uses dynamic programming to locate the correspondences, using epipolar geometry to create a depth map. If the depth map is undesirable or inaccurate, new correspondence points are found by re-inspecting the area near the original epipolar lines, and the process is repeated for the new correspondence points.
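The first steps of that pipeline can be expressed compactly with standard OpenCV calls; the sketch below assumes pts1 and pts2 are matched checkerboard corners from the two views, and is not the project's exact code.

    // Correspondences -> fundamental matrix -> uncalibrated rectification.
    #include <opencv2/opencv.hpp>
    #include <vector>

    void rectifyPair(const std::vector<cv::Point2f>& pts1,
                     const std::vector<cv::Point2f>& pts2,
                     cv::Size imgSize, cv::Mat& H1, cv::Mat& H2)
    {
        // Estimate F from the matched points (solved internally via SVD).
        cv::Mat F = cv::findFundamentalMat(pts1, pts2, CV_FM_8POINT);
        // Compute rectifying homographies for the two images from F.
        cv::stereoRectifyUncalibrated(pts1, pts2, F, imgSize, H1, H2);
        // Warping each image with its homography yields the rectified pair
        // that is handed to the stereo correspondence routine.
    }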

Figure 4: The image above shows the augmented 2D image rendering at work. The checkerboard paper is detected and the image is shown along the inner four vertices along the horizontal and vertical. Using the pipeline, the 2D augmented object roughly corresponds to the movements of this checkerboard.


4 Experimental Results

4.1 Forward Movement

Figure 5: This is the output of the forward movement of the robot. As one can see inside the red box outlined above, the angle of the black marker is registered as 90 degrees. The pink arrow draws attention to the camera's perspective of the black marker in comparison to the recorded angle. Since the checkerboard paper is shifted only slightly toward the top of the window, the speed is a small value of 8.02 out of the full speed of 100. Underneath the speed value, one can see that lw, the left motor, and rw, the right motor, are set to the same speed. This results in a slow forward motion.


4.2 Reverse Movement

Figure 6: This is the output of the reverse movement of the robot. As one can see inside the red box outlined above, the angle of the black marker is registered as 90 degrees, as in Figure 5. The pink arrow draws attention to the camera's perspective of the black marker in comparison to the recorded angle. Since the checkerboard paper is shifted significantly toward the bottom of the window, the speed is relatively high, a value of 72.5 out of the full speed of 100. The negative speed value is simply an indicator to the user that the motors are set in reverse. Underneath the speed value, one can see that lw, the left motor, and rw, the right motor, are set to the same speed. This results in a very fast backward motion.


4.3 Left Turning

Figure 7: This is the output of the left turn movement of the robot. It is important to remember that the checkerboard position in the image is shown from the camera's perspective: the movement of the black marker, in the camera's point of view, is opposite to the movement of the actual paper. So although the pink arrow draws attention to the black marker being on the right, the user perceives the marker as positioned toward their left. As one can see inside the red box outlined above, the angle of the black marker is registered as approximately 175 degrees. This is a fairly accurate approximation, as the black marker is quite close to the 180-degree point from the perspective of the user. Since the checkerboard paper is shifted toward the bottom of the window, the speed is set to a negative value of 18.86 out of the full speed of 100. Underneath the speed value, one can see that lw, the left motor, and rw, the right motor, are set to differing values. The left motor is dominant in the reverse direction, with the right motor moving in the forward direction, resulting in a fairly fast reverse-based left turn. The output also shows that the motion is detected as left.


4.4 Right Turning

Figure 8: This is the output of the right turn movement of the robot. Again, the checkerboard position in the image is shown from the camera's perspective, so the movement of the black marker in the image is opposite to the movement of the actual paper. In this case, the pink arrow draws attention to the black marker being on the left, while the user perceives the marker as positioned toward their right. As one can see inside the red box outlined above, the angle of the black marker is registered as approximately 42 degrees. This is a fairly accurate approximation, as the black marker is quite close to the 45-degree point from the perspective of the user. The checkerboard paper is essentially centered within the window, which explains the speed value of zero. One can see that lw, the left motor, and rw, the right motor, are set to the same speed value of 23.83 but in opposing directions: the left motor moves forward and the right motor moves in reverse, resulting in a slow, forward-based right turn. The output also shows that the motion is detected as right. The discrepancy between the overall speed and the individual motor speeds demonstrates that, with regard to turning, the robot's motor speed depends more on the degree value of the angle. Had the position of the checkerboard been higher or lower than the threshold speed margin, the motor speeds would have been further affected.


4.5 Stopped

Figure 9: This is the output of the robot at complete rest, or stopped. As one can see inside the red box outlined above, the angle of the black marker is registered quite close to 90 degrees. The pink arrow draws attention to the camera's perspective of the black marker in comparison to the recorded angle. Since the checkerboard paper is roughly centered in the middle of the window, the speed is recorded as zero. Underneath the speed value, one can see that lw, the left motor, and rw, the right motor, are both set to zero, since there is virtually no rotational bias toward the right or left. This results in no movement.


4.6 Augmented Reality

Figure 10: These are sample runs of the augmented 2D image on an actual checkerboard paper. These images show varying perspectives of the checkerboard paper and how the 2D image is altered according to the orientation. In the first image, the test run had a flipping issue, discussed in further detail in the error analysis section of this paper. The second image shows the 2D image accurately aligned with the inner four vertices along the horizontal and vertical while close to the camera. The third shows the augmented image at a greater distance and at an angle. One may notice that the more distant augmented image is not quite aligned with the interior four vertices; this implies that the distance of the marker from the camera greatly affects the rendering.


5 Error Analysis

5.1 Overall Speed and Individual Motor Speeds

Figure 11: This figure focuses on the differences between the overall speed and the speeds of each individual motor with respect to varied angle values. Generally, when the red trend line representing the left wheel motor speed is greater than the green trend line representing the right wheel motor speed, the left wheel is dominant, resulting in a right turn. When the green trend line is above the red trend line, the right wheel motor speed is dominant, resulting in a left turn. Negative speed values represent the motors in reverse. In situations where the "negative" motor speed is greater in magnitude, the result is the expected turn in reverse: if the left motor speed were set at -100 and the right motor speed at 0, this would result in a reverse-based left turn. Looking at the graph above, one can see that the left wheel motor speed is dominant for angles less than 90 degrees, which correctly results in a right turn; for angles greater than 90 degrees, the right wheel motor speed is higher, creating the expected left turn. Interestingly, the overall speed, represented by the blue trend line, consistently appears to be a median or average of the more dominant and less dominant motor speeds. For very small angles (close to zero), the overall speed is still roughly a "median"; one must remember that the "negative" speeds are actually the motor speeds set in reverse (i.e., if the left wheel is at -100 and the right wheel is at 0, the overall speed is expectedly around 50). The three speeds are equal at 90 degrees.

Two observations indicate inconsistencies. First, though the maximum motor speed in either direction (forward or reverse) is 100, for some angle values greater than 90 degrees the right motor is set to a value higher than this maximum. Second, it is strange that for the smaller angle values, although the overall speed is "positive", the program favors a reverse-based turn. Reverse-based turns should only occur when the program detects a negative overall speed value at a specified angle.
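A plausible guard against the first inconsistency would be to clamp every computed motor value before it is sent to the robot; the helper below is a suggested fix, not part of the original program.

    // Clamp a computed motor power to the NXT's valid range of -100..100.
    double clampPower(double p)
    {
        if (p > 100.0)  return 100.0;
        if (p < -100.0) return -100.0;
        return p;
    }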

5.2 Instances of "Swapping" Degrees along the Horizontal

In terms of the Bluetooth remote control, when the black marker is aligned with the horizon at 0 or 180 degrees, a "swapping" problem sometimes occurs: the program confuses the position of the black marker with the opposite angle along the horizon. If the marker is aligned at 0 degrees while the user attempts a hard right turn, the program may confuse 0 degrees with 180 degrees, resulting in a left turn instead, and vice versa. When the switching happens, it often occurs repeatedly, causing the robot to jerk between hard left and hard right turns.

5.3 Bluetooth Communication Issues/Lag

Initially, the Bluetooth communication for the robot motor control was attempted in plain C++, which proved quite difficult in Microsoft Visual Studio, the selected programming platform. Objects called "Forms" were necessary to manually set up the communication this way in Visual Studio. Because this setup and the communication code were difficult and time consuming, a switch was made to the NXT++ programming library. This library includes a predefined Bluetooth call, and once properly set up, the Bluetooth communication was more reliable and easier to use. OpenCV and the NXT++ libraries are quite compatible with one another. There was initially a tremendous lag between the visual input being processed and the actions of the robot. After several attempts to streamline the program, the lag was finally reduced by shrinking the OpenCV window to a mere 80 x 60, leaving significantly fewer pixels to compare and process. Multithreading, through the Boost libraries, was implemented to integrate the remote vision control and the augmented 2D image into one program. Ideally, the control and augmented-object functions should run simultaneously without problems; however, multithreading appears to deepen the existing lag, so window resizing was applied to the augmented reality portion of the project as well to minimize the delay.
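The resulting structure can be sketched as below. The Boost and OpenCV calls are standard; the NXT++ Bluetooth call name (NXT::OpenBT) is an assumption based on that library's documented API and may differ by version.

    // One thread renders the augmented view while the main thread steers
    // the robot; both capture at 80x60 to keep per-frame processing small.
    #include <boost/thread.hpp>
    #include <opencv/highgui.h>
    #include "NXT++.h"

    void arThread();                     // augmented-reality loop (see appendix)

    int run()
    {
        Comm::NXTComm comm;
        if (!NXT::OpenBT(&comm))         // assumed NXT++ call; version-dependent
            return 1;

        boost::thread ar(arThread);      // AR view runs concurrently

        CvCapture* cap = cvCaptureFromCAM(0);
        cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, 80);
        cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, 60);

        // ... steering loop: detect board, compute speeds, drive motors ...

        ar.join();
        return 0;
    }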


Figure 12: The average time between captures of a new frame was calculated for the individual remote and augmented-image programs. The third bar represents the average time between frames after integrating the remote and augmented functions. Both applications experience significant increases in the average time between frames: about 126 percent for the augmented portion and approximately 74 percent for the remote portion. Note that the processing time of each frame is also influenced by other applications running concurrently.

5.4 Program Crashing Unexpectedly

The executed program has some issues with crashing. This does not include the case where the Bluetooth connection to the robot cannot be found, in which case the program is expected not to run. The program may have memory leaks caused by re-declaring objects without properly removing them from memory, resulting in unexpected runtime errors.
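A likely remedy, sketched under the assumption that the leak comes from per-frame allocations in the old C API used elsewhere in this project: allocate buffers once and release them exactly once on exit.

    // Allocate once, reuse every frame, release exactly once at shutdown.
    IplImage* buf = cvCreateImage(cvSize(80, 60), IPL_DEPTH_8U, 3);
    while (cvWaitKey(5) != 27) {        // loop until Esc
        // ... reuse 'buf' here instead of re-creating it each iteration ...
    }
    cvReleaseImage(&buf);               // matching release prevents the leak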

5.5 Augmented Reality Flaws

At times, the 2D image on the checkerboard did not act in perfect accordance with the orientation of the checkerboard-patterned marker. As one can observe in the figures below, there were instances of the image appearing to "float" above the page, or simply not aligning correctly with the inner vertices. This may be caused by an implementation flaw in OpenCV's detection and correspondence algorithms, and may also be due in part to lag or delay issues. The 2D image also tends to flip unexpectedly, most likely because of the symmetry of the checkerboard pattern; this could potentially be remedied by using a rectangular pattern instead.
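That remedy would amount to a one-line change at detection time; the 7x6 size below is an illustrative choice (one odd and one even inner-corner dimension reduces the rotational ambiguity of a square board).

    // A non-square inner-corner count breaks the symmetry that causes flips.
    cv::Size pat(7, 6);                  // instead of the symmetric 7x7
    std::vector<cv::Point2f> points;
    bool found = cv::findChessboardCorners(frame, pat, points);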


Figure 13: As one can observe in these figures, the 2D image is not perfectly aligned with the inner four vertices of the checkerboard, as was intended in the design. This could be caused by imperfections in the correspondence or detection algorithms.

5.6 Augmented Reality Software Incompatibility

Though the initial goal was to render a 3D interactive object, this proved much more difficult than anticipated. Rendering any game-style interactive object would require strong knowledge of physics engines and game coding, which could not be acquired within the time constraints. Still wishing to implement an augmented object, several options were explored. The most viable choice, in terms of simplicity of use, appeared to be ARToolkit. Unfortunately, this software is highly outdated, causing a plethora of compatibility issues. The toolkit refused to run on the 64-bit Windows operating system, even in 32-bit compatibility mode. After this discovery, another attempt was made to set up the toolkit on another operating system, Ubuntu 10.10; this version of Ubuntu proved too new for ARToolkit. Countless attempts to fix, patch, and hack ARToolkit, following the advice of various forums, failed to remedy the issues in full, and the decision was made to discontinue the installation. In lieu of the toolkit, a 2D image projection onto a checkerboard was implemented using only OpenCV.


6 Current Trends in Robotics

The field of Robotics and Computer Vision holds innumerable potential applications. Many impressive humanoid robots, such as Honda's ASIMO, have been built as testaments to the research and development in this area. However, the most promising trend at the moment appears to be Robotics and Computer Vision in the medical field. Several robotic systems are in development for specific medical tasks, and some are already in use in the hospital environment. The utilization of Robotics and Computer Vision in the medical field will lead to a new era of improvement in the efficiency and effectiveness of patient treatment.

Two of the most impressive working robots in the medical field are the DaVinci robot developed by Intuitive Surgical and the robotic nurse prototype in development at Purdue. These robots are extremely well suited to their specific lines of work. The DaVinci surgical robot has the unique ability to make small incisions and perform complex surgical procedures under the direction of human surgeons. Any surgical operation is still far too difficult for artificial intelligence to perform autonomously, and the consequences of failure could be fatal; the mechanical aspects of the robot, however, are incredible. A doctor by the name of James Porter was able to remotely control the robot to fold a paper airplane the size of a penny. Such accuracy is absolutely necessary, and it is truly amazing that robotics has progressed to this point.

The nurse robot in development at Purdue incorporates both computer vision and robotics. It was created to handle repetitive tasks, relieving nurses of monotonous work and eliminating the need for a scrub technician. Its two main functions are to pass the required instruments to the surgeon and to monitor the number of instruments in use. The robot is programmed to recognize gesture cues and react accordingly; the gestures are reportedly understood with up to 95 percent accuracy. The robot nurse distinguishes between the numbers of fingers shown to it in order to hand over the appropriate instrument. The current camera is being replaced by a Microsoft Kinect camera to improve the robot's depth perception.

Like the DaVinci surgical robot, the Amadeus Robotic Surgical System, created by Titan Medical, is also intended for the operating room. The beta console of this design has recently been released for testing by experts in urology and cardiology. In addition to a highly enhanced vision system and accurate controls, this robot provides haptic, or force, feedback to its user, allowing the surgeon to feel how much force is being applied remotely. Four arms allow multiple approaches to a targeted site for better obstacle avoidance in the body.

An unexpected use for computer vision and robotics can also be found in the diagnosis of mental disorders in young children. A research team at the University of Minnesota was given a total of 3 million dollars in grants to develop robotic devices and computer vision algorithms to help diagnose the early onset of mental disorders such as autism, attention deficit disorder, and obsessive-compulsive disorder. The head of the research team, Nikolaos Papanikolopoulos, is building robotic instruments that will be able to discern key behavioral abnormalities in children. Because expert psychological assessments are subjective and vary by specialty, supplementing some of them with these robots may relieve some expenses.

Robotic nanotechnology is a promising lead toward non-invasive surgery. The Swiss Federal Institute of Technology in Zurich has already created robotic machines small enough to be injected into an eye without anesthetic. Though these have not yet been tested on human subjects, the implications of this development are extraordinary: medicine could be carried to exact locations in the body, and some minute operations could be performed without the need for suturing.

TechNavio, a company that markets intelligence reports for different industries, projects that the global market for robotics in healthcare will reach 1 billion dollars by 2014. This market demand can be largely attributed to the development of neuro-robotic systems capable of enabling movement for patients with paralysis. A TechNavio research report attributes this need to the rise of robotic surgery centers in industrializing nations, including Japan and India. The FDA has placed restrictions on medical robots that currently prevent further growth in this field. Despite this, expectations for the robotics-in-healthcare industry remain high, especially with the introduction of pharmacist robots.

Robotics and Computer Vision in use today has already had incredible benefits in cutting costs, improving efficiency, and greatly reducing the invasive nature of operations within the medical field. From performing surgeries to restoring physical functionality to the disabled, it is clear that this area of technology is not only exciting but also profoundly important to the advancement of healthcare. If robotics research continues to improve rapidly in the medical field, the FDA will have to reassess or remove the greater limitations placed on these machines and instill new standards across the board. With all of the technological advancements in computer vision and mechanical robotics, it may soon be possible to have smaller-scale operations performed by fully autonomous robots. That future depends, however, on the development of artificial intelligence, software, and more fluid and precise algorithms. In addition, the progression and acceptance of the robotics movement should be supported and funded by the government and private investors to ensure the continuing progress of this highly promising field.


References

[1] Associated Press. 2011. "Robotic Nurse in Development at Purdue." Fox News.
[2] Aaron Saenz. 2011. "Surgical Robot Folds Tiny Paper Plane And Lets It Fly." Singularity Hub.
[3] Cecilia Galvin. 2011. "Titan Medical Inc. Announces Results of Beta Console Early User Study." Robotics Trends.
[4] Laura Wood. 2011. "Research and Markets: Global Market for Robotics in Healthcare 2010-2014 - Emerging Nations are Driving the Demand for Robotics in Healthcare." Reuters.
[5] Alexander Moschina. 2011. "Tiny Robots: The Next Great Revolution in Eye Surgery." Wall Street Daily.
[6] Rhonda Zurn. 2011. "Researchers Studying The Use Of Robots And Computer Vision To Diagnose Mental Disorders In Children." Medical News Today.


Code

#include "NXT++.h"
#include <iostream>
#include <cmath>
#include <ctime>
#include <vector>
#include <boost/thread.hpp>
#include <opencv/cv.h>
#include <opencv/highgui.h>

using namespace std;

#define PI 3.14159265

void Turn(double angle);
Comm::NXTComm work;
int speed = 50;

CvCapture* capture;

void arThread()
{
    // CvCapture* capture = cvCaptureFromCAM(1);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 80);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 60);

    IplImage *image = 0;
    IplImage *frame = 0;
    IplImage *disp, *neg_img, *cpy_img;
    int key = 0;
    int fcount = 0;
    int option = 0;

    IplImage *pic = cvLoadImage("C:/Users/Maureen/Pictures/robotics.jpg");
    if(!pic) {
        cerr << "..." << endl;                    // error message lost in source
        return;
    }

    CvPoint2D32f q[4], p[4];                      // declarations restored
    CvMat* warp_matrix = cvCreateMat(3, 3, CV_32FC1);

    while(key != 27) {                            // loop structure restored
        // ... (frame grab and chessboard corner detection elided in source) ...
        if( /* chessboard found */ fcount ) {
            // Set of source points: the four corners of the overlay picture
            q[0].x = (float) pic->width * 0;
            q[0].y = (float) pic->height * 0;
            q[1].x = (float) pic->width;
            q[1].y = (float) pic->height * 0;
            q[2].x = (float) pic->width;
            q[2].y = (float) pic->height;
            q[3].x = (float) pic->width * 0;
            q[3].y = (float) pic->height;

            // Set of destination points to calculate Perspective matrix
            p[0].x = corners[0].x;
            p[0].y = corners[0].y;
            p[1].x = corners[b_size.width - 1].x;
            p[1].y = corners[b_size.width - 1].y;
            p[2].x = corners[b_size.width*(b_size.height - 1) + b_size.width - 1].x;
            p[2].y = corners[b_size.width*(b_size.height - 1) + b_size.width - 1].y;
            p[3].x = corners[b_size.width*(b_size.height - 1)].x;
            p[3].y = corners[b_size.width*(b_size.height - 1)].y;

            // Calculate Perspective matrix
            cvGetPerspectiveTransform(q, p, warp_matrix);

            // Boolean juggle to obtain 2D-Augmentation
            cvZero(neg_img);
            cvZero(cpy_img);
            cvWarpPerspective(pic, neg_img, warp_matrix);
            cvWarpPerspective(blank, cpy_img, warp_matrix);
            cvNot(cpy_img, cpy_img);
            cvAnd(cpy_img, image, cpy_img);
            cvOr(cpy_img, neg_img, image);
            cvShowImage("Video", image);
        } else {
            cvShowImage("Video", image);
        }

        key = cvWaitKey(5);
    }
}

int main(int argc, char *argv[])
{
    capture = cvCaptureFromCAM(0);
    cv::VideoCapture capture(1);      // note: this local shadows the global 'capture'
    boost::thread ar(arThread);
    cv::namedWindow("Steering");
    cv::Size window;
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 80);   //80
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 60);  //60
    window.width = capture.get(CV_CAP_PROP_FRAME_WIDTH);
    window.height = capture.get(CV_CAP_PROP_FRAME_HEIGHT);

    srand((unsigned) time(NULL));     // was srand(NULL); seed with the time instead
    cv::Mat mat;
    cv::Size pat(7, 7);
    vector<cv::Point2f> points;       // template argument restored
    float angle;
    float speed;

    capture >> mat;                   // frame grab restored
    if(cv::findChessboardCorners(mat, pat, points)) {
        cv::Point2f avgw(0, 0); // Two floats interpreted as an x and y in the image

        // The outliers are the positions where we have to check for the black dot;
        // the black dot is used as a reference to determine the absolute
        // orientation of the chessboard.

        // Find the VERTICAL outliers
        vector<cv::Point2f> avgs;
        for(int i = 0; i ...          // remainder of the listing truncated in source
