AI, Vision and Robotics

AI, Vision and Robotics Jana Kosecka, ST II 417 [email protected] , 3-1876

Knowledge representation – how to represent objects, humans, environments; symbol grounding problem
Computer Vision – study of perception: recognition, vision and motion, segmentation and grouping, representation
Natural Language Processing – provides better interfaces; symbol grounding problem
Planning and Decision Making – how to make optimal decisions and actions given the current knowledge of the state and the currently available actions

Imaging the brain

• 100 billion neurons, each connected on average to about 1,000 others • Can be classified into: Sensory, Motor, Central

Sensory Subsystems • Vision (nearly 30-50% ) • Audition (nearly 10%) • Somatic

Motor subsystems • Locomotion • Manipulation • Speech

• Chemical –Taste –Olfaction

Reasoning and Problem Solving Systems

Some history of robotics
Robot – 1921, Karel Capek's play R.U.R. – human-like machines in the play; an automatically operated device that replaces human effort
Machines with human-like appearance and capabilities – film and literature heroes: Metropolis (1926), Forbidden Planet (1956), Frankenstein; science-fiction movies – Blade Runner, Star Trek, Star Wars
Desire to create machines with human-like behavior
Industrial Robotics – need to create machines to replace humans in tedious tasks or in dangerous or hardly accessible environments

Robots in manufacturing and material handling
Manhattan project (1942) – handling and processing of radioactive materials by telemanipulation
– storage, transport, delivery – table-top tasks, material sorting, part feeding (conveyor belt)
– microelectronics, packaging
– harbor transportation
– construction (automatic cranes)
Suitable for hard repetitive tasks – heavy handling or fine positioning
Successful in restricted environments where limited sensing is sufficient
AGVs – automated guided vehicles – pick-and-delivery tasks, navigation and manipulation
AUVs – automated unmanned vehicles

Intelligent Robot
A mechanical creature capable of functioning autonomously.
Three basic functional primitives of the robot: SENSE, PLAN, ACT
1. SENSE takes information from the robot's sensors and produces output used by the other functionalities
2. PLAN takes the processed sensory information and produces commands/directives
3. ACT takes sensory information or commands/directives and produces actuator commands
Different organizations of these functionalities give rise to different robot architectures.
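The sense → plan → act cycle above can be sketched as a minimal control loop. The `Robot` class and its one-dimensional task are hypothetical illustrations, not part of any particular robot framework:

```python
# Minimal sense-plan-act loop (illustrative sketch; the Robot class
# and its methods are hypothetical, not from a specific library).

class Robot:
    def __init__(self):
        self.goal = 10.0     # target position on a 1-D track
        self.position = 0.0

    def sense(self):
        # SENSE: read sensors, return a processed observation
        return {"position": self.position, "goal": self.goal}

    def plan(self, observation):
        # PLAN: turn the observation into a directive
        error = observation["goal"] - observation["position"]
        return {"velocity": max(min(error, 1.0), -1.0)}  # clamp the speed

    def act(self, directive, dt=1.0):
        # ACT: convert the directive into an actuator command
        self.position += directive["velocity"] * dt

robot = Robot()
for _ in range(20):                      # run the control cycle
    robot.act(robot.plan(robot.sense()))
print(round(robot.position, 2))  # → 10.0
```

Rearranging which of these three stages dominates (and whether PLAN is skipped) is exactly what distinguishes the deliberative, reactive, and hybrid architectures discussed later.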

Space Robotics
1950s US space program: exploration of planets, collecting samples
Astronauts' bulky space suits make such tasks difficult

NASA, JPL, DARPA – sponsoring agencies Space programs, military application – surveillance, assistance Planetary Rovers – initially controlled by humans - large time delays, - poor communication connections Need for (semi) – autonomy

Teleoperation

A human operator controls the robot
Local site – the human views the sensory data and sends the commands
Remote site – sensors acquire the information

Problems – cognitive fatigue, simulator sickness
Telepresence – enables the operator to experience the remote reality
– multiple sensors: visual, force feedback
– towards Virtual Reality

Entertainment
Real and animated creatures – robotic modelling and control techniques used in films, computer games, animation
Toy Industry – Furbys, Aibos: interactive animal-like and human-like creatures

Robotic Surgery
Mobile Robots – couriers in buildings and hospitals, vacuum cleaners
– security applications, pick and delivery (warehouses)
– navigation tasks
– exploration tasks: Antarctica exploration, Mars, volcanos

Variety of domains and tasks
• Manufacturing
• Medicine (da Vinci)
• Household robots
• Space robots
• Search and rescue tasks
• Educational robots – office delivery agents
• Automotive industry

• Types of robots
– mobile robots
– manipulators
– unmanned land/air vehicles
– underwater vehicles
– planetary rovers

Games and Entertainment

Furbies

Aibos – Latte & Macaron

Aibo soccer league - RoboCup

Humanoid Robots

Rhino – first museum tour-guide robot, University of Bonn ('96)

MIT Cog Project

What makes a robot?
• Sensors
• Actuators-Effectors
• Locomotion System
• Computer system – Architectures

State – a description of the system that changes over time
External state – state of the environment: temperature, sunny, presence of obstacles, people in the room

Internal state – state of the robot: position, orientation, force, battery charge; happy, sad, hungry; state can be stored

Sensors
– active sensors: sonar, laser range finder
– passive sensors: cameras
– tactile sensors
– GPS, Differential GPS

Sensors are necessary to sense the state – the state can be determined by measuring some physical quantity: voltage, current, distance, …
Sensors which measure properties of the environment – sonars, cameras
Sensors which measure the state of the robot – inertial sensors, odometry, acceleration
Active/passive sensors – either send some energy or modify the environment in order to sense, or make passive observations (listen)

Sensing is crucial for robots – sensing is the hardest part; sensing capabilities determine the complexity of the tasks and how well we can do them.

Proprioceptive sensors – measure the robot's own state: shaft encoders, inertial sensors
Odometry – measurement of the distance travelled
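Odometry from shaft encoders can be sketched for a differential-drive robot as follows; the encoder resolution, wheel radius, and wheel base are assumed illustrative values:

```python
# Differential-drive odometry sketch: integrate wheel-encoder ticks
# into a pose (x, y, theta). Parameter values are illustrative.

import math

TICKS_PER_REV = 1000          # encoder ticks per wheel revolution (assumed)
WHEEL_RADIUS = 0.05           # metres (assumed)
WHEEL_BASE = 0.30             # distance between the wheels, metres (assumed)

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the pose by one encoder reading (dead reckoning)."""
    per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * per_tick      # distance travelled by the left wheel
    d_right = right_ticks * per_tick    # distance travelled by the right wheel
    d_center = (d_left + d_right) / 2   # distance of the robot centre
    d_theta = (d_right - d_left) / WHEEL_BASE
    x += d_center * math.cos(theta + d_theta / 2)
    y += d_center * math.sin(theta + d_theta / 2)
    return x, y, theta + d_theta

# Drive straight: both wheels advance equally.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = update_pose(*pose, left_ticks=100, right_ticks=100)
print(tuple(round(v, 3) for v in pose))  # → (0.314, 0.0, 0.0)
```

Because pure dead reckoning integrates measurement errors, odometry drifts over time and is usually fused with the environment-measuring sensors listed above.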

Effectors

System View

A robot can change its state and the state of the world by means of effectors.
Actuators for locomotion
Actuators for manipulation
Joints – revolute joints, prismatic joints

[System-view block diagram: input → environment → output]

Effectors convert software commands into physical actions (hydraulic, electric, pneumatic)
– domain of mechanical engineering – new actuator designs (weight, flexibility)
In an abstract sense they define Degrees of Freedom (DOF):
UAV – 6 DOF (x, y, z, roll, pitch, yaw)
Mobile robot – 3 DOF (x, y, theta)
Distinction between effective DOF and controllable DOF (e.g. a car)
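The mobile-robot case (3 effective DOF but only two controllable inputs, forward speed and turn rate) can be illustrated with the standard unicycle kinematic model; the values below are illustrative:

```python
# Unicycle model for a 3-DOF mobile robot pose (x, y, theta).
# Only two inputs are controllable (speed v and turn rate omega),
# yet the robot reaches all three pose DOF over time.

import math

def step(x, y, theta, v, omega, dt):
    """Integrate the unicycle kinematics for one time step."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Constant speed and turn rate trace an arc; after 1 s the heading
# has turned by a quarter turn.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = step(x, y, theta, v=1.0, omega=math.pi / 2, dt=0.01)
print(round(theta, 3))  # → 1.571
```

The same gap between effective and controllable DOF is what makes parallel parking a car non-trivial: sideways motion exists as an effective DOF but cannot be commanded directly.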

Modeling Dynamical Systems

Behavior – (Input, Output): possibly infinite signals (f: Domain → Range)

Continuous time-invariant dynamical systems

In general: Time, State, Inputs, Outputs
Input (control) function

State transition function

Output function

Discrete time-invariant dynamical systems
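The state transition and output functions named above can be written, for the continuous and discrete time-invariant cases respectively, in the standard state-space form (a textbook sketch; x, u, y denote state, input, and output):

```latex
% Continuous time-invariant dynamical system
\dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad y(t) = h\bigl(x(t)\bigr)

% Discrete time-invariant dynamical system
x_{k+1} = f\bigl(x_k, u_k\bigr), \qquad y_k = h\bigl(x_k\bigr)
```

Time-invariance means f and h do not depend explicitly on t (or k): the same input applied from the same state always produces the same evolution.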

Behavior examples

Elementary building blocks - Behaviors Hardware – sensors, actuators Software – computation

Motivation
Valentino Braitenberg: Vehicles – vehicles with different personalities
W. Grey Walter: Tortoise – analog implementation, one sensor per effector → light-seeking behavior

Representation of behaviors
• Functional representation r = b(s), robot schemas
• Lookup table
• Stimulus/response diagrams
• Discrete and/or continuous representations (differential equations or if-then rules → wall-following example)

Sensing/control elementary building blocks
What are behaviors:
1. Behaviors are feedback controllers
2. Behaviors are executed in parallel
3. Behaviors achieve specific goals (avoid-obstacles, go-to-goal)
4. Behaviors can be combined into more complex networks (the outputs of one behavior become the inputs of another)
5. Behaviors can be designed to look ahead, build and maintain a representation of the world
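Points 1–4 can be sketched as code: each behavior is a small feedback law mapping a stimulus to a response, and a combiner runs them in parallel. All function names and gains below are illustrative, not from a real behavior framework:

```python
# Behaviors as feedback controllers (sketch). Each behavior maps a
# stimulus (sensor reading) to a response (turn-rate command); a
# simple combiner sums the weighted responses of the parallel behaviors.

import math

def go_to_goal(pose, goal, gain=1.0):
    """Steer toward the goal: response proportional to heading error."""
    x, y, theta = pose
    desired = math.atan2(goal[1] - y, goal[0] - x)
    # wrap the error into (-pi, pi]
    error = math.atan2(math.sin(desired - theta), math.cos(desired - theta))
    return gain * error

def avoid_obstacle(obstacle_bearing, obstacle_range, gain=2.0):
    """Turn away from a nearby obstacle; inactive when it is far away."""
    if obstacle_range > 1.0:
        return 0.0
    return -gain * obstacle_bearing / obstacle_range

def combine(pose, goal, obstacle_bearing, obstacle_range):
    # Weighted sum of the parallel behaviors' responses.
    return (0.5 * go_to_goal(pose, goal)
            + 0.5 * avoid_obstacle(obstacle_bearing, obstacle_range))

# Obstacle far away: only go-to-goal contributes.
turn = combine((0.0, 0.0, 0.0), (1.0, 1.0),
               obstacle_bearing=0.3, obstacle_range=5.0)
print(round(turn, 3))  # → 0.393 (half of the 45-degree heading error)
```

Weighted summation is only one combination scheme; subsumption-style architectures instead let higher-priority behaviors suppress lower-priority ones.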

Vision based automated steering • Model based vision - highway • automated steering

Vision for car following • Motion and stereo cues • tracking vehicle ahead • throttle and brake control

Vision for lateral control

Vision in the lateral control loop

[Block diagram: vision in the lateral control loop – the vision system measures the offset y_L and the angle at the look-ahead distance L; road curvature enters as a disturbance with reference R_ref; controller, vehicle, and vision system close the loop]

Controller
• control and measurements at the look-ahead
• presence of the delay in vision processing
• performance specification – tracking error, maximum error, passenger comfort
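These three concerns can be sketched together: proportional control on the look-ahead offset, with the vision-processing delay modelled as a fixed-length buffer. The gain and delay values are assumed for illustration:

```python
# Lane-keeping controller sketch: proportional feedback on the offset
# measured at the look-ahead, with the vision latency modelled as a
# short buffer of stale measurements. Values are illustrative.

from collections import deque

K_OFFSET = 0.8          # steering gain on the look-ahead offset (assumed)
DELAY_STEPS = 3         # vision latency in control cycles (assumed)

def make_controller():
    buffer = deque([0.0] * DELAY_STEPS, maxlen=DELAY_STEPS)

    def control(measured_offset):
        """Return a steering command from a delayed offset measurement."""
        buffer.append(measured_offset)
        delayed = buffer[0]            # the controller sees stale data
        return -K_OFFSET * delayed     # steer against the offset
    return control

control = make_controller()
commands = [control(offset) for offset in (0.5, 0.5, 0.5, 0.5)]
print(commands)  # the first DELAY_STEPS - 1 commands still see zero offset
```

The delay is why the look-ahead measurement matters: reacting to the lane position ahead of the vehicle partially compensates for acting on stale data, and the tolerable gain shrinks as the delay grows.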

Car following

Scenario: vision sensing, obstacle detection
Vision sensing pipeline: 1. affine reconstruction 2. motion computation 3. triangulation → angle and distance to the car ahead (subject to processing delay)

[Block diagram: lane detection → steering actuator (lane following, lane change left/right); car detection via 3D affine reconstruction of the planar scene → distance → throttle actuator (velocity tracking, car-ahead following)]

NAHSC DEMO’97 (1200 rides)

Helicopter landing – UAV’s

Obstacle Detection – stereo and projection; representation of the free space

Navigation Experiment Control issues


Rate: 10 Hz; Accuracy: 5 cm, 4°

Individual behaviors

• Landing
• Steering
• Obstacle avoidance
• Relative positioning

[Diagram: stimulus → behavior → response, acting through the robot dynamics]

[Figure: planar vehicle model with longitudinal and lateral velocities v_x, v_y and front/rear tire forces F_f, F_r applied at distances l_f, l_r from the centre of mass]

Dynamic model of the vehicle (16 DOF); possible to decouple the lateral and longitudinal dynamics
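The decoupled lateral dynamics are commonly written as the linear "bicycle" model, a textbook sketch using the symbols from the vehicle figure; the mass m, yaw inertia I_z, steering angle δ, and cornering stiffnesses C_f, C_r are assumed parameters:

```latex
% Lateral dynamics: states v_y (lateral velocity) and r (yaw rate)
m(\dot{v}_y + v_x r) = F_f + F_r, \qquad
I_z \dot{r} = l_f F_f - l_r F_r
% with linear tire forces at small slip angles
F_f = C_f\Bigl(\delta - \frac{v_y + l_f r}{v_x}\Bigr), \qquad
F_r = -C_r\,\frac{v_y - l_r r}{v_x}
```

At a fixed longitudinal speed v_x these equations are linear in (v_y, r), which is what makes classical controller design for lane keeping tractable.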


Vision and Control

Different Architectures
Architecture determines how the robot behaviors are structured. Historically:
1. Deliberative (look ahead: think/reason, plan, act)
2. Reactive (no look-ahead: react)
3. Behavior-based (distribute thinking over acting)
4. Hybrid architecture (slow thinking level, fast reaction level)

Vision Based Control
Image Based Techniques
– Automated Landing
– Driving Applications
System Issues:
– composition of the elementary behaviors
– control strategies
– system properties

[State-machine diagram: platooning maneuvers – Off → entry → Follower mode; join/split between Follower and Leader mode; lane change left / lane change right; exit → Off]

Deliberative Architecture

Sense – plan – act; sense – plan – model – act
[Block diagram: sensing → reasoning (model / knowledge base) → control, acting on the environment]
Shakey, 1969

Reactive Architecture
[Block diagram: sensing coupled directly to the actuators through the environment, no deliberation]
