Dipartimento di Elettronica Informazione e Bioingegneria

Cognitive Robotics
Planning agents: from situation calculus to STRIPS
© G. Gini 2015

3 approaches to cognition

• Symbol-based models
  - Logic-based (for action planning: STRIPS, SAT, …)
  - Geometry-based (for path planning)
• Probabilistic models
  - for localization, mapping, path planning
• Neural models
  - for sensory-motor control

Symbolic models

• Knowledge-based models are well exemplified by the AI symbolic approach
• AI applies to rational agents
  - it applies to robots too, for planning

PLANNING
• Planning differs between discrete and continuous domains:
  - action planning is a core AI problem
  - path planning is a geometric and optimization problem
  - trajectory planning is a control problem

Early models of intelligence

• Perceive-think-act model of intelligence (Kenneth Craik, 1943, The Nature of Explanation)

  Perceive → Think → Act

• Craik laid the foundation for the concept of mental models (the mind forms models of reality and uses them to predict similar future events)
• This model was very influential in early AI
• No absolute magnitudes are stored, only changes and orderings

AI and agents

• Thinking humanly: cognitive modelling
• Acting humanly: Turing test
• Thinking rationally: laws of thought
• Acting rationally: rational agent

Acting humanly: Turing Test

• Turing (1950)
• "Can machines think?" → "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game

Thinking humanly: cognitive modeling

• 1960s: psychology
• Requires scientific theories of the internal activities of the brain
• How to validate the theory?
  1) Predicting and testing the behavior of human subjects (top-down)
  2) Direct identification from neurological data (bottom-up)
• Both approaches (Cognitive Science and Cognitive Neuroscience) are now distinct from AI

Thinking rationally: "laws of thought"

• Aristotle: what are correct arguments/thought processes?
• Logic notation
• Direct line: mathematics and philosophy -> AI
• Problems:
  1. Not all intelligent behavior is mediated by logical deliberation
  2. What is the purpose of thinking? What thoughts should I have?

Acting rationally: rational agent

• Rational behavior: doing the right thing
  - The right thing: that which is expected to maximize goal achievement, given the available information
• Doesn't necessarily involve thinking, but thinking should be in the service of rational action

Agents and environments

• An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
• The agent function maps percept histories to actions: [f: P* → A]
• The agent program runs on the physical architecture to produce f

  agent = architecture + program
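As an illustration of the agent function f: P* → A (the class and the lookup table below are hypothetical, not from the slides), the simplest agent program keeps the percept history and looks the next action up:

```python
# A minimal, hypothetical sketch: the agent program approximates
# f: P* -> A by storing the percept history and consulting a table.
class TableDrivenAgent:
    def __init__(self, table):
        self.table = table      # maps percept-history tuples to actions
        self.percepts = []      # the history P* seen so far

    def program(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "noop")

agent = TableDrivenAgent({("dirty",): "suck", ("dirty", "clean"): "right"})
print(agent.program("dirty"))   # suck
print(agent.program("clean"))   # right
```

The table grows with the length of the history, which is why real agent programs compute f rather than tabulate it.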

Rational agents

• An agent should "do the right thing", based on what it can perceive and the actions it can perform
• The right action is the one that will cause the agent to be most successful
  - Rationality is distinct from omniscience
  - Agents can perform actions in order to modify future percepts so as to obtain useful information
  - An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt)
• Performance measure: an objective criterion for the success of an agent's behavior

Environment types

• Fully observable (vs. partially observable): the agent's sensors give it access to the complete state of the environment at each point in time
• Deterministic (vs. stochastic): the next state of the environment is completely determined by the current state and the action executed by the agent (if the environment is deterministic except for the actions of other agents, the environment is strategic)
• Episodic (vs. sequential): the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself

Agent types

• Four basic types, in order of increasing generality:
  1. Simple reflex agents
  2. Model-based reflex agents
  3. Goal-based agents
  4. Utility-based agents

1. Simple reflex agents

2. Model-based reflex agents

3. Goal-based agents

4. Utility-based agents

planning

  "We deliberate not about ends, but about means. … We assume the end and think about by what means we can attain it. If it can be produced by several means, we consider which one of them would be best … [and then] we consider by which means that one can be achieved, until we come to the first cause (which we will discover last)."
  Aristotle, Nicomachean Ethics (Book III.3, 1112b)

• This appears to be a description of what today we call "top-down search"

planning

  "Planning can be interpreted as a kind of problem solving, where an agent uses its beliefs about available actions and their consequences, in order to identify a solution over an abstract set of possible plans"
  Russell and Norvig, Artificial Intelligence: A Modern Approach

AI and planning

• Action representation and planning algorithms
  - First-order logic: situation calculus
  - plus extra structures: STRIPS
  - Non-linearity of plans: POP, Graphplan
  - Propositional logic planning: SAT
  - Planning languages: STRIPS, ADL and PDDL

AI Planning

[Diagram: the agent/World loop, with its design dimensions:]
• Percepts: perfect vs. noisy; fully observable vs. partially observable
• Actions: sole source of change vs. other sources; deterministic vs. stochastic; instantaneous vs. durative

Classical Planning Assumptions

[Diagram: the same agent/World loop under the classical assumptions:]
• Percepts: perfect; fully observable
• Actions: sole source of change; deterministic; instantaneous

Blocks world

The blocks world is a micro-world:
• a table
• a set of blocks (same size)
• a robot hand

Some domain constraints:
• Only one block can be directly on another block
• Any number of blocks can be on the table
• The hand can hold only one block

Blocks world representation

PREDICATES
• ON(A,B)
• ONTABLE(A)
• CLEAR(A)
• HOLDING(A)
• ARMEMPTY

ACTIONS
• UNSTACK(A,B)
• STACK(A,B)
• PICKUP(A)
• PUTDOWN(A)

[Figure: block C on block A; block B on the table]
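As a sketch of this representation (the encoding choices below are mine, not part of any standard), a state can be a Python set of ground atoms written as tuples, with an atom false exactly when it is absent:

```python
# A minimal sketch (not from the slides): a blocks-world state as a set
# of ground atoms, each written as a tuple like ("ON", "C", "A").
def initial_state():
    """The pictured configuration: C on A; A and B on the table."""
    return {
        ("ON", "C", "A"),
        ("ONTABLE", "A"),
        ("ONTABLE", "B"),
        ("CLEAR", "C"),
        ("CLEAR", "B"),
        ("ARMEMPTY",),
    }

def holds(state, *atom):
    """An atom is false unless it is listed in the state."""
    return atom in state

s = initial_state()
print(holds(s, "ON", "C", "A"))   # True
print(holds(s, "CLEAR", "A"))     # False: C is on A
```

The "false unless listed" reading of `holds` anticipates the closed-world assumption discussed later for STRIPS.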

State space for blocks

[Figure: the graph of blocks-world states connected by actions]

Situation calculus

• Intuition: represent the planning problem using first-order logic
• Situation calculus lets us reason about changes in the world:
  - Have an initial situation
  - Have a final wanted situation
  - Use theorem proving to "prove" that a particular sequence of actions, when applied to the initial situation, leads to the desired result

Situation Calculus - Overview

Situation Calculus (McCarthy and Hayes, 1969): a predicate calculus (FOL) language formalizing situations (states), actions, and their effects.
• Actions denote changes of the world; they are referred to by a name and a parameter list
• Situations refer to worlds; they can be used to represent a (possible) world history for a given sequence of actions
• The special function do expresses that an action is applied in a situation
• The effect (what changes) and the frame (what remains) of an action are specified through axioms

Situation Calculus - Overview

• Planning in situation calculus involves theorem proving, to infer a goal situation from the initial situation:
  - the actions involved in the proof
  - the bindings of their parameters
  together represent the plan.

Situations

• A situation corresponds to a world (state).
• Situations are denoted by FOL terms, e.g. s, s'
• Actions transform situations: applying an action in a situation s yields a situation s'.

Example (block A on B, B on the floor Fl):
  s0 = {on(A,B), on(B,Fl), clear(A), clear(Fl)}
  on(A,B,s0), on(B,Fl,s0), clear(A,s0), clear(Fl,s0)

Action: move(A, B, Fl)
  s1 = {on(A,Fl), on(B,Fl), clear(A), clear(B), clear(Fl)}
  on(A,Fl,s1), on(B,Fl,s1), clear(A,s1), clear(B,s1), clear(Fl,s1)

goal

• A goal is described by a sentence (wff), e.g. "a block on B": (∃x) On(x, B)
• Planning: finding a sequence of actions that, applied to the initial state S0, achieves the goal wff

Actions

• Denote an action by a symbol
• move(x, y, z) is an action schema
• Actions are terms built from function symbols
  - Example: move(B, A, Fl): move block B from block A to the floor Fl
• do: a function that maps an action and a state into a state
  - do(α, σ) → σ'   (α an action, σ a state)

Actions - effects

Effects of actions:
• Positive: describes when the action makes a fluent true
• Negative: describes when the action makes a fluent false

Positive effect axiom:
  [On(x, y, s) ∧ Clear(x, s) ∧ Clear(z, s) ∧ (x ≠ z)] → On(x, z, do(move(x, y, z), s))

Negative effect axiom:
  [On(x, y, s) ∧ Clear(x, s) ∧ Clear(z, s) ∧ (x ≠ z)] → ¬On(x, y, do(move(x, y, z), s))

• Antecedent: precondition for the action
• Consequent: how the fluent is changed
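The two axioms can be read operationally. Below is a minimal sketch (naming is mine; only the On effects above are modelled, not the Clear fluents) that applies move(x, y, z) to a state represented as a set of fluents:

```python
# A sketch of the positive and negative effect axioms for move(x, y, z),
# over states represented as sets of tuples like ("on", x, y).
def do_move(state, x, y, z):
    """Apply move(x, y, z) if its precondition holds, else return None."""
    pre = {("on", x, y), ("clear", x), ("clear", z)}
    if x == z or not pre <= state:
        return None                 # antecedent (precondition) fails
    new = set(state)                # everything else carries over
    new.add(("on", x, z))           # positive effect: On(x, z, do(...))
    new.discard(("on", x, y))       # negative effect: ¬On(x, y, do(...))
    return new

s0 = {("on", "A", "B"), ("on", "B", "Fl"),
      ("clear", "A"), ("clear", "Fl")}
s1 = do_move(s0, "A", "B", "Fl")
print(("on", "A", "Fl") in s1)      # True
print(("on", "A", "B") in s1)       # False
```

Note that copying the whole set forward silently carries over every untouched fluent; that is exactly the frame information which situation calculus must instead state axiom by axiom, as the next slides show.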

Actions

• The preconditions are satisfied with the substitution B/x, A/y, Fl/z, S0/s
• What was true in S0 remains true — but this must be stated explicitly

Frame Axioms

• Not everything that stays true can be inferred: On(C,Fl) remains true, but it cannot be inferred from the effect axioms
• Actions have local effects
  - We need a frame axiom for each action and each fluent that does not change as a result of the action
  - Example 1: frame axioms for (move, On): if a block is on another block and the move does not involve it, it stays there

Positive:
  [On(x, y, s) ∧ (x ≠ u)] → On(x, y, do(move(u, v, z), s))

Negative:
  [¬On(x, y, s) ∧ ((x ≠ u) ∨ (y ≠ z))] → ¬On(x, y, do(move(u, v, z), s))

Frame Axioms

• Example 2: frame axioms for (move, Clear):

  Clear(u, s) ∧ (u ≠ z) → Clear(u, do(move(x, y, z), s))
  ¬Clear(u, s) ∧ (u ≠ y) → ¬Clear(u, do(move(x, y, z), s))

• The frame problem: we need axioms for every pair {action, fluent}!
• There are languages that embed assumptions from which frame axioms can be derived automatically, as in non-monotonic reasoning

Other problems

• The qualification problem: qualifying the antecedent for all possible exceptions requires enumerating all the exceptions
  - ~heavy and ~glued and ~armbroken → can-move
• The ramification problem: actions have indirect effects
  - If a robot carries a package, the package will be where the robot is
  - But which frame axioms cover this? When can we infer the indirect effects of an action, and when can we not?

Generating plans in situation calculus

• To generate a plan to achieve a goal, prove a theorem
• Example: get block B on the floor
• Prove ∃s. On(B, Fl, s) by resolution refutation: negate it to ∀s. ¬On(B, Fl, s) and derive a contradiction

Blocks world

A successor-state axiom for Clear:

Clear(X, do(A,S)) ↔
    [Clear(X,S) ∧ ¬(A=Stack(Y,X) ∧ holding(Y,S))
               ∧ ¬(A=Pickup(X) ∧ handempty(S) ∧ ontable(X,S) ∧ clear(X,S))]
  ∨ [A=Stack(X,Y) ∧ holding(X,S) ∧ clear(Y,S)]
  ∨ [A=Unstack(Y,X) ∧ on(Y,X,S) ∧ clear(Y,S) ∧ handempty(S)]
  ∨ [A=Putdown(X) ∧ holding(X,S)]

Meaning: a block is clear if (a) in the previous state it was clear and we did not successfully pick it up or stack something on it, or (b) we stacked it on something else successfully, or (c) something that was on it was unstacked successfully, or (d) we were holding it and we put it down.

Analysis

• Fine in theory, but the problem solving (search) is exponential in the worst case
• Resolution theorem proving only finds a proof (plan), not necessarily a good plan
• IDEA: restrict the language and use a special-purpose algorithm (a planner) rather than a general theorem prover

…for real robots

• By the 1960s we had:
  - Simple vision systems
  - Simple theorem provers (using Robinson's resolution principle)
  - Simple path planning methods

• Perceive-think-act IDEA: put them all together in a robot

  SHAKEY Project (SRI International)

Shakey: key ingredients

• world model based on first-order predicate logic
• geometric planning to avoid obstacles
• simple error recovery
• major error recovery done by updating the world model

architecture

sensors → PERCEPTION → MODELLING → PLANNING → EXECUTION → CONTROL → actuators

The functions are activated in sequence.

Shakey the robot

• 1970: Shakey

  "Built at Stanford Research Institute, Shakey was remote controlled by a large computer. It hosted a clever reasoning program fed with very selective spatial data, derived from weak edge-based processing of camera and laser range measurements. On a very good day it could formulate and execute, over a period of hours, plans involving moving from place to place and pushing blocks to achieve a goal."
  - Hans Moravec

Shakey: world model

• logical representations:

  type(r1,room)  in(shakey,r1)  in(o1,r2)
  type(d1,door)  type(o1,object)  type(f3,face)
  type(shakey)  at(o1,15.1,21.0)
  joinsfaces(d2,f3,f4)  joinsrooms(d2,r3,r2)  …

[Figure: a map with rooms r1, r2, r3, doors d1, d2, wall faces f1–f4, object o1, and Shakey in r1]

STRIPS - operators

• specialised representations (extra-logical structures) to be faster
• actions represented using STRIPS operators:

  block_door(D,Y)
    preconditions: in(shakey,X) & in(Y,X) & clear(D) & door(D) & object(Y)
    delete list:   clear(D)
    add list:      blocked(D,Y)
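A STRIPS operator is just a named triple of literal sets. Below is a minimal Python sketch (the class and the ground instance with d1, o1, r1 are mine, chosen only for illustration):

```python
from dataclasses import dataclass

# A sketch of a STRIPS operator record; field names are my own.
@dataclass(frozen=True)
class StripsOp:
    name: str
    precond: frozenset   # conjunction of positive literals
    delete: frozenset    # literals removed from the state
    add: frozenset       # literals added to the state

    def applicable(self, state):
        return self.precond <= state

    def apply(self, state):
        assert self.applicable(state), "preconditions not satisfied"
        return (set(state) - self.delete) | self.add

# Hypothetical ground instance of block_door(D,Y):
block_door = StripsOp(
    name="block_door(d1,o1)",
    precond=frozenset({"in(shakey,r1)", "in(o1,r1)", "clear(d1)",
                       "door(d1)", "object(o1)"}),
    delete=frozenset({"clear(d1)"}),
    add=frozenset({"blocked(d1,o1)"}),
)

s = set(block_door.precond)          # a state where the preconditions hold
s2 = block_door.apply(s)
print("blocked(d1,o1)" in s2, "clear(d1)" in s2)  # True False
```

The set difference followed by the union is the whole of STRIPS's state update; everything not touched by the delete and add lists persists by construction.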

A problem in STRIPS

• Initial state = conjunction of positive literals
• Goal state = conjunction of positive literals, e.g. (on B C) (on A B)
• Action schema:
  - preconditions = conjunction of positive literals
  - post-conditions = conjunction of positive and negative literals
• No state variable
• Only one world model

STRIPS Representation

The initial state is represented by a set of positive facts; a STRIPS operator can be viewed as specifying an update to this set.

Closed world assumption

[Figure: block C on block A; block B on the table]

  (ontable A) (on C A) (ontable B) (CLEAR C) (CLEAR B)

(not (on A C)) and (not (CLEAR A)) are not listed, but are assumed to hold: whatever is not stated is false.

ACTIONS for blocks

• STACK(x y)
  P: CLEAR(y) ∧ HOLDING(x)
  D: CLEAR(y) ∧ HOLDING(x)
  A: ARMEMPTY ∧ ON(x, y)
• UNSTACK(x y)
  P: ON(x, y) ∧ CLEAR(x) ∧ ARMEMPTY
  D: ON(x, y) ∧ ARMEMPTY
  A: HOLDING(x) ∧ CLEAR(y)
• PICKUP(x)
  P: CLEAR(x) ∧ ONTABLE(x) ∧ ARMEMPTY
  D: ONTABLE(x) ∧ ARMEMPTY
  A: HOLDING(x)
• PUTDOWN(x)
  P: HOLDING(x)
  D: HOLDING(x)
  A: ONTABLE(x) ∧ ARMEMPTY
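The P/D/A triples above can be exercised directly. A small sketch (helper names are mine) that picks up block a and stacks it on b, applying each operator as (state − D) ∪ A when P holds:

```python
# Sketch: two of the four blocks operators as (P, D, A) triples over
# ground literals; constants a, b are illustrative.
def op(p, d, a):
    return {"P": set(p), "D": set(d), "A": set(a)}

def pickup(x):
    return op({f"clear({x})", f"ontable({x})", "armempty"},
              {f"ontable({x})", "armempty"},
              {f"holding({x})"})

def stack(x, y):
    return op({f"clear({y})", f"holding({x})"},
              {f"clear({y})", f"holding({x})"},
              {"armempty", f"on({x},{y})"})

def apply(state, o):
    assert o["P"] <= state, "preconditions not satisfied"
    return (state - o["D"]) | o["A"]

s = {"clear(a)", "ontable(a)", "clear(b)", "ontable(b)", "armempty"}
s = apply(s, pickup("a"))
s = apply(s, stack("a", "b"))
print("on(a,b)" in s, "armempty" in s)  # True True
```

After the two steps, clear(b) has been deleted (a now sits on b) while clear(a) persists untouched, exactly as the D lists dictate.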

STRIPS Representation

• Example:
  - Given the initial state, all instantiations of the parameter x that satisfy the precondition
      (and (handempty) (clear x) (ontable x))
    produce a different action (transition) that can be applied to the initial state.
  - Actions whose preconditions are not satisfied are not legal transitions.

STRIPS Representation

Initial state:
  handempty, clear(A), ontable(A), clear(B), ontable(B), clear(C), ontable(C)

Each applicable instantiation yields a successor (delete ontable(x) and handempty, add holding(x)):
  pickup(A) → clear(A), holding(A), clear(B), ontable(B), clear(C), ontable(C)
  pickup(B) → clear(A), ontable(A), clear(B), holding(B), clear(C), ontable(C)
  pickup(C) → clear(A), ontable(A), clear(B), ontable(B), clear(C), holding(C)

STRIPS Representation

• The properties of the initial state and of the operators imply that, from a finite collection of operators, it is possible to determine the finite collection of actions applicable to the initial state.
  - In each successor state generated by these actions, we can once again evaluate all the logical formulas, and thus once again determine the set of all applicable actions.

Planning

• goal regression:
  - find an action that directly achieves your goal, then actions to achieve that action's preconditions, etc.
  - e.g. goal blocked(d1,X), with operator:

    block_door(D,Y)
      preconditions: in(shakey,X) & in(Y,X) & clear(D) & door(D) & object(Y)
      delete list:   clear(D)
      add list:      blocked(D,Y)

[Figure: the Shakey map with rooms r1–r3, doors d1, d2, and object o1]

Plan generation

• Graph search that integrates forward and backward search:
  - Find the set of differences (compare initial and final state: list the literals of the final state not in the initial state)
  - Select the first difference
  - Find an operator that reduces it
  - If the operator cannot be applied in the current state, add its preconditions to the list of differences
    - Subproblem: find a plan to reach a state where the operator can be applied
    - Subproblem: find a plan to go from the post-conditions of the operator to the final state

Means-ends(current, goal)

MEANS-ENDS(CURRENT, GOAL)
  Compare CURRENT and GOAL
  IF no differences THEN terminate
  ELSE select the major difference
    REPEAT
      select an operator O not yet processed and applicable
      IF no such O THEN fail
      ELSE apply O: compute O-START (a state where the preconditions of O
           are satisfied) and O-RESULT (the state after applying O to O-START)
        IF (FIRSTpart ← MEANS-ENDS(CURRENT, O-START)) and
           (LASTpart ← MEANS-ENDS(O-RESULT, GOAL)) succeed
        THEN return success and the concatenation FIRSTpart, O, LASTpart
    UNTIL success or fail

Support for STRIPS

• learn macro-operators: substitute constants with variables
• Triangular table: represents a plan of N operators
  - Column 0 holds the initial state
  - Column j has on top the jth operator of the plan
  - Row 1 has the first operator of the plan, row N the Nth operator

Triangular table

• elements e(i, j), for j > 0, i < N+1, are the assertions added by the jth operator and used in the preconditions of the ith operator
• elements e(i, 0), for i < N+1, are the assertions of the initial state used by the ith operator
• the (N+1)th row is the final state
• the row to the left of the ith operator: its preconditions
• the column below the ith operator: its add-list

nucleus

• ith nucleus = the intersection of the rows from i down with the columns to the left of the ith operator; it contains a state description sufficient to apply the rest of the plan from the ith operator onward
• The 0th nucleus is the initial state
• The (N+1)th nucleus is the description of the final state

[Figure: a triangular table with the 4th nucleus highlighted]

example

• We execute the first 4 operators; then the robot has to pick up A, but instead it picks up B
• If a sensory system detects the state, HOLDING(B) is added and ON(B,C) is deleted
• As a consequence, the highest satisfied nucleus is the 4th
• The action to redo is STACK(B,C)

In conclusion STRIPS …

• Shakey could learn to chunk useful sequences of actions into single large actions (macro-operators)
• But STRIPS was slow and weak
• Finally, the Sussman anomaly: the pitfall of classical planning

Reading List for planning

PLANNING in general
• S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach (Second Edition), Prentice Hall, Chapter 11
• D. Weld, "Recent advances in AI planning", AI Magazine, Vol. 20, No. 2, Summer 1999, pp. 93-123

Graphplan
• A. Blum and M. Furst, "Fast planning through planning graph analysis", Artificial Intelligence, Vol. 90, pp. 281-300, 1997

SAT planners
• see Weld above, and:
• H. Kautz and B. Selman, "Planning as satisfiability", ECAI 1992
• H. Kautz, D. McAllester and B. Selman, "Encoding Plans in Propositional Logic", KR 1996
• http://www.cs.rochester.edu/~kautz/satplan/index.htm