Scripting Language for Multi-Level Control of Autonomous Agents in a Driving Simulator

J. Miguel Leitão 1,2, A. Augusto Sousa 2,3 and F. Nunes Ferreira 3

1 DEE.ISEP - Departamento de Engenharia Electrotécnica, Instituto Superior de Engenharia do Porto, Rua S. Tomé, 4200 Porto, Portugal. Tel: +351 936 6058921, Fax: +351 2 8321159, [email protected]

2 INESC Porto - Instituto de Engenharia de Sistemas e Computadores, Porto, Portugal

3 DEEC.FEUP - Departamento de Engenharia Electrotécnica de Computadores, Faculdade de Engenharia da Universidade do Porto

In: DSC’99 - Driving Simulation Conference 7-9 July, 1999 Paris, France

Abstract

This paper presents some of the developments we have made with the goal of allowing friendly control and simulation of a large number of behavior-based autonomous agents in interactive real-time systems. Our work has been especially oriented towards the simulation and control of autonomous vehicles and pedestrians in the preparation of scenarios for driving simulation experiments in the DriS simulator. Because every element is intrinsically autonomous, only a few of them usually need to be addressed to implement the desired study event. Our scripting language is based on Grafcet, a well-known graphical language used in the specification and programming of industrial controllers. Our technique allows the imposition of both short-term orders and long-term goals on each autonomous element. Orders can be triggered reactively using sensors that monitor the state of the virtual traffic and configurable timers that generate all the necessary fixed and variable time events.

Resumé

This paper presents the developments we have made with the goal of allowing friendly control and simulation, in interactive real-time systems, of a large number of behavior-based autonomous agents. Our work is especially oriented towards the simulation and control of autonomous vehicles and pedestrians in the preparation of scenarios for driving simulation experiments. Since each element is intrinsically autonomous, only a few of them need to be addressed to implement the situation one wants to study. Our scripting language is based on Grafcet, a well-known graphical language used in the specification and programming of industrial controllers. Our technique makes it possible both to give immediate orders and to assign long-term goals to each autonomous element. Orders can be triggered reactively using sensors that monitor the state of the virtual traffic and timers that produce all the required fixed-time and variable-time events.

1. Introduction

Most driving simulation experiments in which the main subject of study is human behavior require both realistic scenarios and precisely controlled events. The preparation of such environments is one of the most time and effort consuming tasks in the scientific use of driving simulators. The possibility of including autonomous elements that react to each other using behavior can dramatically simplify the creation of realistic scenarios, but it is usually not enough, because strictly autonomous elements do not help the construction of precisely controlled events. In order to make the goal situations happen, some externally controlled elements must be used.

The addition of behavior-based objects has two main advantages but also two great difficulties. With behavior-based elements, it is easier to prepare scenarios, especially if they have a large number of moving objects. Besides, a simulation environment with behavior-based objects tends to be more realistic and believable. But, due to their autonomy, these behavior-based elements, when available, are usually difficult to control. The development and control of behavior-based virtual elements is still restricted to programmers. Despite all the recent work on realistic autonomous agents in interactive systems, there are no interactive, friendly or easy-to-use behavior editors that can be used to develop reactive objects.

This paper presents some of the efforts we made with the goal of allowing friendly control of a large number of autonomous agents in an interactive real-time system. Our work has been especially oriented towards the control of autonomous vehicles and pedestrians in the preparation of scenarios for driving simulation experiments in DriS – Driving Simulator [12][18][19]. We present a scripting technique that allows friendly control of the autonomous elements that compose the scenario. Because every element is intrinsically autonomous, only a few of them usually need to be addressed to implement the desired study event. Our scripting language is based on Grafcet, a well-known graphical language used in the specification and programming of industrial controllers. Our technique allows imposing both short-term orders and long-term goals on each autonomous element. Orders can be triggered reactively using sensors that monitor the state of the virtual traffic and configurable timers that generate all the necessary fixed and variable time events.

Some related works are summarized in Section 2. Section 3 justifies the need for autonomous elements in driving simulators and their requirements, and describes the structure of our autonomous agents. Section 4 introduces Grafcet, a graphical language based on Petri nets used in the specification and programming of industrial controllers. Section 5 presents our approach for scripting and controlling autonomous elements in driving simulation experiments; a friendly graphical editor aimed at the preparation of driving simulation experiments is also introduced. Section 6 presents some example implementations of driving simulation experiments using the proposed scripting method.

2. Related Work

The presence of goal-driven, behaviour-based autonomous agents has already been identified as an important requirement for virtual reality systems [15]. Many works try to reproduce believable human figures [8][5][7][16], animals [6] or generic legged creatures [9][13]. Several of these works focus only on a few levels of a behaviour-based autonomous agent, such as animation or path planning, but others propose global architectures of generic multi-level autonomous agents [2][3][4]. Driving simulators are real-time systems that greatly need believable autonomous elements, usually in large numbers [1]. Currently, several works can be found that aim to help the preparation of complex and realistic scenarios. Some of them are focused on improving the autonomy and realism of autonomously guided virtual vehicles and pedestrians [10]; others propose scripting languages or techniques to help directing and controlling autonomous elements [17][25][26]. The development of autonomous agents intended to operate in the real world, especially automatically guided traffic vehicles [11][14], is also related to this work.

3. Autonomous Agents

In order to be convincing, a driving simulator must give the driver the sensation of immersion in a typical public road environment. These typical environments are composed of a large number of different vehicles, each one with its own unique behaviour and all of them interacting and competing. The preparation of driving simulation scenarios within such an environment is one of the most time and effort consuming tasks in the scientific use of driving simulators.

The possibility of including autonomous elements that react to each other using their internally defined behavior can dramatically simplify the creation of realistic scenarios. But this is usually not enough, because strictly autonomous elements do not help the construction of precisely controlled events. In order to make goal situations happen, some externally controlled elements must be used. These controlled elements must be guided by a program or script, consisting of a command sequence. The preparation of a driving simulation scenario can then be described as the creation of these command lists, which are used to control only a small set of all the animated elements in the environment.

In order to simplify the scenario preparation task, it is important that these commands can correspond to orders of diverse abstraction levels. This allows a potential reduction in the number of commands that must be specified in order to force the desired situation to happen. Depending on the experiment specification, such an order can be an alteration of destination, behavior, acceleration or speed, or a manipulation of the autonomous vehicle's instruments.

3.1. Architecture

Many driving simulators implement their animated elements with two different models: a behavior-based model applied to the strictly autonomous vehicles that simulate the ambient traffic, and a script-based model used to drive preprogrammed vehicles. As explained in [20], our approach intends to satisfy both requirements with the same model. Our model can implement strictly autonomous vehicles, completely controlled elements, or mixed solutions with externally imposed medium-term goals and short-term autonomy.

Like many others, our autonomous agents are conceived using several independent layers. This allows long-term goals and short-term tasks to be treated independently. It also simplifies the imposition of external orders, which can likewise correspond to different levels. Figure 1 shows the hierarchical structure of layers used in the implementation of our autonomous agents.

The strategic layer deals with decisions that lead to reaching long-term goals. When the vehicle is requested to go to a final destination, the strategic layer must plan the path of the trip and periodically communicate to the tactical layer the direction to take at each crossing. The tactical layer deals with the decisions that are necessary to achieve the medium-term goals suggested by the strategic layer. It must take decisions such as whether or not to overtake the vehicle ahead and when to change to another lane. These decisions are sent to the operational layer.

Figure 1. Behaviour model for an autonomous vehicle.

The operational layer takes the immediate measures necessary to achieve the short-term goals requested by the tactical layer. It decides which pedal to push, how to turn the steering wheel, etc.

The bottom layer corresponds to the dynamic behavior of the automobile. The three upper layers are related to the behavior of the driver.
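As a hedged illustration of this layered organization, the sketch below shows how the layers could be chained in code: each layer refines the goal received from the layer above and passes a shorter-term goal down, until the operational layer produces pedal and steering commands for the vehicle dynamics. The class names, goal types and thresholds are our own simplifications, not the actual DriS implementation.

// Illustrative layered driver model (simplified; not the actual DriS code).
#include <iostream>

struct VehicleState { double position, speed; };
struct Controls     { double throttle, brake, steering; };

// Strategic layer: long-term goal -> direction to take at the next crossing.
struct StrategicLayer {
    int destination = 0;                                   // assumed road-network node id
    int directionAtNextCrossing(const VehicleState&) const { return 1; }
};

// Tactical layer: medium-term decisions (overtake, change lane) -> target speed.
struct TacticalLayer {
    double targetSpeed(const VehicleState& s, int /*direction*/) const {
        return s.speed < 25.0 ? 25.0 : s.speed;            // keep about 25 m/s
    }
};

// Operational layer: short-term goals -> immediate pedal and wheel commands.
struct OperationalLayer {
    Controls act(const VehicleState& s, double targetSpeed) const {
        Controls c{0.0, 0.0, 0.0};
        if (s.speed < targetSpeed) c.throttle = 0.5; else c.brake = 0.3;
        return c;
    }
};

int main() {
    VehicleState state{0.0, 20.0};
    StrategicLayer strategic; TacticalLayer tactical; OperationalLayer operational;

    int direction = strategic.directionAtNextCrossing(state);
    double goal   = tactical.targetSpeed(state, direction);
    Controls cmd  = operational.act(state, goal);
    std::cout << "throttle=" << cmd.throttle << " brake=" << cmd.brake << "\n";
}

In the real agent each layer keeps its own state and may run at its own rate; the sketch only shows the direction of the information flow between layers.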

4. Grafcet Language

Grafcet is a graphical language based on Petri nets that is used in the specification and programming of industrial controllers [22][23][21][24]. It allows the working cycle of automatic elements to be described using a set of:
§ steps, to which some actions may be associated;
§ transitions;
§ oriented connections that link two or more steps.

4.1. Steps and Actions

A step identifies a single situation in which all or part of the system remains stable. In Grafcet, a step is represented by a rectangle with an identifier (usually a number) inside (figure 2). Each step may be active or inactive. At each moment, the state of the automatic system can be completely described by the set of active steps.

Figure 2. A single Grafcet step

Actions that must be taken by the controller system when a step is active can be described using text or symbolic language inside a rectangle drawn at the side of the step (figure 3).
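In code, a step and its associated actions can be captured by a structure such as the following minimal sketch (our own representation, written for illustration; it is not the data format used by the editor described in Section 5):

// Minimal sketch of a Grafcet step with attached actions (illustrative only).
#include <iostream>
#include <string>
#include <vector>

struct GrafcetStep {
    int id;                              // step identifier, e.g. 7
    bool active;                         // the set of active steps describes the state
    std::vector<std::string> actions;    // actions executed while the step is active
};

int main() {
    GrafcetStep step7{7, true, {"L1 On", "M2 Go", "P5 Stop"}};
    if (step7.active)
        for (const auto& a : step7.actions)
            std::cout << "execute: " << a << "\n";
}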

Figure 3. Actions related to a step (step 7 with actions L1 On, M2 Go, P5 Stop)

Grafcet also allows the specification of initially active steps, using double lines (figure 4a). Several initially active steps are allowed in a single Grafcet diagram. It also allows the specification of macro steps, which are represented using double vertical lines and may be described in a separate diagram (figure 4b).

Figure 4. (a) Initial active step; (b) Macro step

4.2. Transition Conditions

A transition refers to a possibility of evolution among steps. A condition controls the moment at which each transition may be made. The condition is usually written as a boolean function of the sensor and input states, next to a small thick line (figure 5).

Figure 5. Condition for the transition
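A transition can likewise be represented by a source step, a destination step and a condition evaluated over the sensor and input states, as in this illustrative sketch (the names and types are assumptions, not part of Grafcet itself):

// Sketch of a Grafcet transition whose condition is a boolean function of inputs.
#include <functional>
#include <iostream>

struct Inputs { bool S, B2; };            // assumed sensor/input states

struct Transition {
    int from, to;
    std::function<bool(const Inputs&)> condition;    // e.g. S OR B2
};

int main() {
    Transition t{7, 8, [](const Inputs& in) { return in.S || in.B2; }};
    Inputs now{false, true};
    if (t.condition(now))
        std::cout << "fire transition " << t.from << " -> " << t.to << "\n";
}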

4.3. Connections

Connections among steps are oriented and irreversible. They can have several topologies:
§ Sequential
§ OR Separations
§ AND Separations
§ OR Junctions
§ AND Junctions

When a sequential connection is used (figure 5), the transition is made if the preceding step (step 7) is active and the related condition (S+B2) evaluates to true. In this case, step 7 is deactivated and step 8 is activated.

OR Separation

An OR Separation can be represented by a horizontal line to which a single connection arrives and from which two or more connections leave (figure 6). In such a separation, the transition is only made when the preceding step (7) is active and at least one condition is true. The transition is made to only one step, depending on the condition that first becomes true.

Figure 6. OR Separation

OR Junction

An OR Junction can be represented by a horizontal line to which several connections arrive and from which a single connection leaves (figure 7). In such a junction, the transition is made when any preceding step (8 or 9) is active and the respective condition is true.

Figure 7. OR Junction

AND Separation

An AND Separation can be represented by a horizontal double line to which a single connection arrives and from which two or more connections leave (figure 8). In such a separation, the transition is only made when the preceding step (7) is active and the condition is true. The transition is made simultaneously to all destination steps (8 and 9).

Figure 8. AND Separation

AND Junction

An AND Junction can be represented by a horizontal double line to which several connections arrive and from which a single connection leaves (figure 9). In such a junction, the transition is only made when all preceding steps (8 and 9) are active and the respective condition (B) is true.

Figure 9. AND Junction
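All of these topologies reduce to one evolution rule: a transition fires when all of its source steps are active and its condition is true; it then deactivates its sources and activates all of its destinations. An OR separation is simply several transitions leaving the same step, and an AND junction is a single transition with several sources. The sketch below illustrates that rule with our own simplified data structures; it evaluates transitions sequentially, which is a simplification of strict Grafcet semantics, where all enabled transitions fire simultaneously.

// Minimal Grafcet evolution rule (illustrative). AND junctions/separations are
// transitions with several sources/destinations; OR topologies are several
// independent transitions sharing a step.
#include <functional>
#include <iostream>
#include <set>
#include <vector>

struct Transition {
    std::vector<int> sources;         // steps that must all be active
    std::vector<int> destinations;    // steps activated when the transition fires
    std::function<bool()> condition;
};

void evolve(std::set<int>& active, const std::vector<Transition>& transitions) {
    for (const auto& t : transitions) {
        bool enabled = t.condition();
        for (int s : t.sources) enabled = enabled && active.count(s) > 0;
        if (!enabled) continue;
        for (int s : t.sources)      active.erase(s);     // deactivate sources
        for (int d : t.destinations) active.insert(d);    // activate destinations
    }
}

int main() {
    bool B = true;                                        // sensor value
    std::set<int> active = {8, 9};                        // both branches finished
    std::vector<Transition> ts = {
        {{8, 9}, {10}, [&] { return B; }}                 // the AND junction of figure 9
    };
    evolve(active, ts);
    for (int s : active) std::cout << "active step: " << s << "\n";   // prints 10
}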

5. Scripting Autonomous Agents

A generic control system can be described as a state machine that uses the information provided by its inputs to perform the required state transitions and to evaluate the output information sent to the controlled system. Usually, the input information is received from sensors implemented in hardware. For the control of autonomous agents in driving simulators, we propose a similar state machine, specified in the Grafcet graphical language, but built over a set of "software" sensors that are available for the experiment preparation. This set includes several simple absolute position detectors that allow sensing the longitudinal and lateral positions on the road. Relative positions can be treated by calculating the difference between two vehicles' positions. The same method can be applied to velocities and accelerations. Table 1 presents some of the implemented sensors.

Table 1. Some implemented sensors

Sensor        Description
Pos(A)        Absolute position of vehicle A
LPos(A)       Longitudinal position of vehicle A
OPos(A)       Offset (lateral) position of vehicle A
Dist(A,B)     |Pos(A) - Pos(B)|
LDist(A,B)    LPos(A) - LPos(B)
ODist(A,B)    OPos(A) - OPos(B)
Speed(A)      Velocity of vehicle A
Wheel(A)      Position of A's steering wheel
Break(A)      Position of A's brake pedal
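Each of these sensors can be thought of as a plain function computed from the simulated vehicle states. The following sketch gives one possible reading of a few of them; the field names and the planar distance formula are assumptions for illustration, not the DriS API.

// Illustrative "software sensors" computed from simulated vehicle states.
#include <cmath>
#include <iostream>

struct Vehicle {
    double lpos;    // longitudinal position along the road (m)
    double opos;    // lateral offset from the road reference (m)
    double speed;   // velocity (m/s)
};

double LPos(const Vehicle& a)                    { return a.lpos; }
double OPos(const Vehicle& a)                    { return a.opos; }
double Speed(const Vehicle& a)                   { return a.speed; }
double LDist(const Vehicle& a, const Vehicle& b) { return LPos(a) - LPos(b); }
double ODist(const Vehicle& a, const Vehicle& b) { return OPos(a) - OPos(b); }
double Dist(const Vehicle& a, const Vehicle& b)  {
    return std::hypot(LDist(a, b), ODist(a, b)); // |Pos(A)-Pos(B)| in the road plane
}

int main() {
    Vehicle A{120.0, -1.5, 30.0}, B{118.5, 0.0, 22.0};
    std::cout << "LDist(A,B) = " << LDist(A, B) << " m, "
              << "Speed(A) = "   << Speed(A)    << " m/s\n";
}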

These sensors are used to control the evolution among steps, according to the specified Grafcet diagram. Depending on the set of active steps, orders are issued in order to perform the desired actions. These orders are assigned by imposing a new value on an internal parameter. Because the autonomous driver has a layered architecture, and most of its internal variables may be used as controlled parameters, orders may also be chosen from several levels of commands. It is possible to affect directly the position or the velocity of a vehicle, to change the engine's parameters or the driver's behavior, or to emulate driving actions. These are some of the parameters that may be affected:
§ Goal velocity
§ Destination
§ Maximum acceleration
§ Position of the steering wheel
§ Position of pedals
§ Power of engine
§ Velocity
§ Acceleration
§ Position and orientation

To simplify the experiment preparation task, we developed a small Grafcet editor that helps the creation of generic Grafcet schemas (figure 10). This editor was made with MS-Visual C++ and runs in an MS-Windows environment. It provides all the classical editing functions, such as save, load, new and print. Dedicated Grafcet tools are accessible through a toolbox on the left. We also developed the tools required to import the design of one or more graphs into the simulator.

Figure 10. Grafcet Editor
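As a hypothetical illustration of the order mechanism described above (the parameter names and types are ours, not the actual simulator code), the actions attached to an active step can be read as assignments that overwrite internal parameters of the addressed autonomous driver:

// Hypothetical link between step actions and agent parameters: while a step is
// active, its actions impose new values on internal parameters of the driver.
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct AutonomousDriver {
    // Internal parameters of the layered driver, addressable by name (assumed).
    std::map<std::string, double> params{
        {"goal_velocity", 25.0}, {"max_acceleration", 2.0}, {"steering_wheel", 0.0}};
};

struct StepAction {
    std::string parameter;
    double value;
};

void applyActions(AutonomousDriver& driver, const std::vector<StepAction>& actions) {
    for (const auto& a : actions) driver.params[a.parameter] = a.value;
}

int main() {
    AutonomousDriver vehicleB;
    // Example actions of a "Turn Left" step: raise the goal velocity and steer left.
    applyActions(vehicleB, {{"goal_velocity", 30.0}, {"steering_wheel", -0.1}});
    std::cout << "goal_velocity = " << vehicleB.params["goal_velocity"] << "\n";
}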

6. Examples

A typical driving simulation experiment is presented in figure 11 and can be described as follows.

Figure 11. Concurrent overtaking experiment

The goal situation we want to study is the reaction of a driver performing an overtaking maneuver when the vehicle being overtaken starts a concurrent overtaking maneuver. This can be exemplified on a 2+2 lane road, with three vehicles running in the right lane. When the last vehicle (A) is overtaking the middle vehicle (B), the latter may also start a maneuver to overtake the leading vehicle (C). In this case, the driver of the last vehicle (A) must quickly decide what to do. The available options can be summarized as:
§ Speeding up, in order to complete the overtaking maneuver already started.
§ Turning left, in order to avoid vehicle B.
§ Slowing down, aborting the intended maneuver.

This decision is the main subject of this study. The above situation can be implemented on a simulated straight road. The first two vehicles are autonomous vehicles and the human driver under test drives the last one. Although both the B and C vehicles can be considered autonomous, only C can be simulated as strictly autonomous. B must be ordered to stay behind C until vehicle A starts the overtaking maneuver, and to start another overtaking maneuver at precisely the selected moment. C may simply be ordered to run at constant velocity. Vehicle B can be controlled by the script presented in figure 12.

Figure 12. Script to control the concurrent overtaking experiment (step 1: Follow C; the condition "A overtaking B" activates step 2: Turn Left; the condition "A overtaked B" activates step 5: Normal)

At the beginning of the experiment only step 1 is active, allowing the B and C vehicles to evolve autonomously, based only on their internal behaviour. When vehicle A gets close to vehicle B, intending to overtake it, the transition condition becomes true and step 2 is activated. Here, an order is produced to make vehicle B start a concurrent overtaking manoeuvre. The condition A overtaking B may be implemented using a combination of any of the available sensor functions (table 1). Possible implementations are:
§ A overtaking B = Dist(A,B) < 2 m
§ A overtaking B = LDist(A,B) > -2 m AND Speed(A) > Speed(B) + 10 m/s
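Using the sensor functions of table 1, the second variant above can be encoded as an ordinary boolean function. The sketch below is self-contained for illustration; the vehicle fields and thresholds are assumptions, not the actual DriS sensor implementation.

// Possible encoding of the "A overtaking B" transition condition (illustrative).
#include <iostream>

struct Vehicle { double lpos, speed; };   // longitudinal position (m), speed (m/s)

double LDist(const Vehicle& a, const Vehicle& b) { return a.lpos - b.lpos; }
double Speed(const Vehicle& a)                   { return a.speed; }

bool AOvertakingB(const Vehicle& A, const Vehicle& B) {
    return LDist(A, B) > -2.0 && Speed(A) > Speed(B) + 10.0;
}

int main() {
    Vehicle A{98.5, 35.0}, B{100.0, 22.0};
    std::cout << std::boolalpha << AOvertakingB(A, B) << "\n";   // true: A is closing in
}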

A more complex experiment can be implemented using the same specification on a 3-lane road, with all cars running in the middle lane (figure 13).

Figure 13. Concurrent overtaking experiment on a 3-lane road.

In this case, the subject's vehicle (A) may decide to overtake the leading vehicles through the right lane or through the left lane. Vehicle B must then be ordered to turn to the same side, as presented in the script of figure 14.

Figure 14. Script to control the concurrent overtaking experiment with 3 lanes (step 1: Follow C; the condition LDist(A,B) > -2 m activates step 2: Follow C; from step 2, ODist(A,B) < -1 m activates step 3: Turn Left and ODist(A,B) > 1 m activates step 4: Turn Right; from either branch, LDist(A,B) > 2 m activates step 5: Normal)

When step 2 is active, the system waits until one of the two transitions becomes true. If vehicle A decides to overtake B using the left lane, the condition ODist(A,B) < -1 m evaluates to true and step 3 will be activated. Otherwise, the condition ODist(A,B) > 1 m evaluates to true and step 4 will be activated.
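For illustration, the OR separation leaving step 2 can be encoded as two guarded transitions sharing the same source step. The following self-contained sketch (the types, field names and sign convention are assumptions) selects step 3 or step 4 depending on the side chosen by vehicle A:

// Illustrative encoding of the choice made in the figure 14 script.
#include <functional>
#include <iostream>
#include <vector>

struct Vehicle { double opos; };          // lateral offset (m); negative assumed to the left

double ODist(const Vehicle& a, const Vehicle& b) { return a.opos - b.opos; }

struct Transition { int from, to; std::function<bool()> guard; };

int main() {
    Vehicle A{-1.8}, B{0.0};              // A has moved to the left of B
    int activeStep = 2;

    std::vector<Transition> script = {
        {2, 3, [&] { return ODist(A, B) < -1.0; }},   // A on the left  -> step 3 (Turn Left)
        {2, 4, [&] { return ODist(A, B) >  1.0; }},   // A on the right -> step 4 (Turn Right)
    };
    for (const auto& t : script)
        if (activeStep == t.from && t.guard()) { activeStep = t.to; break; }

    std::cout << "active step: " << activeStep << "\n";   // prints 3
}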

7. Conclusions and Future Work

Most driving simulation experiments in which the main subject of study is human behavior require both realistic scenarios and precisely controlled events. The preparation of such environments is one of the most time and effort consuming tasks in the scientific use of driving simulators. The possibility of including autonomous elements that react to each other using behavior can dramatically simplify the creation of realistic scenarios, but it is usually not enough, because strictly autonomous elements do not help in the construction of precisely controlled events. In order to make goal situations happen, some externally controlled elements must be used.

This paper presented some of the work we have done to allow friendly control of a large number of autonomous agents in an interactive real-time system. Our work has been especially oriented towards the control of autonomous vehicles and pedestrians in the preparation of scenarios for driving simulation experiments in DriS – Driving Simulator. We presented a scripting technique that allows friendly control of the autonomous elements that compose the scenario. Because every element is intrinsically autonomous, only a few of them usually need to be addressed to implement the desired study event. Our scripting language is based on Grafcet, a well-known graphical language used in the specification and programming of industrial controllers. Our technique allows imposing both short-term orders and long-term goals on each autonomous element. Orders can be triggered reactively using sensors that monitor the state of the virtual traffic and configurable timers that generate all the necessary fixed and variable time events.

Future work will be focused on the addition of online casting and on the development of debugging tools providing online visual feedback on the evolution of the Grafcet diagram.

8. References

[1] Salvador Bayarri; Fernandez M.; Martinez M.; Virtual Reality for Driving Simulation; Communications of the ACM, May 1996
[2] Bruce M. Blumberg; Tinsley A. Galyean; Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments; ACM Computer Graphics (Siggraph'95 Proceedings), 30(3), pp. 97-101, 1995
[3] Ken Perlin; Athomas Goldberg; Improv: A System for Scripting Interactive Actors in Virtual Worlds; Media Research Laboratory, Department of Computer Science, New York University
[4] P. Maes; T. Darrell; B. Blumberg; The Alive System: Full Body Interaction with Autonomous Agents; Proceedings of Computer Animation'95 Conference, Switzerland, April 1995, IEEE Press, pp. 11-18
[5] Armin Bruderlin; Lance Williams; Motion Signal Processing; ACM Computer Graphics (Siggraph'95 Proceedings), 30(3), pp. 97-101, 1995
[6] Xiaoyuan Tu; Demetri Terzopoulos; Artificial Fishes: Physics, Locomotion, Perception, Behavior; ACM Computer Graphics (Siggraph'94 Proceedings), pp. 43-50, 1994
[7] J. Hodgins; W. Wooten; D. Brogan; A. O'Brien; Animating Human Athletics; ACM Computer Graphics (Siggraph'95 Proceedings), 30(3), pp. 71-78, 1995
[8] Armin Bruderlin; Thomas W. Calvert; Dynamic Animation of Human Walking; ACM Computer Graphics (Siggraph'89 Proceedings), 23(3), pp. 233-242, 1989
[9] Michael McKenna; David Zeltzer; Dynamic Simulation of Autonomous Legged Locomotion; ACM Computer Graphics (Siggraph'90 Proceedings), 24(4), pp. 29-38, 1990
[10] James Cremer; Joseph Kearney; Peter Willemsen; Directable Behavior Models for Virtual Driving Scenarios; Transactions of the Society of Computer Simulation International, 14(2), pp. 87-96, 1997
[11] Rahul Sukthankar; Situation Awareness for Tactical Driving; Ph.D. Thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, 1997
[12] J. Miguel Leitão; A. Coelho; F. N. Ferreira; DriS – A Virtual Driving Simulator; Proceedings of the Second International Seminar on Human Factors in Road Traffic, ISBN 972-8098-25-1, Braga, Portugal, 1997
[13] M. Girard; A. Maciejewski; Computational Modeling for the Computer Animation of Legged Figures; Computer Graphics (Siggraph'85 Proceedings), 20(3), pp. 263-270, 1985
[14] Dieter Koller; Tuan Luong; Jitendra Malik; Binocular Stereopsis and Lane Marker Flow for Vehicle Navigation: Lateral and Longitudinal Control; Report No. UCB/CSD 94-804, University of California, CS Division, Berkeley, 1994
[15] Michael J. Zyda; D. R. Pratt; J. S. Falby; C. Lombardo; K. Kelleher; The Software Required for the Computer Generation of Virtual Environments; Presence, Vol. 2, No. 2, 1994
[16] Srikanth Bandi; Daniel Thalmann; Space Discretization for Efficient Human Navigation; Eurographics'98, Lisbon, Portugal, September 1998
[17] Olivier Alloyer; Esmail Bonakdarian; James Cremer; Joseph Kearney; Peter Willemsen; Embedding Scenarios in Ambient Traffic; Proceedings of DSC'97 (Driving Simulation Conference), Lyon, France, September 1997
[18] J. Miguel Leitão; A. Augusto Sousa; Carlos Rodrigues; Jorge A. Santos; F. Nunes Ferreira; A. H. Pires da Costa; Realização de Experiências num Simulador de Condução; Psicologia del Tráfico y la Seguridad Vial / II Congresso Iberoamericano de Psicología, Madrid, July 1998
[19] Paulo Noriega; Jorge Santos; Carlos Rodrigues; Pedro Albuquerque; Vehicle's Motion Detection: Interaction with Three Kinds of Road Pavement; Proceedings of the Second International Seminar on Human Factors in Road Traffic, ISBN 972-8098-25-1, Braga, Portugal, 1997
[20] J. Miguel Leitão; F. Nunes Ferreira; Agentes Autónomos em Ambientes Artificiais; Proceedings of the 8º Encontro Português de Computação Gráfica, Coimbra, Portugal, 1998
[21] André Simon; Automates Programmables - Programmation, Automatisme & Logique Programmée; Éditions L'ÉLAN
[22] M. Blanchard; Comprendre, maîtriser et appliquer le GRAFCET; Cépadues Éditions
[23] F. Degoulange; R. Lamaitre; D. Perrin; AUTOMATISMES - Grafcet, Composants, Fonctions Logiques, Schémas; Dunod
[24] Sylvain Thelliez; Jean Marc Toulotte; Applications Industrielles du GRAFCET; Eyrolles
[25] Yiannis Papelis; Graphical Authoring of Complex Scenarios Using High Level Coordinators; Workshop on Scenario and Traffic Generation for Driving Simulation, Orlando, 1996
[26] P. van Wolffelaar; W. van Winsum; Traffic Modelling and Driving Simulation - An Integrated Approach; Proceedings of DSC'95 (Driving Simulation Conference), France, September 1995
