Real Time Robot Navigation with a Smart Transducer Network

DIPLOMARBEIT

Real Time Robot Navigation with a Smart Transducer Network

ausgeführt am Institut für Technische Informatik der Technischen Universität Wien

unter der Anleitung von O.Univ.Prof. Dr. Hermann Kopetz und Univ.Ass. Dipl.-Ing. Wilfried Elmenreich als verantwortlich mitwirkendem Universitätsassistenten

durch

Lukas Schneider
Margaretenstrasse 39/14, A-1040 Wien
Matr.Nr. 9625739

Wien, im Mai 2001

Abstract

This thesis describes key issues in the design of an autonomous mobile robot. Different kinds of sensors and actuators are instrumented with low-cost microcontrollers to operate as smart transducer nodes. These nodes are responsible for observing the mobile robot's environment, computing a safe passage through this environment, and controlling the driving direction and speed. For communication among the nodes of the smart transducer network the time-triggered protocol TTP/A is used. This master-slave communication protocol, developed at the Vienna University of Technology, is well suited for this task since it is especially intended for the integration of smart transducers into distributed real-time control systems. It features an interface file system (IFS) that acts as the source and sink of the communicated data and also serves as an interface to the application.

The exploration of the mobile robot's surroundings and the computation of a safe passage are accomplished by implementing a three-level architectural model. A suite of infrared and ultrasonic sensors scans the mobile robot's environment at the node level. Utilizing TTP/A's interface file system, this real-time data is presented to the cluster level, where multi-sensor data fusion is applied. This produces an image of the mobile robot's environment which represents the interface to the application level. At this level a navigation algorithm enables the mobile robot to calculate a path through its surroundings. Through TTP/A's smart transducer interface these driving directions are given to the actuators at the node level, which finally allows the mobile robot to avoid obstacles in its way.

The navigation algorithm implemented in this thesis divides the mobile robot's environment into disjoint polar sectors. For each sector the polar obstacle density is calculated, and the sector with the lowest density is chosen as driving direction.


Zusammenfassung

Diese Diplomarbeit beschreibt die wichtigsten Aufgaben für die Entwicklung eines autonomen mobilen Roboters. Verschiedene Sensor- und Aktuatortypen arbeiten durch die Ansteuerung mit billigen Microcontrollern als Smart Transducer Knoten. Diese Knoten sind für die Beobachtung des Umfeldes des mobilen Roboters, die Berechnung einer sicheren Passage durch dieses Umfeld und die Kontrolle von Fahrtrichtung und -geschwindigkeit verantwortlich. Für die Kommunikation zwischen den Knoten des Smart Transducer Netzwerks wird das zeitgesteuerte Protokoll TTP/A verwendet. Dieses Protokoll basiert auf Master-Slave Kommunikation und wurde an der Technischen Universität Wien entwickelt. Es ist für diese Aufgabe besonders geeignet, da es speziell für die Integration von Smart Transducern in verteilte Echtzeit-Steuerungssysteme bestimmt ist. Es zeichnet sich unter anderem durch das sogenannte Interface File System (IFS) aus, das als Quelle und Senke der kommunizierten Daten dient, aber auch eine Schnittstelle zur Applikation darstellt.

Der mobile Roboter erkundet seine Umgebung und berechnet eine sichere Durchfahrt. Dies wird durch die Anwendung eines Architekturmodells erreicht, das aus drei Ebenen besteht. Infrarot- und Ultraschallsensoren tasten den Nahbereich des Roboters auf der untersten Ebene, dem Node Level, ab. Durch die Nutzung von TTP/A's Interface File System werden diese Echtzeit-Daten an die mittlere Ebene, den Cluster Level, weitergegeben, wo Multi-Sensor Data Fusion angewendet wird. Dies ergibt ein Abbild der Umgebung des mobilen Roboters, welches die Schnittstelle zur höchsten Ebene des Architekturmodells, dem Application Level, darstellt. Auf dieser Ebene ermöglicht die Anwendung eines speziellen Navigationsalgorithmus die Berechnung eines Pfades durch das Umfeld des Roboters. Durch die abermalige Nutzung von TTP/A's Interface File System werden die Fahrtanweisungen den Aktuatoren auf dem Node Level mitgeteilt. Dies erlaubt dem mobilen Roboter schließlich, Hindernissen auszuweichen.

Der spezielle Navigationsalgorithmus, der in dieser Diplomarbeit verwendet wird, unterteilt die Umgebung des Roboters in disjunkte Polarsektoren. Die Dichte von Hindernissen in jedem dieser Sektoren wird berechnet und der Sektor mit der geringsten Dichte wird als Fahrtrichtung gewählt.


Contents

1 Introduction
  1.1 Objectives
  1.2 Structure

2 Concepts and Related Work
  2.1 Real-Time Systems
    2.1.1 Terminology
    2.1.2 Functional Requirements
    2.1.3 Dependability Requirements
    2.1.4 Classification of Real-Time Systems
  2.2 The TTP/A Protocol
    2.2.1 Communication
    2.2.2 Data Transmission
    2.2.3 Types of Rounds
    2.2.4 The Interface File System
  2.3 Smart Transducers
    2.3.1 Benefits
    2.3.2 Requirements
    2.3.3 Types of Interfaces
  2.4 Sensor Fusion
    2.4.1 Benefits
    2.4.2 Sensor Selection
    2.4.3 Fusion Methods
    2.4.4 Applications
  2.5 Obstacle Avoidance Methods
    2.5.1 Edge-Detection Methods
    2.5.2 The Certainty Grid
    2.5.3 The Potential Field Method
    2.5.4 The Virtual Force Field (VFF) Method
    2.5.5 The Vector Field Histogram (VFH) Method

3 The Demonstrator
  3.1 Introduction
  3.2 Architectural Model
    3.2.1 Node Level
    3.2.2 Cluster Level
    3.2.3 Application Level
  3.3 The Transducers
    3.3.1 The Nodes
    3.3.2 The Transducer Hardware
    3.3.3 The Software
  3.4 Demonstrator Layout
    3.4.1 Power Supply
    3.4.2 Light Control System
  3.5 Node Implementation
    3.5.1 Master Implementation
    3.5.2 Slave Implementation
    3.5.3 Timing Issues
    3.5.4 TTP/A Round Layout
  3.6 Application
    3.6.1 Data Acquisition
    3.6.2 Data Fusion
    3.6.3 Navigation
  3.7 Monitoring

4 Evaluation
  4.1 Results
    4.1.1 Programming Software
    4.1.2 Using TTP/A
    4.1.3 Synchronization
    4.1.4 Separation of Protocol- and Application-Code
    4.1.5 Cluster Communication
    4.1.6 Navigation Algorithm
    4.1.7 Driving Speed
    4.1.8 Power Consumption
    4.1.9 Node Development
  4.2 Encountered Problems
    4.2.1 The Infrared Sensors
    4.2.2 Proper Grounding

5 Conclusion
  5.1 Summary
  5.2 Outlook


Chapter 1

Introduction

The era of robots has come. Robots are replacing humans in many aspects of life, first of all in environments which are hazardous for a human or where a human cannot be physically present. Such environments include mine fields, highly polluted and radioactive environments, sea bottoms, surfaces of other planets, and other environments in space missions. In order to operate successfully in an unknown environment, a robot may need to learn a model of the environment. Robots, however, are not humans, and they do not perceive the world as precisely as humans do. There are four main practical limitations on a robot's ability to acquire accurate models of the world. First, sensors are never perfect. They have limited resolution, they provide data corrupted by noise, and above all, they provide only a limited amount of data. More expensive sensors are able to register data more precisely than cheaper ones; nevertheless, there will always be an issue of time and cost involved in obtaining the data. Second, in most applications a robot has to operate in real time. That is, in addition to the hardware limitations imposed by the cost of the sensor, there is also a time constraint which does not allow the robot to obtain more, or more precise, data. Third, the environment around a robot is complex and dynamic. This can result in contradictory sensor readings. Finally, a robot's motion is inaccurate due to drift and slippage, which results in errors in the estimation of object locations. These odometric errors can, however, be reduced by limiting the mobility of the robot.

The major challenge of the research comes from the desire to build adequate world models from inaccurate and incomplete range data, where the adequacy of the model is judged by its suitability to a given task. Another aspect comes from the desire to split a complex system, such as the software of a robot with distributed sensors and actuators, into small comprehensible subsystems. The goal is to build a composable system, where various well-specified and tested subsystems are integrated into one large system while maintaining their functionality.

1.1 Objectives

The main objective of this thesis is to describe an approach to lessen or even overcome the limitations on a robot's ability to acquire an accurate model of the world as stated in the introduction. For this task a demonstrator for a maintainable real-time system based on the TTP/A protocol is developed. An autonomous mobile robot was chosen as demonstrator. The robot's task is to perceive its environment by means of distance sensors and to avoid obstacles in its way by applying a navigation algorithm to an image of the environment. By using a total of five distance sensors, the shortcomings of a single sensor, namely its limited spatial and temporal coverage and the uncertainty of its readings, are reduced. Multi-sensor data fusion is the key to creating a unified view of the mobile robot's environment.

The development of the demonstrator includes the design of TTP/A hardware and the implementation of the protocol as specified in [Kop01]. Designing the demonstrator's hardware consists mainly of selecting appropriate sensor and actuator elements and combining them with TTP/A nodes. These nodes have been designed especially for this project. Because of the limited space on the robot, size, weight and power consumption were of great importance. The implementation of the TTP/A protocol is twofold: on the one hand, the master controls the communication on the cluster; on the other hand, the slaves have to control the connected transducers and communicate with the other nodes on the cluster. These two tasks of the slaves are separated into protocol code and application code, which has the benefit that a verified protocol code can be reused for all slave nodes.

1.2 Structure

This thesis is structured as follows: Chapter 2 gives an introduction into basic concepts and related work. A short classification of real-time systems is followed by an introduction to TTP/A, a time-triggered master-slave communication protocol for fieldbus applications that uses a time division multiple access (TDMA) bus arbitration scheme. The next section explains the benefits of and requirements for smart transducers. A presentation of sensor fusion concepts follows, and an introduction to a representative selection of obstacle avoidance methods closes the chapter.


Chapter 3 presents the main task of the project: the demonstrator. An introduction to the mobile robot is followed by details about its architecture before the focus shifts to the design, construction, and implementation of the smart transducers. The next section presents implementation details on the TTP/A protocol in general and on the master and slave implementation in particular, and discusses timing issues and the final TTP/A round layout. Following an insight into the demonstrator's application, i.e. data acquisition, data fusion and navigation, a short overview of a monitoring tool concludes the chapter.

Chapter 4 presents the results of the work and discusses some problems encountered during the course of the project.

The conclusion in Chapter 5 summarizes the thesis and gives an outlook on possible future refinements of the sensor fusion algorithm.


Chapter 2

Concepts and Related Work

2.1 Real-Time Systems

A real-time (RT) computer system is a computer system in which the correctness of the system behaviour depends not only on the logical results of the computation, but also on the physical instant at which these results are produced [Kop97].

2.1.1 Terminology

A controlled object changes its state as a function of time. If time is stopped, the current state of the controlled object can be described by recording the values of its state variables at that moment. Normally not all state variables are of interest; only a subset of them is significant for the purpose at hand. A significant state variable is called a real-time (RT) entity. Every RT entity is in the sphere of control of a subsystem, i.e. it belongs to a subsystem that has the authority to change the value of this RT entity. Outside its sphere of control, the value of an RT entity can be observed, but cannot be modified. An observation of an RT entity is represented by a real-time (RT) image in the computer system. Since the state of the controlled object is a function of real time, a given RT image is only temporally accurate for a limited interval, the accuracy interval.
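To make these terms concrete, the following sketch shows one way an RT image could be represented in a node's memory, together with a check of its temporal accuracy. The structure and names are invented for illustration and are not part of the TTP/A specification.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical representation of an RT image of an RT entity. */
typedef struct {
    int16_t  value;         /* observed value of the RT entity         */
    uint32_t obs_time;      /* instant of observation (in clock ticks) */
    uint32_t acc_interval;  /* accuracy interval (in clock ticks)      */
} rt_image_t;

/* An RT image is temporally accurate only as long as its accuracy
   interval has not yet expired. */
static bool rt_image_is_accurate(const rt_image_t *img, uint32_t now)
{
    return (now - img->obs_time) <= img->acc_interval;
}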

2.1.2 Functional Requirements

The functional requirements of real-time systems are concerned with the functions that a real-time computer system must perform.

Data Collection The first functional requirement of a real-time computer system is the observation of the RT entities in a controlled object and the collection of these observations.


Direct Digital Control Many real-time computer systems must calculate the set points for the actuators and control the controlled object directly, i.e. without any underlying conventional control system.

Man-Machine Interaction A real-time computer system must inform the operator of the current state of the controlled object, and must assist the operator in controlling the machine or plant object, which is accomplished via the man-machine interface.

2.1.3 Dependability Requirements

The following measures of dependability attributes are of importance:

Reliability of a system is the probability that a system will provide the specified service until time t, given that the system was operational at t = t0.

Safety is reliability regarding critical failure modes. In a critical failure mode, the cost of a failure can be orders of magnitude higher than the utility of the system during normal operation.

Maintainability is a measure of the time required to repair a system after the occurrence of a non-critical failure.

Availability is a measure of the delivery of correct service with respect to the alternation of correct and incorrect service.

Security is concerned with the ability of a system to prevent unauthorized access to information or services.

2.1.4 Classification of Real-Time Systems

Real-time systems can be classified from different perspectives:

Hard Real-Time System vs. Soft Real-Time System A real-time system must react to stimuli from the controlled object within time intervals dictated by its environment. The instant at which a result must be produced is called a deadline. If a result has utility even after the deadline has passed, the deadline is classified as soft, otherwise it is firm. If a catastrophe could result if a firm deadline is missed, the deadline is called hard. A real-time computer system that must meet at least one hard deadline is called a hard real-time computer system. If no hard deadline exists, the system is called a soft real-time computer system.

Fail-Safe vs. Fail-Operational A real-time system is fail-safe when a safe state can be identified and quickly reached after the occurrence of a failure. A real-time system is fail-operational when a safe state cannot be reached immediately after the occurrence of a failure. In such a case the system must provide a minimal level of service to avoid a catastrophe.

Guaranteed Timeliness vs. Best Effort A real-time system is a guaranteed timeliness system if it is possible to reason about the adequacy of the design without reference to probabilistic arguments, provided the assumptions about the load and fault hypothesis hold. A real-time system is a best effort system if it is not possible to establish its temporal properties by analytical methods, even if the load and fault hypothesis holds.

Resource Adequate vs. Resource Inadequate A real-time system is resource adequate if there are enough computing resources available to handle the specified peak load and the faults specified in the fault hypothesis.

Event-Triggered vs. Time-Triggered A real-time system is event-triggered if all communication and processing activities are triggered by an event other than a clock tick. A real-time system is time-triggered if all communication and processing activities are initiated at predetermined points in time, at an a priori designated tick of a clock.

2.2 The TTP/A Protocol

TTP/A is a time-triggered master-slave communication protocol for fieldbus applications that uses a time division multiple access (TDMA) bus arbitration scheme [Kop01]. It is possible to address up to 255 nodes on a bus where one single node is the active master. This master provides the time base for the slave nodes.

2.2.1 Communication

In TTP/A the communication is organized into rounds. Between any two rounds there is an inter round gap (IRG) of at least one slot time of bus silence. A round consists of one or more frames. A frame is a sequence of bytes transmitted by a single node. Bytes are separated by an inter byte gap (IBG). Each round is independent of any other round. Every round starts with a fireworks frame from the master that identifies the round. The arrival of the fireworks frame from the master is a synchronization event that starts a common time-base for this round in each node. The fireworks frame contains the round name (encoded in the fireworks byte) and can carry additional data. According to the specification of the selected round, the fireworks frame is followed by data frames of specified lengths from the specified nodes. Thus, the structure and duration of each round is static and specified a priori, i.e. it is common knowledge to all nodes of a cluster.

2.2.2 Data Transmission

A standard UART format with byte-oriented data transmission is used: one start bit, 8 data bits, one parity bit, and one stop bit. The parity of all data bytes is even, except for fireworks bytes, where the parity is odd. Each UART frame is followed by 2 bits of bus silence called the inter byte gap (IBG).

Figure 2.1: TTP/A Slot
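To make the framing rule concrete, the sketch below computes the parity bit for a byte to be transmitted: even parity for data bytes, odd parity for fireworks bytes. It is a minimal illustration of the rule stated above, not code taken from the reference implementation; the function names are invented for this example.

#include <stdint.h>
#include <stdbool.h>

/* Number of one-bits in a byte. */
static uint8_t ones(uint8_t b)
{
    uint8_t n = 0;
    while (b) {
        n += b & 1u;
        b >>= 1;
    }
    return n;
}

/* Parity bit of a TTP/A UART frame: data bytes use even parity,
   fireworks bytes use odd parity. */
static bool parity_bit(uint8_t byte, bool is_fireworks)
{
    bool even = (ones(byte) % 2u) == 0u;  /* byte already has an even number of ones? */
    return is_fireworks ? even : !even;
}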

2.2.3 Types of Rounds

There are four types of rounds specified for the TTP/A protocol.

Synchronization Pattern The synchronization pattern serves two purposes: on the one hand it is a regular fireworks byte, and on the other hand it is used for start-up synchronization. In addition, slaves using a software UART can easily measure the duration of one bit cell from it.

Multipartner Rounds A multipartner round consists of a configuration-dependent number of slots and an assigned sender node for each slot. The configuration of a round is defined in the RODL (ROund Descriptor List). This list defines which node transmits in a certain slot, which node(s) receive(s) the data in this slot, as well as the semantics of the slot. The RODLs must be configured in the slave nodes prior to the execution of the corresponding multipartner round.
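The sketch below shows one possible in-memory layout of a RODL entry as a slave node might store it. The field names and types are assumptions made for illustration; the actual encoding is defined in the TTP/A specification [Kop01].

#include <stdint.h>

/* Hypothetical descriptor for one slot of a multipartner round.
   A RODL is simply an array of such entries, indexed by slot number. */
typedef struct {
    uint8_t slot;        /* slot number within the round                  */
    uint8_t direction;   /* this node transmits, receives, or stays idle  */
    uint8_t ifs_file;    /* IFS file acting as source/sink of the data    */
    uint8_t ifs_record;  /* record within that file                       */
    uint8_t length;      /* number of bytes communicated in this slot     */
} rodl_entry_t;

#define MAX_RODLS 6      /* a node stores at least one and at most six RODLs */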


Master/Slave Rounds A master/slave round establishes a connection between the master and a slave for reading or writing monitoring or configuration data, e.g. the RODL information. A master/slave round consists of two parts. In the master/slave request round the action and the data address are encoded into three parameter bytes and sent to the slave. In the master/slave data round the addressed data bytes are either transmitted from the master to the slave or from the slave to the master, as specified in the master/slave request round. To distinguish between request and data round, two different fireworks bytes exist, which are sent by the master as the first byte of each round. The last byte of each round contains a checksum that protects the communication from bus failures.

Broadcast Rounds A broadcast round is initiated by the master to send commands to all nodes of a cluster.

At startup the master uses master/slave rounds to configure the connected nodes. The multipartner round is intended to establish periodic, predictable and efficient real-time communication.

2.2.4 The Interface File System

Every TTP/A node contains its own local interface file system (IFS) that acts as the source and sink of the communicated data. The cluster's IFS consists of the local interface file systems of all nodes in the considered cluster. The IFS also serves as an interface to the application. It is intended that all relevant data of a node, like round definitions, application-specific parameters, and I/O properties, are mapped into the IFS. The IFS was introduced for two reasons:

• To provide a consistent view of the transducer properties.

• To decouple subsystems from the point of view of temporal control.

All nodes contain several files that can be accessed via the TTP/A protocol in a unified manner; the minimal setup for a TTP/A node is:

Round Descriptor List (RODL) File Each node contains at least one and at most six RODL files that contain the TDMA schedules for the TTP/A multipartner rounds.

TTP/A Configuration File This file contains at least an 8-bit alias which identifies a node during a master/slave round.

Documentation File This file contains the node's serial and series number and optionally the node's data sheet or a uniform resource locator (URL) where the data sheet can be obtained.
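As a rough illustration of this minimal setup, a node's IFS could be modelled as a small set of files as sketched below. This is a simplification made up for this example; it does not reproduce the exact file and record layout of the TTP/A specification.

#include <stdint.h>

/* Hypothetical, heavily simplified model of the minimal IFS of a TTP/A node. */
#define RODL_FILES  6    /* at least one, at most six RODL files          */
#define RODL_BYTES 32    /* assumed size of one encoded TDMA schedule     */

typedef struct {
    uint8_t rodl[RODL_FILES][RODL_BYTES];  /* TDMA schedules for multipartner rounds  */
    uint8_t alias;                         /* 8-bit alias used in master/slave rounds */
    char    serial[12];                    /* documentation file: serial/series number */
    char    datasheet_url[40];             /* documentation file: URL (optional)       */
} ifs_minimal_t;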

2.3 Smart Transducers

A smart transducer is the combination of an analog or digital sensor or actuator element and a local microcontroller that contains the interface circuitry, a processor, memory and a network controller in a single unit [KEM00]. Frank [Fra00] defines a smart transducer or sensor as a device with built-in intelligence, whether apparent to the user or not. The smart transducer transforms the raw sensor signal to a standardized digital representation, checks and calibrates the signal, and transmits this digital signal via a secure communication protocol to its users. More and more sensor elements are themselves microelectromechanical systems (MEMS) that can be integrated on the same silicon die as the associated microcontroller.

2.3.1 Benefits

The smart transducer technology offers a number of advantages from the points of view of technology, cost and complexity management:

• Electrically weak non-linear sensor signals originating from a MEMS sensor can be conditioned, transformed into digital form, and calibrated on a single silicon die without any noise pickup from long external signal transmission lines [DW98].

• It is possible to locally monitor the operation of the sensing element and thus simplify the diagnostics at the system level. In some cases it is possible to build smart transducers that have a single simple external failure mode, fail-silent, i.e. the transducer operates correctly or does not operate at all.

• The interface of the smart transducer to its environment is a well-specified digital communication interface to a sensor bus, offering "plug-and-play" capability if the sensor contains a reference to its documentation on the internet.

• The internal complexity of the smart transducer hardware and software and the internal transducer failure modes can be hidden from the user by well-designed, fully specified smart transducer interfaces that provide just those services that the user is interested in. Thus a proper interface design enabled by the smart transducer technology can contribute to a reduction of the complexity at the system level.

2.3.2 Requirements

A smart transducer needs a much larger name space than a simple analog sensor. In addition to the actual measured values, the parameters for range selection, alarm limits, signal conditioning, and calibration must be set by the user. Information about sensor performance and diagnostic information must be stored in the sensor and accessed during maintenance. Furthermore it must be possible to configure a "generic" smart transducer into a new application context. Many instances of the same sensor type, e.g. a temperature sensor, may be used in differing roles within an application. In the emerging market of massively distributed embedded systems, these configurations should be performed on-line during system operation.

2.3.3 Types of Interfaces

From the point of view of complexity management and composability, it is useful to distinguish between three different types of interfaces of a smart sensor: the real-time service (RS) interface, the diagnostic and maintenance (DM) interface, and the configuration planning (CP) interface.

The Real-Time Service (RS) Interface The purpose of the RS interface is the timely exchange of "observations" of real-time entities between the engaged subsystems. A real-time entity is a state variable of interest that has a name and a value. The user of the observations at the RS interface must only know about the meaning of these observations, but does not need any knowledge about the internal structure or operation of the smart sensor. If an observation is produced by a set of replicated sensors, the user is only interested in the availability of a timely observation, but not in which one of the set of replicated sensors produced this observation. The unit of addressing is thus the name of the observation, not the node that produced this observation.

The Diagnostic and Maintenance (DM) Interface The DM interface opens a communication channel to the internals of a smart transducer for the purpose of diagnosis and maintenance. It is used for setting sensor parameters and for retrieving information about the internals of the transducer. The maintenance engineer that accesses the transducer internals via the DM interface must have detailed knowledge about the internal structure and behavior of the transducer. Ideally, the DM interface should be independent of the RS interface, since these two interfaces are directed towards two different user groups. There is a need to support on-line maintenance while a system is operational. To achieve this objective, the sporadic maintenance traffic must coexist with the time-critical real-time traffic without disturbing the latter. The traffic pattern across the DM interface is normally sporadic and not time-critical, although precise knowledge about the point in time when a particular value was observed is important.

The Configuration Planning (CP) Interface The CP interface is used to connect a component to other components of a system. It is used during the integration or reconfiguration phase to generate the "glue" between a generic sensor and the embedded application it is serving. The use of the CP interface does not require detailed knowledge about the internal operation of a sensor, but requires knowledge about the structure of the middleware. The CP interface is point-to-point and not time-critical.

2.4 Sensor Fusion

Data Fusion is the process of combining multiple data in order to produce information of tactical value to the user. Data can come from one or many sources. Sources may be similar, such as radars, or dissimilar, such as electro-optic, acoustic, or passive electronic emissions measurement. A key issue is the ability to deal with conflicting data, producing interim results that the algorithm can revise as more data becomes available.

2.4.1 Benefits

The purpose of external sensors is to provide a system with useful information concerning some features of interest in the system's environment. The potential advantages in fusing information from multiple sensors are that the information can be obtained more accurately, concerning features that are impossible to perceive with individual sensors, in less time, and at a lower cost [LK91].

Sensor fusion also reduces a system's complexity. In a traditionally designed system the sensor measurements are fed into the application, which has to cope with a lot of imprecise, ambiguous, and incomplete data streams. In a system where sensor data is preprocessed by fusion methods, the input to the application is a much more complete data structure about the environment, which reduces the complexity of the application [Elm00].

Further advantages are [BRG96, Gro98]:

Robustness and Reliability The system is able to provide useful information even if part of the system fails.

Extended Spatial and Temporal Coverage The sensors are looking in different directions at different times.

Increased Confidence A measurement of one sensor is confirmed by measurements of other sensors covering the same domain.

Reduced Ambiguity Joint information reduces the set of ambiguous interpretations of measured values.

Robustness against Interference By measuring with different types of sensors (e.g. optical sensors and ultrasonic sensors), the system becomes less vulnerable to interference.

Improved Resolution When more than one independent measurement is made, the resolution of the fused value is better than that of a single sensor's measurement.

2.4.2 Sensor Selection

Sensor selection is one of the integration functions that can enable a multisensor system to select the most appropriate configuration of sensors (or sensing strategy) from among the sensors available to the system. Two different approaches to the selection of the type, number, and configuration of the sensors to be used in the system can be distinguished: preselection during design or initialization, and real-time selection in response to changing environmental or system conditions.

Preselection: Beni et al. [BHHJ83] have derived a general relationship between the number and operating speed of available sensing elements as a function of their response and processing times. This relationship can be used to determine the optimal arrangement of the sensing elements in a multisensor system. In addition to the actual geometric arrangement of the sensing elements, consideration of the choice between adding sensing elements (static sensing) and moving the elements (dynamic sensing) is used in determining the optimal arrangement.

Real-Time Selection: Hutchinson et al. [HCK86] have proposed an approach to planning sensing strategies for object recognition in a robotic workcell. One sensor is used to form an initial set of object hypotheses, and subsequent sensors are then chosen so as to maximally disambiguate the remaining object hypotheses.

2.4.3 Fusion Methods

Different methods have been proposed for general multisensor fusion. Most methods of multisensor fusion make explicit assumptions concerning the nature of the sensory information. The most common assumptions include the use of a measurement model for each sensor that includes a statistically independent additive Gaussian error or noise term (i.e. location data) and an assumption of statistical independence between the error terms of the different sensors. Many of the differences between the fusion methods described below center on their particular techniques (e.g. calibration, thresholding) for transforming raw sensory data into a form for which the above assumptions become reasonable and a mathematically tractable fusion method can result.


Weighted Average One of the simplest and most intuitive general methods of fusion is to take a weighted average of the redundant information provided by a group of sensors and use this as the fused value.

Kalman Filter The Kalman filter [Kal60] is used in a multisensor system when it is necessary to fuse dynamic low-level redundant data in real time (see [May82] for a general introduction). The filter uses the statistical characteristics of the measurement model to recursively determine estimates for the fused data that are optimal in a statistical sense. The recursive nature of the filter makes it appropriate for use in systems without large data storage capabilities.

Bayesian Estimation using Consensus Sensors Luo and Lin [LLS88, LL88] have developed a method for the fusion of redundant information from multiple sensors. The central idea behind the method is first to eliminate from consideration the sensor information that is likely to be in error and then to use the information from the remaining "consensus sensors" to calculate a fused value. The information from each sensor is represented as a probability density function. The optimal fusion of the information is determined by finding the Bayesian estimator that maximizes the likelihood function of the consensus sensors.

Multi-Bayesian Durrant-Whyte [DW87, DW88] has developed a model of a multisensor system that represents the task environment as a collection of uncertain geometric objects. Each sensor in the system is described by its ability to extract useful static descriptions of these objects. The sensors in the system are considered as a team of decision-makers. Together, the sensors must determine a team-consensus view of the environment.

Fuzzy Logic Huntsberger and Jayaramamurthy [HJ87] have used fuzzy logic to fuse information for scene analysis and object recognition. Fuzzy logic [Zad65], a type of multiple-valued logic, allows the uncertainty in multisensor fusion to be directly represented in the inference process by allowing each proposition, as well as the actual implication operator, to be assigned a real number from 0.0 to 1.0 to indicate a degree of truth.
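For the weighted average, a common choice is to weight each reading by the inverse of its variance, so that less noisy sensors contribute more to the fused value. The short sketch below illustrates this idea; the variances are assumed to come from a sensor model and are not taken from this thesis.

/* Inverse-variance weighted average of n redundant sensor readings. */
double fuse_weighted_average(const double *value, const double *variance, int n)
{
    double weight_sum = 0.0;
    double fused = 0.0;
    for (int i = 0; i < n; i++) {
        double w = 1.0 / variance[i];  /* reliable sensors get a large weight */
        fused += w * value[i];
        weight_sum += w;
    }
    return fused / weight_sum;
}

/* Example: three distance readings in cm with different noise levels.
   double d[3]   = {42.0, 40.5, 44.0};
   double var[3] = { 1.0,  4.0,  9.0};
   double result = fuse_weighted_average(d, var, 3);   the result stays closest
   to the first (most reliable) reading. */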

2.4.4 Applications

An overview of the field of applications for sensor fusion is given in [LK91].

Mobile Robots The mobility of robots and other vehicles is required in a variety of applications. In simple well-structured environments, automatic control technology is sufficient to coordinate the use of the required sensor systems (e.g. automatic guided vehicles, missiles, ...) [Har86]. When a vehicle must operate in an uncertain or unknown dynamic environment, usually close to real time, it becomes necessary to consider fusing the data from a variety of different sensors so that an adequate amount of information from the environment can be quickly perceived. Because of these factors, mobile robot research has proved to be a major stimulus to the development of concrete approaches to multisensor fusion.

Industrial The addition of sensory capabilities to the control of industrial robots can increase their flexibility and allow for their use in the production of products with a low volume or short design life [GEKW86]. In many industrial applications, the use of multiple sensors is required to provide the robot with sufficient sensory capabilities.

Military As the complexity, speed, and scope of warfare have increased, the military has increasingly turned to automated systems to support many activities traditionally performed manually [FN80]. The use of large numbers of diverse sensors as part of these systems has resulted in the issues of multisensor fusion assuming critical importance.

Space NASA's permanently manned space station will be the United States' major space program in the coming years [KdFG87]. Previous NASA programs, including the space shuttle, have used a high degree of participation by both the crew and ground-based personnel to perform the sensing and perception functions required for many tasks. Both to increase productivity and to allow tasks beyond the capability of the crew to be performed, the space station will make increasing use of autonomous systems for the servicing, maintenance, and repair of satellites and the assembly of large structures for use as production facilities and commercial laboratories. Probably the most important factor promoting the use of autonomous systems for these applications is the cost of having a human in space. In addition to these economic factors, aspects of the space environment can make the use of multiple sensors an especially important part of these systems. The sensing of objects in space using just optical sensors is difficult because the lack of atmosphere invalidates some of the common assumptions concerning surface reflectance used in many visual recognition methods; also, images in space frequently have deep shadows, missing edges, and intense specular reflections.

Target Tracking A variety of different filtering techniques, together with radar, optical, and sonar sensors, have been used for tracking targets (e.g. missiles, planes, submarines, ...) in the air, in space, and under water. Researchers have been developing methods of multitarget tracking that fuse the information (or measurements) provided by a number of identical sensors. The key problem in multitarget tracking in general, and in multisensor multitarget tracking in particular, is "data association", the association of sensor information to a single target [CCBS86].

Remote Sensing In aerial photo mapping over land, known ground points can be used to establish the orientation of photographs; in aerial mapping over water, the orientation must be determined by accurately knowing the position and attitude of the camera at the time the photograph is taken, because known ground points are not generally available. Gesing and Reid [GR83] describe a system that fuses information from multiple navigation sensors to estimate an aircraft's position and attitude accurately enough for use in the mapping of shallow coastal water.

2.5 Obstacle Avoidance Methods

Obstacle avoidance is one of the key issues for successful applications of mobile robot systems. All mobile robots feature some kind of collision avoidance, ranging from primitive algorithms that detect an obstacle and stop the robot short of it in order to avoid a collision, to sophisticated algorithms that enable the robot to detour around obstacles. The latter algorithms are much more complex, since they involve not only the detection of an obstacle, but also some kind of quantitative measurement of the obstacle's dimensions. Once these have been determined, the obstacle avoidance algorithm needs to steer the robot around the obstacle and proceed toward the original target. Usually, this procedure requires the robot to stop in front of the obstacle, take the measurements, and only then resume motion. Some relevant obstacle avoidance methods are summarized here.

2.5.1 Edge-Detection Methods

In this method [KB89], an algorithm tries to determine the position of the vertical edges of the obstacle and then steer the robot around either one of the ”visible” edges. The line connecting two visible edges is considered to represent one of the boundaries of the obstacle. In another edge-detection approach [Elf87], the robot remains stationary while taking a panoramic scan of its environment. After the application of certain line-fitting algorithms, an edge-based global path planner is instituted to plan the robot’s subsequent path. A common drawback of both edge-detection approaches is their sensitivity to sensor accuracy.

2.5.2 The Certainty Grid

A method for the probabilistic representation of obstacles in a grid-type world model has been developed at Carnegie Mellon University [Mor88]. This world model is especially suited to the accommodation of inaccurate sensor data such as range measurements from ultrasonic sensors.

In the certainty grid, the robot's work area is represented by a two-dimensional array of square elements, denoted as cells. Each cell contains a certainty value that indicates the measure of confidence that an obstacle exists within the cell area. Certainty values are updated by a probability function that takes into account the characteristics of a given sensor.
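A minimal sketch of such a grid is given below. The update rule, simply adding a sensor-specific confidence increment and clamping the result, is a stand-in for the probability function mentioned above, whose exact form depends on the sensor model.

#include <stdint.h>

#define GRID_W 64          /* assumed size of the work area in cells    */
#define GRID_H 64
#define CV_MAX 15          /* assumed upper bound of a certainty value  */

static uint8_t certainty[GRID_H][GRID_W];   /* one certainty value per cell */

/* Increase the confidence that an obstacle occupies cell (x, y). */
void grid_update(int x, int y, uint8_t increment)
{
    if (x < 0 || x >= GRID_W || y < 0 || y >= GRID_H)
        return;                              /* ignore readings outside the map */
    uint16_t cv = (uint16_t)certainty[y][x] + increment;
    certainty[y][x] = (cv > CV_MAX) ? CV_MAX : (uint8_t)cv;
}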

2.5.3 The Potential Field Method

The idea of imaginary forces acting on a robot has been suggested by Khatib [Kha85]. In this method, obstacles exert repulsive forces, while the target applies an attractive force to the robot. A resultant force vector R, comprising the sum of a target-directed attractive force and repulsive forces from obstacles, is calculated for a given robot position. With R as the accelerating force acting on the robot, the robot’s new position for a given time interval is calculated, and the algorithm is repeated.
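The fragment below sketches the computation of the resultant force vector R for one robot position. The gains and the simple inverse-square repulsion are assumptions chosen for illustration, not the exact formulation used by Khatib.

#include <math.h>

typedef struct { double x, y; } vec2;

/* R = attractive force towards the target + repulsive forces from all obstacles. */
vec2 resultant_force(vec2 robot, vec2 target, const vec2 *obst, int n_obst,
                     double k_att, double k_rep)
{
    vec2 r = { k_att * (target.x - robot.x), k_att * (target.y - robot.y) };
    for (int i = 0; i < n_obst; i++) {
        double dx = robot.x - obst[i].x;
        double dy = robot.y - obst[i].y;
        double d  = sqrt(dx * dx + dy * dy);
        if (d < 1e-6)
            continue;                        /* avoid division by zero */
        double f = k_rep / (d * d);          /* repulsion decays with distance */
        r.x += f * dx / d;                   /* push the robot away from the obstacle */
        r.y += f * dy / d;
    }
    return r;
}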

2.5.4 The Virtual Force Field (VFF) Method

The VFF method proposed by Borenstein and Koren [BK91] uses a two-dimensional Cartesian histogram grid for obstacle representation. Each cell in the histogram grid holds a certainty value that represents the confidence of the algorithm in the existence of an obstacle at that location. The histogram grid differs from the certainty grid in the way it is built and updated. Only one cell in the histogram grid is incremented for each range reading, creating a "probability" distribution with only small computational overhead. While this approach may seem to be an oversimplification, a probabilistic distribution is actually obtained by continuously and rapidly sampling each sensor while the vehicle is moving. Thus, the same cell and its neighboring cells are repeatedly incremented. This results in a histogramic probability distribution, in which high certainty values are obtained in cells close to the actual location of the obstacle.

Next, the potential field idea is applied to the histogram grid, so that the probabilistic sensor information can be used efficiently to control the vehicle. Each cell exerts a virtual repulsive force on the robot. The magnitude of this force is proportional to the certainty value of the cell and inversely proportional to the distance between the cell and the center of the vehicle.

2.5.5 The Vector Field Histogram (VFH) Method

The VFH method was developed to remedy the shortcomings of the VFF [BK91]. It employs a two-stage data reduction technique. Thus three levels of data representation exist:

• The highest level holds the detailed description of the robot's environment. In this level, the two-dimensional Cartesian histogram grid is continuously updated in real time with range data sampled by the on-board range sensors.

• At the intermediate level, a one-dimensional polar histogram is constructed around the robot's momentary location.

• The lowest level of data representation is the output of the VFH algorithm: the reference values for the drive and steer controllers of the vehicle.

Creation of the Polar Histogram The contents of each cell in the histogram grid are now treated as an obstacle vector. The polar histogram comprises n angular sectors of width α, where each cell of the histogram grid is related to a certain sector. For each sector k the polar obstacle density h_k is calculated by

    h_k = \sum_{i,j} m_{i,j}

where m_{i,j} is the magnitude of the obstacle vector at cell (i, j).

Steering Control The resulting polar histogram consists of "peaks", i.e. sectors with high polar obstacle densities (POD), and "valleys", i.e. sectors with low POD. Any valley comprised of sectors with PODs below a certain threshold is called a candidate valley. Usually there are two or more candidate valleys and the VFH algorithm selects the one that most closely matches the direction to the target. Once a valley is selected, it is further necessary to choose a suitable sector within that valley. This is done by determining the near border (the sector that is nearest to the target and below the threshold) and the far border (the other end of the valley). The desired steering direction is then defined as the sector in the middle between the near and the far border.
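The sketch below shows how the polar obstacle densities h_k can be computed from the histogram grid. For brevity, the steering rule shown is the simplified variant used later by the demonstrator, namely driving into the sector with the lowest density, rather than the full candidate-valley search described above. Grid size and the number of sectors are assumptions for illustration.

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define GRID_W  64
#define GRID_H  64
#define SECTORS 72    /* n sectors of width alpha = 5 degrees (assumed) */

/* Build the polar histogram h[k] around the robot cell (rx, ry):
   h_k is the sum of the obstacle vector magnitudes of all cells in sector k. */
void build_polar_histogram(const unsigned char grid[GRID_H][GRID_W],
                           int rx, int ry, double h[SECTORS])
{
    for (int k = 0; k < SECTORS; k++)
        h[k] = 0.0;

    for (int y = 0; y < GRID_H; y++) {
        for (int x = 0; x < GRID_W; x++) {
            if (grid[y][x] == 0 || (x == rx && y == ry))
                continue;
            double beta = atan2((double)(y - ry), (double)(x - rx)); /* direction of the cell */
            if (beta < 0.0)
                beta += 2.0 * M_PI;
            int k = (int)(beta / (2.0 * M_PI) * SECTORS) % SECTORS;
            h[k] += (double)grid[y][x];      /* magnitude m_ij of the obstacle vector */
        }
    }
}

/* Simplified steering rule: choose the sector with the lowest polar obstacle density. */
int lowest_density_sector(const double h[SECTORS])
{
    int best = 0;
    for (int k = 1; k < SECTORS; k++)
        if (h[k] < h[best])
            best = k;
    return best;
}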


Chapter 3

The Demonstrator

3.1 Introduction

To demonstrate the real-time capabilities of the TTP/A protocol, a demonstrator was designed during the course of this thesis. The demonstrator is a mobile robot ("smart car") whose main task is obstacle avoidance. For this purpose it is equipped with a pair of ultrasonic sensors, three infrared sensors, an electric drive, and a steering unit. The ultrasonic sensors detect obstacles at a range of up to 5 meters, and the infrared sensors are designed for measuring distances of 10 cm to 80 cm, which enables them to scan the vehicle's close proximity. Additionally, an ambient light sensor and four light emitter nodes are part of the cluster. Distance sensors, light sensor, servo motors for sensor pivoting and steering, driving unit, and light emitters are all separate TTP/A nodes. Each node is implemented on a low-cost microcontroller and equipped with a smart transducer interface [KHE00]. The network also contains a master node and a data processing node responsible for the mobile robot's navigation.

The ultrasonic sensors look straight ahead, and their returned data indicates the amount of empty space directly ahead of the smart car. The infrared sensors are swayed around by the servos to cover the area directly in front of the vehicle. The sensors generate values that correspond to the distance of the object they are aimed at. The data stream provided by the distance sensors is taken over by the navigation node, which fuses the perceptions from the distance sensors with a model of the robot's environment. In this model the positions of obstacles are stored and assigned a probability value that converges towards an uncertain value as the mobile robot moves through its environment. Figure 3.1 depicts the scanning range of the infrared sensors and the fusing of sensor data into a sensor grid [BK91]. Based on this grid the control application makes decisions about the direction and speed of further movement.


Figure 3.1: The Smart Car.

The light emitter nodes each control three LEDs, based on the data received from the ambient light sensor and the navigation node. Headlights, brake light and direction indicators create the impression of a real car. Sixteen slave nodes and one master node form the TTP/A cluster on the mobile robot. The navigation program and the sensor fusion program are hosted on a single node, so that the real-world image does not have to be transmitted over the network. Thus the necessary bus speed was kept low at approximately 20 kbit/s, making it possible to use a low-cost single-wire bus.

3.2 Architectural Model

The demonstrator's architecture implements a three-level design approach [EP01a, EP01b]. For the TTP/A protocol the idea of a two-level design approach has already been proposed in [PAG+00]. In this thesis the system is partitioned into three levels. The first level is the node level and consists of nodes with transducers equipped with a smart transducer interface. The second level is the cluster level that integrates the transducer nodes into a network, performs sensor fusion and presents the results to the application level. At the third level, the application level, decisions about control values are made based on the data provided by the cluster level.

The motivation for the implementation of these levels is a reduction of system complexity at the cluster and application level, and the possibility of software reuse. Furthermore it is justified by the different functions each level has to fulfill and the different data types that are used in each level. Well-defined interfaces negotiate between the levels. A smart transducer interface handles the data flow from node to cluster level. The cluster level provides its results to the application level in the form of an image of the environment. The standardization of the smart transducer interface leads to benefits in the areas of configuration, diagnosis and maintenance.

Figure 3.2: Control Loop in the Three-Level Model

Figure 3.2 shows the control loop modeled with this three-level design approach for the application at hand. The mobile robot’s environment is scanned by the sensors and the data is exchanged via the interface file system (IFS). Sensor fusion is performed at the cluster level and the resulting image of the environment is provided to the application. The navigation algorithm determines the optimal driving direction and this information is presented to the actuators via the IFS.
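The control loop of Figure 3.2 can be summarized in a few lines of C. All types and function names below are placeholders for the fusion and navigation routines described in the rest of this chapter; they are not the identifiers used in the actual implementation.

/* Hypothetical data types, standing in for the real ones. */
typedef struct { int distance_cm[5]; }             sensor_readings_t;
typedef struct { double sector_density[72]; }      environment_image_t;
typedef struct { int speed; int steering_sector; } drive_command_t;

/* Placeholder routines for the three levels. */
void            ifs_read_sensor_values(sensor_readings_t *r);        /* node level -> IFS */
void            fuse_sensor_data(const sensor_readings_t *r,
                                 environment_image_t *img);          /* cluster level     */
drive_command_t navigate(const environment_image_t *img);            /* application level */
void            ifs_write_drive_command(const drive_command_t *cmd); /* IFS -> actuators  */

/* One iteration of the control loop shown in Figure 3.2. */
void control_loop_step(void)
{
    sensor_readings_t   readings;
    environment_image_t image;

    ifs_read_sensor_values(&readings);
    fuse_sensor_data(&readings, &image);
    drive_command_t cmd = navigate(&image);
    ifs_write_drive_command(&cmd);
}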

3.2.1 Node Level

Each sensor can be seen as a window showing a small part of the environment. It is the task of the node level to provide each window’s view. The node level interacts physically with the environment by performing measurements with sensors and actions with actuators. To support maximum modularity, the nodes are built as smart transducers. The smart sensor transforms the raw sensor signal to a standardized digital representation, checks and calibrates the signal, and provides this representation by means specified by the TTP/A protocol to the next level.

3.2.2 Cluster Level

The cluster level contains the hardware and software that act as glue between the transducers and the application. It integrates the measurements from the sensors into a unified view of the environment, the image of the environment. This image is made tolerant to incomplete or missing measurements by implementing sensor fusion algorithms. The properties of this image, e.g. content, resolution, update timing, and data structure, depend on the application at the next level. A change in the node configuration results in an adaptation of the cluster level software. The cluster level contains software elements like communication schedules and fusion algorithms, and hardware like bus lines and system nodes controlling the communication and performing the fusion.

3.2.3 Application Level

When implementing the application, the programmer’s interest focuses on the environment image, while the responsibility for its temporal accuracy and correctness lies with the implementation of the sensor fusion in the cluster level. The application level contains the intelligence to make decisions based on the environment image. Because the application does not directly rely on the sensor measurements, the same application program can be used with different sensor configurations.

3.3 The Transducers

A smart transducer is the combination of a sensor or actuator element and a local microcontroller that contains the interface circuitry, a processing element, memory and a network controller in a single unit [KEM00]. This section describes which microcontrollers are used, and which sensor and actuator components are combined with them to form the smart transducers.

3.3.1 The Nodes

The nodes were designed and produced especially for this project. The main focus while planning the nodes was on

• the use of a low-cost microcontroller,

• a transmission speed of 19200 baud,

• a computation speed of approx. 8 MHz,

• the node's size.


For the design of the nodes the EAGLE Layout Editor was used and proved to be very useful. For the production of the nodes the finished board layouts were sent to Beta Layout, a company specialized in manufacturing PCBs.

Master Node The master node consists of an Atmel AT90S8515 microcontroller with 32 Kbytes of external RAM and is clocked by a 7.3728 MHz quartz, which allows for 0% UART error at a baud rate of 19200.

The Microcontroller The Atmel AT90S8515 microcontroller is a low-power CMOS 8-bit microcontroller based on the AVR RISC architecture. It features 8 Kbytes of in-system programmable flash, 512 bytes of SRAM and 512 bytes of in-system programmable EEPROM. It also has one 8-bit and one 16-bit timer/counter with separate prescaler and a programmable serial UART.

The Bus Driver A Motorola MC33290 ISO K Line Serial Link Interface is used as interface between the microcontroller and the bus.

The External RAM The use of external RAM is supported directly by the microcontroller. All that has to be done is to connect an address latch and the RAM block as shown in Figure 3.3.

Figure 3.3: External SRAM connected to the AVR.
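The choice of the 7.3728 MHz quartz can be checked against the AVR's UART divider formula, baud = f_clk / (16 (UBRR + 1)): with 19200 baud the divider comes out as exactly 23, so the generated baud rate has no rounding error. A minimal sketch of the calculation:

#include <stdint.h>

#define F_CPU_HZ  7372800UL   /* 7.3728 MHz system clock */
#define BAUDRATE  19200UL

/* AVR UART divider: baud = F_CPU_HZ / (16 * (UBRR + 1)).
   7372800 / (16 * 19200) - 1 = 23, an exact integer, hence 0% UART error. */
static uint16_t uart_divider(void)
{
    return (uint16_t)(F_CPU_HZ / (16UL * BAUDRATE) - 1UL);   /* = 23 */
}

The resulting value is what would be written to the UBRR register during UART initialization.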

The Monitoring Interface For the implementation of the monitoring tool [Pet01], an additional gateway, implemented on an Atmel AT90S2313 microcontroller, and a MAX232 chip are placed on the node. Thus the master's file system can be monitored by connecting the node via an RS232 serial link to a PC running the appropriate application.

Slave Nodes A slave node consists of an Atmel AT90S4433 microcontroller clocked by a 7.3728 MHz quartz, which allows for 0% UART error at a baud rate of 19200. Two I/O ports with eight pins each are accessible, as well as two interrupt pins, a ground pin, and a supply pin.

The Microcontroller The Atmel AT90S4433 microcontroller is a low-power CMOS 8-bit microcontroller based on the AVR RISC architecture. It features 4 Kbytes of in-system programmable flash, 128 bytes of SRAM and 128 bytes of in-system programmable EEPROM. It also has one 8-bit and one 16-bit timer/counter with separate prescaler and a programmable serial UART.

The Bus Driver A Motorola MC33290 ISO K Line Serial Link Interface is used as interface between the microcontroller and the bus.

3.3.2   The Transducer Hardware

The demonstrator is built completely of common off-the-shelf components. The vehicle itself is a radio-controlled (RC) off-road car, purchased at a local modeling store, where the plastic body was replaced by a wooden board to accommodate all necessary parts.

Infrared Sensor   The Sharp GP2D02 infrared distance measuring sensor is one of the least expensive distance sensors available. Its output indicates the distance of an object 10–80 cm in front of the sensor. It is designed to interface to small microcontrollers and is capable of taking measurements in varying light conditions and against a wide variety of surfaces.

Sensor Distance Measurement   The distance measuring technique employed by the GP2D02 is triangulation. In this process, light is emitted from the sensor, reflected off the target, and received again by the sensor. There are three points involved: the emitter, the reflection point, and the receiver. The points form a triangle, hence the name triangulation. The distance information is extracted by measuring the angle at which the light returns to the sensor. Because the distance between the emitter and receiver is known, the target distance can be calculated from the received angle.


If the angle is large, then the target is close, because the triangle is wide. If the angle is small, then the target is far away, because the triangle is long and slim (see Figure ...). The output of the GP2D02 is proportional to the angle. The actual distance in feet or meters must be calculated by the host microcontroller.

Data Quality   Testing the sensor showed that the main limitations of the GP2D02 are manufacturing variation, a low code vs. distance slope for larger distances, high free space output code values, and output fluctuation. The manufacturing variation among three samples was as high as 17 code points. Beyond 45 centimeters, the slope of the curve is so low that it is difficult to obtain a precise distance value from the output code (see Figure 3.4). The free space code ranged from 30 to 90 and fluctuated rapidly. The code given for certain targets (such as fabric and black cardboard) when the target was farther away than 50 cm was also fluctuating. The magnitude of the fluctuation was about ± 5 for a target, and ± 10 for free space.

Figure 3.4: Distance measuring output vs. Distance to reflective object.


Considering all of the sources of error, it seems that it would not be feasible to use the GP2D02 as an extremely precise distance measurement sensor. It is, however, an excellent choice for making relative distance measurements, and for measurements with an error tolerance of a centimeter or two.

Electrical Interface   The GP2D02 has two electrical requirements: a 5 volt power supply, and a suitable clock signal. The GP2D02 has a small 4-pin connector for power, ground, and signals. The GP2D02 datasheet states that the supply voltage must remain within the limits of 4.4 V to 7 V for proper operation. When the sensor is in the process of measuring or sending data, it draws an average current of about 18 mA. This current drain actually consists mainly of a series of 32 pulses at 270 mA. These pulses are on for 100 µs and off for 800 µs. Presumably, these are due to the internal infrared LED being pulsed. The current consumption is independent of external conditions (lighting, presence or absence of a target, etc.). While the sensor is idle, it draws only about 1A. The second interfacing requirement is the clock. The clock line (Vin) is intended to be driven by an open-collector (or open-drain) type output. To ensure that it is pulled down reliably, one should sink at least 100 µA from it.

Figure 3.5: Timing diagram for the GP2D02.

A timing diagram of the process of initiating a measurement cycle and reading out the data is shown in Figure 3.5. To place the sensor in idle mode, Vin must be placed high. If Vin is high for greater than 1.5ms, the sensor will reset and go into idle mode. As shown in the timing diagram, lowering Vin initiates the measurement cycle. After the delay needed by the sensor to take the reading, the sensor lowers Vout , signaling that it is ready to clock out the data. Vin can then be toggled high and low. The output data is clocked out MSB first, and is valid shortly after the falling clock edge. When the cycle is finished, Vin should be raised to high again for at least the minimum inter-measurement delay.
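A minimal sketch of this readout procedure is given below. The pin-access and delay helpers (set_vin_low/high, vout_is_low, delay_us, delay_ms) are hypothetical board-specific routines and the delay values are only illustrative; this is not the thesis' actual node software.

    /* Sketch of one GP2D02 measurement cycle as described above. */
    #include <stdint.h>

    extern void set_vin_low(void);
    extern void set_vin_high(void);
    extern int  vout_is_low(void);
    extern void delay_us(unsigned int us);
    extern void delay_ms(unsigned int ms);

    uint8_t gp2d02_read(void)
    {
        uint8_t value = 0;

        set_vin_low();                     /* lowering Vin starts a measurement     */
        while (!vout_is_low())             /* sensor lowers Vout when data is ready */
            ;

        for (int i = 0; i < 8; i++) {      /* clock out 8 bits, MSB first           */
            set_vin_high();
            delay_us(50);
            set_vin_low();                 /* data is valid shortly after the       */
            delay_us(50);                  /* falling clock edge                    */
            value = (uint8_t)((value << 1) | (vout_is_low() ? 0 : 1));
        }

        set_vin_high();                    /* back to idle for at least the         */
        delay_ms(2);                       /* inter-measurement delay               */
        return value;
    }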


Physical Mounting Issues   The case of the sensor is a conductive plastic material. To ensure reliable, noise-free measurements, the case must be grounded. If the case is not grounded and there is no target, or the return signal is weak, the sensor sometimes "thinks" there is a target and that it is wandering around.

Connecting the Sensor   The sensor is connected to the microcontroller as shown in Figure 3.6.

Figure 3.6: Connection diagram for the GP2D02.

Ultrasonic Sensor   The basic principle of operation for sonar ranging is the same no matter what system is being used. The sensing is initiated by first creating a sonic ping at a specific frequency. In the case of the Polaroid module, the ping is roughly 16 high-to-low transitions between +200 V and -200 V. These transitions are fed to the transducer at around 50 kHz. For reference, the human ear can hear sounds in roughly the 20 Hz to 20 kHz range. As this chirp falls well out of the range of human hearing, the ping is not audible. Sometimes you can hear the transducer click as the chirp is sent.


The chirp moves radially away from the transducer through the air at approximately 343.2 m/s, the speed of sound. This speed is only slightly affected by humidity and virtually not affected at all by pressure and therefore is almost independent of altitude. Since the chirp is spreading out radially, the signal strength as the chirp moves farther from the transducer is reduced by 1/d². This means that the maximum measuring distance drops off rapidly at the extreme maximum of the sensor. When the chirp reaches an object, it is reflected in varying degrees dependent on the shape, orientation, and surface properties of the reflecting surface. The Polaroid ranging system is capable of detecting amazingly small obstacles such as a flower stem at several meters. This reflected chirp then travels back towards the transducer, again at the speed of sound. The transducer is especially sensitive to noises around 50 kHz like the chirp. As the reflected signal hits the transducer, a voltage is created which is fed to a stepped-gain amplifier. Since the signal decreases in strength with distance at an inverse squared proportion, the gain of the amplifier is increased exponentially (∼ d²). This helps give the best sensitivity across the range of the detector which is roughly 1 m to 12 m. Once the ranging module "sees" enough cycles of the reflected signal, it changes its ECHO output to reflect the received reflected signal or echo. All that is left to do is to measure the time from the initiation of the ping to the received echo. This time corresponds directly to the distance traveled by the ping.

Sensor Timing   The ultrasonic sensor is started up by applying a positive voltage at the sensor's VCC pin (see Figure 3.7). Once the sensor is activated, the sonic pings needed for distance measurements can be initiated by applying a positive voltage at the INIT pin. Once the sensor receives the reflected signal, the ECHO pin is set high by the sensor and this change is detected by the ultrasonic sensor node. The node then knows the time difference between sending and receiving the ping and can calculate the appropriate distance by using the following formula:

distance = ((time_echo − time_init) / 2) · 338

For example, with the times measured in seconds, an echo received 10 ms after the ping corresponds to a distance of (0.01 / 2) · 338 ≈ 1.7 m. The maximum distance to be measured is bounded by the length of the INIT signal being high.

Connecting the Sensor   The ultrasonic sensor is connected to the microcontroller as shown in Figure 3.8.

Servo   To move the infrared sensors through their sensor field some mechanism was needed.


Figure 3.7: Timing diagram for the ultrasonic sensor.

Stepper motors proved to be useless, since they needed an additional controller. So we chose to use standard RC (radio controlled) car servos. They are cheap, lightweight, hard wearing, come in many sizes, and have a standard interface.

The Servo Interface   Servos have a three-wire interface: ground, power, and control. The input to the control line is a pulse-code modulated signal from which all servo timings and positions are derived. All servos have their own limitations concerning the ability to perform a clockwise resp. counter-clockwise turn. Convention states that applying a 1.5-ms signal holds the servo in neutral, a 1-ms signal turns it counter-clockwise to its maximum angle, and a 2-ms signal results in a clockwise turn (see Figure 3.9). The signal must be repeated at least every 30–50 ms or the servo may start jittering and finally stop driving the output. The result would be the loss of active hold on the desired position.

Connecting the servo   The servo's control line is connected directly to the microcontroller's output port. The power and ground wires are connected to the main line.

Light Sensor   The TSL230 programmable light-to-frequency converter combines a configurable silicon photodiode and a current-to-frequency converter on a single monolithic CMOS integrated circuit. The output can be either a pulse train or a square wave (50 % duty cycle) with a frequency directly proportional to light intensity. The sensitivity of the device is selectable in three ranges, providing two decades of adjustment. The full-scale output frequency can be scaled by one of four preset values. All inputs and the output are TTL compatible, allowing direct two-way communication with a microcontroller for programming and output interface.


Figure 3.8: Connection diagram for the ultrasonic sensor.

Sensitivity Adjustment   Sensitivity is controlled by two logic inputs, S0 and S1. Sensitivity is adjusted using an electronic iris technique - effectively an aperture control - to change the response of the device to a given amount of light. The sensitivity can be set to one of three levels: 1x, 10x or 100x, providing two decades of adjustment. This allows the responsivity of the device to be optimized to a given light level while preserving the full-scale output-frequency range. Changing the sensitivity also changes the effective photodiode area by the same factor.

Output-Frequency Scaling   Output-frequency scaling is controlled by two logic inputs, S2 and S3. Scaling is accomplished on chip by internally connecting the pulse-train output of the converter to a series of frequency dividers. Divided outputs available are divide-by 2, 10, 100, and 1 (no division). Divided outputs are 50-percent-duty-cycle square waves while the direct output (divide-by 1) is a fixed-pulse-width pulse train. Because division of the output frequency is accomplished by counting pulses of the principal (divide-by 1) frequency, the final output period represents an average of n (where n is 2, 10 or 100) periods of the principal frequency. The output-scaling-counter registers are cleared upon the next pulse of the principal frequency after any transition of the S0, S1, S2, S3, or OE lines.


Figure 3.9: The servo control signal.

The output goes high upon the next subsequent pulse of the principal frequency, beginning a new valid period. This minimizes the time delay between a change on the input lines and the resulting new output period in the divided output modes. In contrast with the sensitivity adjustment, use of the divided outputs lowers both the full-scale frequency and the dark frequency by the selected scale factor. The frequency-scaling function allows the output range to be optimized for a variety of measurement techniques. The divide-by-1 or straight-through output can be used with a frequency counter, pulse accumulator, or high-speed timer (period measurement). The divided-down outputs may be used where only a slower frequency counter is available, such as a low-cost microcontroller, or where period measurement techniques are used. The divide-by-10 and divide-by-100 outputs provide lower frequency ranges for high-resolution period measurement.

Measuring the Frequency   The choice of interface and measurement technique depends on the desired resolution and data acquisition rate. For maximum data-acquisition rate, period-measurement techniques are used. Using the divide-by-2 output, data can be collected at a rate of twice the output frequency or one data point every microsecond for full-scale output.


Period measurement requires the use of a fast reference clock with available resolution directly related to the reference-clock rate. Output scaling can be used to increase the resolution for a given clock rate or to maximize resolution as the light input changes. Period measurement is used to measure rapidly varying light levels or to make a very fast measurement of a constant light source. Maximum resolution and accuracy may be obtained using frequency-measurement, pulse-accumulation, or integration techniques. Frequency measurements provide the added benefit of averaging out random or high-frequency variations (jitter) resulting from noise in the light signal. Resolution is limited mainly by available counter registers and allowable measurement time. Frequency measurement is well suited for slowly varying or constant light levels and for reading average light levels over short periods of time. Integration (the accumulation of pulses over a very long period of time) can be used to measure exposure, the amount of light present in an area over a given time period.

Steering   For the steering of the vehicle the exact same type of servo as for moving the infrared sensors is used.

Speed Control   The digital speed control unit operates in a very similar way to the servo. The input to the control line is a pulse-code modulated signal from which the desired speed of the vehicle is derived. Applying a 1.5-ms signal leaves the motor in neutral, a 1-ms signal accelerates the car to full forward speed and a 2-ms signal results in full reverse speed (see Figure 3.10). The forward speed can be regulated by applying an appropriate pulse length between 1 ms and 1.5 ms. Pulse lengths between 1.5 ms and 1.75 ms result in an active brake, i.e. the vehicle stops immediately resp. holds its current position. Pulse lengths between 1.75 ms and 2 ms regulate the reverse speed.
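As an illustration of this mapping, the small sketch below converts a signed speed command into a pulse width in microseconds. The command range of −100 to +100 and the exact scaling are assumptions made for the example; only the 1 ms / 1.5 ms / 1.75 ms / 2 ms corner values come from the description above.

    /* Sketch: mapping a speed command to the speed controller's pulse width.
     * speed = +100 -> 1.0 ms (full forward), 0 -> 1.5 ms (neutral),
     * speed = -100 -> 2.0 ms (full reverse, via the 1.75-2.0 ms band). */
    #include <stdint.h>

    uint16_t speed_to_pulse_us(int8_t speed)
    {
        if (speed > 0)                                     /* forward: 1.5 ms down to 1.0 ms */
            return (uint16_t)(1500 - 5 * speed);
        if (speed < 0)                                     /* reverse: 1.75 ms up to 2.0 ms  */
            return (uint16_t)(1750 + (5 * -(int16_t)speed) / 2);
        return 1500;                                       /* neutral                        */
    }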

3.3.3   The Software

Infrared Sensor The infrared sensors have their own internal representation for their distance to a measured object. Since every sensor has a unique representation the sensors must be calibrated prior to use. The measured data values are also conditioned by the infrared sensor node to a common representation of the sensor’s distance to the measured object. This common representation is a division of the sensor’s range into nine sectors:


Figure 3.10: Timing diagram for the speed control unit.

Sector 0–8: nine increasing distance bands (cm), with the last sector covering distances beyond 80 cm.

Stored Data   The following data is stored in the infrared node's memory:

File 9 Record 1 Byte 0:  Raw Sensor Data (bits 7–0)
File 9 Record 1 Byte 1:  Confidence (4 bit), remaining bits reserved

Raw Sensor Data   8bit   The output value of the sensor according to its specification.
Confidence        4bit   The confidence marker is needed because the sensor needs approximately 70 msec in between two measurements. As time progresses the measurement value becomes more and more uncertain. The confidence marker is a way to handle this situation.
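A possible way to maintain such a confidence marker is sketched below; the maximum value and the ageing period are illustrative assumptions, not the thesis' actual node code.

    /* Sketch: ageing the 4-bit confidence marker between measurements. */
    #include <stdint.h>

    static uint8_t confidence;             /* 0x0 = no confidence, 0xF = fresh value */

    void on_new_measurement(void)
    {
        confidence = 0x0F;                 /* a fresh reading is fully trusted */
    }

    void on_timer_tick(void)               /* called periodically, e.g. every 70 msec */
    {
        if (confidence > 0)
            confidence--;                  /* the older the value, the less certain it is */
    }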

Transmitted Data   The infrared sensor nodes send two data bytes on the bus: the first byte contains distance information and the second byte is the confidence marker of the measured value. Two modes of operation are possible. In each mode of operation the layout of the transmitted data is different:

1. The node returns the raw sensor data and the confidence of the measurement. It is not the infrared sensor's responsibility to calculate the appropriate distance of the measured object.

   Byte 0:  Raw Sensor Data (bits 7–0)
   Byte 1:  Confidence (4 bit), remaining bits reserved

2. The node returns preprocessed sensor data, i.e. the distance to the measured object is already calculated.

   Byte 0:  Distance (bits 7–0)
   Byte 1:  Confidence (4 bit), remaining bits reserved


Ultrasonic Sensor   The ultrasonic sensor has a range of up to 10 m but for this application a maximum distance of 5 m is sufficient. Thus an accuracy of approx. 2 cm could be reached using an 8bit value, but again for this application it is sufficient to divide the range of the sensor into sectors of 10 cm.

Stored Data   The following data is stored in the ultrasonic node's memory:

File 9 Record 1 Byte 0:  Distance (bits 7–0)
File 9 Record 1 Byte 1:  Confidence (4 bit), remaining bits reserved

Distance     8bit   The distance of a measured object as a multiple of 10 cm.
Confidence   4bit   The confidence marker.

Transmitted Data   Like the infrared sensor nodes the ultrasonic sensor nodes send two data bytes on the bus: the first byte represents the sector in which the measured object lies, and the second byte is the confidence marker of the measured value. The node transmits the current distance of a measured object and it lies in the navigation node's responsibility to decide whether the object is too close or not.

Byte 0:  Distance (bits 7–0)
Byte 1:  Confidence (4 bit), remaining bits reserved

Servo   The servos are programmed to move only if there is an object within the infrared sensor's range, otherwise they stand still in a predefined position. The knowledge whether there is an object or not is gained by reading the vehicle's speed from the bus, because the speed is decreased if the ultrasonic sensors pick up an object within 120 cm of the vehicle. Hence the servos move only if the vehicle is driving "slow forward", and stand still if the vehicle is at a stop, moves forward with normal speed or is in reverse gear. The three servos can assume 13 positions each, where the initial position of the middle sensor is straight ahead and 10° to the outside for the two other servos.

Stored Data   The following data is stored in the servo node's memory:

File 9 Record 1 Byte 0:  Servo Position (bits 7–0)
File 9 Record 2 Byte 0:  Speed (bits 7–0)

Servo Position   8bit   The current position of the servo as a value from 0 to 12.
Speed            8bit   The current speed of the vehicle.

Transmitted Data   The servo node sends one data byte to the bus representing its current position.

Byte 0:  Servo Position (bits 7–0)

Received Data   The servo node receives one byte from the bus containing the information about the vehicle's current speed.

Byte 0:  Speed (bits 7–0)

Light Sensor   The light sensor node is programmed to distinguish between three different states of ambient light, which is achieved by using two threshold values.

Stored Data

File 9 Record 1 Byte 0:  Light strength (bits 7–0)
File 9 Record 1 Byte 1:  Threshold 1 (bits 7–0)
File 9 Record 1 Byte 2:  Threshold 2 (bits 7–0)
File 9 Record 1 Byte 3:  Mode (1 bit), remaining bits reserved

Light strength   8bit   The current value of ambient light.
Threshold 1      8bit   Determines at which value the lights are switched on.
Threshold 2      8bit   Determines at which value the headlights are switched on.
Mode             1bit   Describes the mode of operation.

Transmitted Data   The light sensor node sends one data byte to the bus containing the current state of ambient light. Two modes of operation are possible. In each mode of operation the layout of the transmitted data is different:

1. The node returns the current light strength and it lies in the light emitter node's responsibility to switch the lights on.

   Byte 0:  Light strength (bits 7–0)

2. The light sensor node instructs the light emitter nodes to switch the lights on.

   Byte 0:  Turn lights on/off


Steering   The steering node is programmed to hold the connected servo at a position stored in the node's memory. Thus the servo can assume five different positions which are equivalent to the vehicle's driving directions. The steering node receives the desired driving direction from the navigation node.

Stored Data   The following data is stored in the steering node's file system:

File 9 Byte 0:  Direction (bits 7–0)

Direction   8bit   The vehicle's desired direction.

Received Data   The steering node receives one byte from the bus containing the information about the vehicle's desired driving direction.

Byte 0:  Direction (bits 7–0)

Speed Control   The speed control node is programmed to instruct the connected digital speed control unit on the vehicle's desired speed. Four different speed control commands are available. These commands are transmitted from the navigation node to the speed control node and stored in the node's memory.

Stored Data   The following is stored in the speed control node's file system:

File 9 Byte 0:  Speed (bits 7–0)

Speed   8bit   The vehicle's desired speed.

Received Data   The speed control node receives one byte from the bus containing the information about the vehicle's desired speed.

Byte 0:  Speed (bits 7–0)

Light Emitter   The light emitter nodes are programmed to control five LEDs each, where only three are used by a single light node at one time. The LEDs are connected to the node's output ports as follows:

PORTB PIN0:  light
PORTB PIN1:  headlight
PORTB PIN2:  brake light
PORTB PIN3:  direction indicator - left
PORTB PIN4:  direction indicator - right

Depending on where on the vehicle the light emitter node is placed, different LEDs are connected:

forward/left:   light, headlight, direction indicator - left
forward/right:  light, headlight, direction indicator - right
rear/left:      light, brake light, direction indicator - left
rear/right:     light, brake light, direction indicator - right

The light and headlight LEDs are turned on resp. off depending on the ambient light level, the brake light LEDs are turned on resp. off depending on the vehicle's driving speed, and the direction indicator LEDs are turned on resp. off depending on the vehicle's driving direction.


Stored Data The following data is stored in the light node’s file system: 7

6

5

4

3

2

1

0

Speed

File 9 Record 1 Byte 0 7

6

5

4

3

2

1

0

Direction

File 9 Record 1 Byte 1 7

6

5

4

3

2

1

0

Brightness

File 9 Record 1 Byte 2 Speed Direction Brightness

8bit The current speed of the vehicle. 8bit The current driving direction of the vehicle. 8bit The current ambient light level.

Received Data The light emitter nodes receive three data bytes from the bus: one contains information about the current speed, one contains information about the driving direction, and on contains information about the ambient light . Byte 0: 7

6

5

4

3

2

1

0

2

1

0

2

1

0

Speed

Byte 1: 7

6

5

4

3

Direction

Byte 2: 7

6

5

4

3

Brightness

3.4   Demonstrator Layout

Figure 3.11 shows the layout of the 30 cm x 40 cm wooden board which provides enough space for the 17 cluster nodes, the bus lines, the servos and infrared sensors, the ultrasonic sensors, and the battery blocks.


Figure 3.11: Demonstrator layout

3.4.1   Power Supply

There are two different sources of power on the demonstrator: a 7.2 V battery pack and a pair of 9 V battery blocks. This separation of power sources is necessary due to different supply characteristics:

Component            Supply voltage   Input current (peak load)
Servo                4 V - 6 V        50 mA
Infrared Sensor      4.4 V - 7 V      20 mA
Ultrasonic Sensor    4.5 V - 6.8 V    100 mA
Light Sensor         2.7 V - 6 V      3 mA
Speed Control Unit   5 V              10 mA
LED                  2 V - 5 V        10 mA

The infrared sensors, the light sensor, the digital speed control unit, and the LEDs can be supplied by the appropriate node's output ports. But the servos and ultrasonic sensors would put too much load on these ports and are therefore supplied by a separate power line.

The Bus Line   The bus line supplies all nodes and therefore the infrared sensors, the light sensor, the digital speed control unit, and the LEDs connected to the appropriate nodes. The voltage is taken from two 9 V battery blocks connected in parallel.

The Power Line   The power line supplies the servos and the ultrasonic sensors. The voltage is taken from a 7.2 V battery pack and regulated to 6 V by a voltage regulator. To account for voltage peaks, originating mostly from the ultrasonic sensors, three capacitors are installed on the power line.

3.4.2   Light Control System

The mobile robot features a light control system consisting of an ambient light sensor positioned in the center of the vehicle, and four light emitter nodes positioned close to each corner. To the vehicle's front six LEDs are attached, three on either side: a direction indicator and two headlights triggered independently. To the vehicle's rear six LEDs are attached as well, also three on either side: a direction indicator, a rear lamp and a brake light. Depending on the mobile robot's ambient light, driving direction and current speed the light emitter nodes switch the appropriate LEDs on or off.

3.5   Node Implementation

3.5.1   Master Implementation

The Timer/Counter   The master has a baud rate of 19200 bit/sec which results in a bitlength of about 52 µsec. With one TTP/A slot being 13 bit long this results in a slot length of 677 µsec. The master uses a 16bit Timer/Counter running at 7.3728 MHz. To trigger the master every 677 µsec the microcontroller's timer/counter overflow interrupt is used. This means that if the timer/counter is started at a value of 0xEC7F the overflow interrupt occurs after exactly 677 µsec. But due to the runtime of some overhead code this interrupt has to occur slightly earlier, which means that the timer/counter has to be initialized to a value of 0xECB3.

The State Machine   The master is implemented as a state machine. The transitions between the states occur at predefined points in time and are triggered by the microcontroller's timer/counter. There are four main states in this state machine:

SELECT: The master reads from the file system which round comes next.


SYNC ROUND: The master sends a synchronization pattern in the current slot.

RODL ROUND: Depending on the current operation the master sends a fireworks byte initiating a Multipartner round, sends a data byte, or receives a data byte from the bus.

MS ROUND: Depending on the current operation the master sends a fireworks byte initiating a Master/Slave round, sends a data byte, or receives a data byte from the bus.

As can be seen above, in the states RODL ROUND and MS ROUND there is an additional state variable operation which further distinguishes the current state of the master:

SEND DATA: The master sends a data byte in the current slot.

RECEIVE DATA: The master receives a data byte in the current slot and stores it at a predefined position of its file system.

SEND FB: The master sends a fireworks byte initiating either a Multipartner round (RODL ROUND) or a Master/Slave round (MS ROUND).

MSR: The master sends data for the Master/Slave Request round to a specific slave.

MSD: The master either sends or receives data for the Master/Slave Data round to resp. from a specific slave.

Multipartner Rounds   If a Multipartner round is executed the appropriate RODL file is processed. The master reads resp. writes in every slot and stores the received data bytes from the slaves in the memory locations specified in the RODL records.

Master/Slave Rounds   If a Master/Slave round is executed the master must distinguish between a Master/Slave Request round and a Master/Slave Data round.

Master/Slave Request Round   The data for the Master/Slave round is stored in the master's memory. Therefore all it has to do is read this data and send it in the correct slots.

Master/Slave Data Round   Depending on the operation code of the Master/Slave round the master has to send or receive data to resp. from the specified slave. Again this data is stored in the master's memory.
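The dispatch structure of this state machine could look roughly like the following C sketch. The state and operation names mirror the list above, but all helper functions (uart_send, fireworks_byte, and so on) are hypothetical placeholders, not the actual master implementation.

    /* Illustrative sketch of the master's per-slot state machine dispatch. */
    #include <stdint.h>

    #define SYNC_PATTERN 0x55u

    typedef enum { SELECT, SYNC_ROUND, RODL_ROUND, MS_ROUND } master_state_t;
    typedef enum { SEND_DATA, RECEIVE_DATA, SEND_FB, MSR, MSD } master_op_t;

    /* hypothetical helpers provided elsewhere in the node software */
    extern void           uart_send(uint8_t byte);
    extern uint8_t        ifs_read_current_slot(void);
    extern uint8_t        fireworks_byte(void);
    extern void           send_ms_request_byte(void);
    extern void           transfer_ms_data_byte(void);
    extern master_state_t next_round_from_ifs(void);
    extern void           advance_schedule(master_state_t *s, master_op_t *op);

    static master_state_t state = SELECT;
    static master_op_t    op    = SEND_FB;

    /* called from the timer/counter overflow interrupt, once per TTP/A slot */
    void master_slot_tick(void)
    {
        switch (state) {
        case SELECT:                          /* look up the next round in the IFS */
            state = next_round_from_ifs();
            break;
        case SYNC_ROUND:                      /* emit the synchronization pattern  */
            uart_send(SYNC_PATTERN);
            state = SELECT;
            break;
        case RODL_ROUND:
        case MS_ROUND:
            switch (op) {
            case SEND_FB:      uart_send(fireworks_byte());        break;
            case SEND_DATA:    uart_send(ifs_read_current_slot()); break;
            case RECEIVE_DATA: /* handled in the UART receive interrupt */ break;
            case MSR:          send_ms_request_byte();             break;
            case MSD:          transfer_ms_data_byte();            break;
            }
            advance_schedule(&state, &op);    /* move on to the next slot/round    */
            break;
        }
    }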


The UART   The Atmel microcontrollers feature a hardware UART which makes it very comfortable to send and receive data to and from the bus.

Sending Data   The only thing which has to be considered when sending data through the UART is the correct parity of the data byte. If the data byte to be sent is a fireworks byte, it must have odd parity. Otherwise even parity is required.

Receiving Data   When the microcontroller receives a data byte from the UART an interrupt is triggered. This interrupt is handled by the master as follows. First the data byte is checked for its parity. Since there is only one master in the cluster, the data byte must come from a slave, and the parity must therefore be even. Then the master checks if the received data byte needs to be stored in the file system. If so the data is written to a predefined storage location. In case of a Master/Slave round the check byte must be calculated and compared.

The Interface File System (IFS)   The implementation of the IFS uses the exact same data layout as described in the TTP/A specification [Kop01] with the only difference that the files have a fixed length of 16 records (64 bytes). Each time the microcontroller is restarted the file system is initialized from the microcontroller's EEPROM.

File       record 0        record 1 ... record 15
0 ... 7    Header Record   RODL Record 1 ... RODL Record 15
8          Header Record   CONFIG Record ... CONFIG Record
9 ... 61   Header Record   DATA Record ... DATA Record
62         Header Record   DOC Record ... DOC Record

Table 3.1: Master File System Layout


The RODL Files   The RODL files contain information on which node sends in the current slot. The master is active in every slot and either writes data to the bus or receives the data bytes sent by the slaves and stores them in predefined memory locations in the IFS.

The Configuration File   The configuration file contains information about the slaves' aliases.

The Data Files   The data files contain all kinds of data, but mainly the information sent by the slaves is stored here.

The Documentation File   The documentation file contains the master's electronic datasheet.

The RODL Sequence   To guarantee real-time behavior of the TTP/A cluster the sequence of RODLs must be known to the master throughout operation. Therefore it is necessary that this sequence is stored in the master's memory. Also the Fireworks Bytes are stored in the master's memory. Note that for each RODL two different Fireworks Bytes have to be stored because of the changing Round Discrimination Bit. For this implementation of the TTP/A protocol all data concerning RODLs can be found in RODL file 5. The Fireworks Bytes are stored in records 1-4, the length of the so-called RODL sequence and a counter which indicates the current position in the RODL sequence are stored in the first two bytes of record 5. The RODL sequence itself starts in record 6 and its length is limited only by the file size, i.e. with the current file size of 16 records the maximum size of the RODL sequence would be 10 records (40 bytes):

File 5:  HR | Fireworks Bytes (4 Recs) | RODLmax & RODLpos | R1 ...

Thus the master has quick access to the information which RODL is to execute next, and an application (TTP/C, user, . . . ) can change the sequence online. It is suggested that Master/Slave rounds are executed very frequently as they are used to obtain information on a node's status. Which node is accessed during a Master/Slave round is determined by an IFS entry, which can be altered online by an application (TTP/C, user, . . . ) as needed.

Data for the Master/Slave Round   For this implementation of the TTP/A protocol the information for the Master/Slave Request round is stored in the first three bytes of record 1 of RODL file 6. The check byte is calculated on the fly. The data sent resp. received during the Master/Slave Data round is stored in record 2 of RODL file 6:

File 6:  Header Record | M/S-Request | M/S-Data

Note that only two records of RODL file 6 are used. The remaining file entries can be used for DATA.

Plug and Play Support   This master implementation also supports Plug&Play. Thus it is possible to connect, identify and integrate a new node into the cluster at any time. More information on this can be found in [Pet01].

Monitoring Support   Monitoring of the TTP/A cluster as described in [Pet01] and [Obe01] is also implemented.

3.5.2   Slave Implementation

The slave implementation is based on a previous TTP/A implementation for the Atmel ATmega103 MCU [PS00].

The Timer/Counter   The slaves have a baud rate of 19200 bit/sec which results in a bitlength of about 52 µsec. With one TTP/A slot being 13 bit long this results in a slot length of 677 µsec. The slaves use a 16bit Timer/Counter running at 1/8 of the oscillator's frequency of 7.3728 MHz, i.e. at 921.6 kHz. To trigger the slave every 677 µsec the microcontroller's timer/counter overflow interrupt is used. This means that if the timer/counter is started at a value of 0xFD8F the overflow interrupt occurs after exactly 677 µsec. But due to the runtime of some overhead code this interrupt has to occur slightly earlier, which means that the timer/counter has to be initialized to a value of 0xFD99.

The State Machine   The slave is implemented as a state machine. The transitions between the states occur at predefined points in time and are triggered by the microcontroller's timer/counter. There are eight main states in this state machine:

STARTUP: This is the slave's initial state where it simply waits for a Synchronization Pattern from the master. Upon reception of this pattern the slave's state changes to RECEIVE FIREWORK.

IDLE: The slave has to do nothing but wait for a counter to run out.


RECEIVE FIREWORK: In this state the slave waits for a Fireworks Byte from the master. The Fireworks Byte can simply be a Synchronization Pattern, or call for either a Multipartner or a Master/Slave round. In case of a Multipartner round the appropriate RODL file is read and data is prepared. The processing of a Master/Slave round is described later in more detail.

SEND: First it is determined if the send operation has to be performed in the current slot. If so the data byte is sent and preparations for the next slot are made. Otherwise the send operation is simply suspended until the correct slot arrives.

RECEIVE: First it is determined if the receive operation has to be performed in the current slot. If so the data byte is received and stored at the specified location in the memory, and preparations for the next slot are made. Otherwise the receive operation is simply suspended until the correct slot arrives.

MS REQ: All four data bytes sent by the master are received and processed.

MS DATA: In case of a WRITE operation by the master, four data bytes plus one check byte are received and stored at the specified location in the memory. In case of a READ operation by the master, four data bytes plus one check byte are sent.

IRG: The Inter Round Gap is used to prepare data for the Master/Slave rounds.

Multipartner Rounds   When executing a Multipartner round the appropriate RODL file is read and the first RODL record is analyzed. The RODL record contains information on what to do in which slot of the round. Therefore it is determined whether it is a READ or a WRITE operation and when this operation has to be executed.

Master/Slave Rounds   When executing a Master/Slave round the node must distinguish between a Master/Slave Request round and a Master/Slave Data round.

Master/Slave Request Round   The first byte sent by the master during a Master/Slave Request round is the node alias. This received node alias has to be compared with the node's own alias to determine whether the node is involved in this Master/Slave round or not.


If so, the remaining data bytes are received and, depending on the received operation code, data has to be prepared for a SEND operation or the specified memory location has to be prepared for a RECEIVE operation. These preparation steps are taken during the following Inter Round Gap. If the node is not involved in this Master/Slave round then its state changes to IDLE for the rest of the Master/Slave Request round.

Master/Slave Data Round   If the comparison of the node alias during the Master/Slave Request round showed that the node is involved in this Master/Slave Data round, then data has to be sent to or received from the master, depending on the operation code received during the Master/Slave Request round.

The Interface File System (IFS)   Due to the limited available resources the slave's IFS is implemented as follows: Only two RODL files with five records each are provided, the configuration file 8 has only one record, two data files with four records each are available for storing data, and finally the documentation file 62 has one record.

0    RODL 0 (5 records)
1    RODL 1 (5 records)
2    RODL 2 (not implemented)
...
7    RODL 7 (not implemented)
8    CONFIG file (1 record)
9    DATA for RODL 0 (4 records)
10   DATA for RODL 1 (4 records)
11   DATA file (not implemented)
...
61   DATA file (not implemented)
62   DOC file (1 record)

The RODL Files   The RODL files contain information on when to send or receive one or more data bytes to resp. from the bus.

The Configuration File   The configuration file contains the node's Serial and Series Numbers as well as the node's alias. The Serial and Series Number are an integral part of the process of baptizing a new node [Pet01]. The node alias is a unique number within the cluster and helps the master to distinguish between the nodes, e.g. during Master/Slave rounds.


The Data Files   The data files contain all data that is either generated by a connected sensor and sent to the bus, or received from the bus and sent to a connected actuator.

The Documentation File   The documentation file contains the node's electronic datasheet.

Plug and Play Support   This slave implementation also supports Plug&Play. Thus it is possible to connect, identify and integrate a new node into a cluster at any time. More information on this can be found in [Pet01].

3.5.3   Timing Issues

As the TTP/A protocol is a real-time protocol the timing is of utmost importance. Thus one has to be very careful about when to send data resp. when to expect to receive data. Also the clock drift of the nodes has to be considered and corrected from time to time.

Sending Data   As the timer/counter interrupts arrive every 677 µsec and data must be sent almost immediately, this data has to be prepared for sending in the preceding slot. This is achieved by parsing the next RODL record and, in case of a send operation, reading the required data byte from the memory into a buffer.

Send after Send   The operation of the current slot is SEND DATA and the operation of the following slot is also SEND DATA. As the sending of the data byte to the UART only takes a very small amount of time compared to the slot length, there is enough time to prepare the data for the following slot (see Figure 3.12).

Figure 3.12: Sending data in two successive slots.


Send after Receive   The operation of the current slot is RECEIVE DATA and the operation of the following slot is SEND DATA. As Figure 3.13 shows, there would be too little time to prepare the data in time. Hence the data for the following slot is prepared at the beginning of the current slot, just as described above. The drawback of this approach is that it eliminates the possibility to receive data and send the same data in the very next slot.

Figure 3.13: Sending data after receiving data in the preceding slot leaves too little time for data preparation.

Receiving Data When receiving data there is not such a timing problem as with sending data since the data receive interrupt comes towards the end of a slot. Thus there is enough time to prepare the storage location for the data to be received (see Figure 3.14).

Figure 3.14: Receiving data in two successive slots.

Resynchronization   In order to synchronize the slaves with the master's notion of time the slaves' clocks are reset to a certain value in any of the following cases:

Synchronization pattern   The master sends the so-called Synchronization Pattern very frequently prior to a Multipartner or Master/Slave round.


The data sent in this pattern is 0x55 with odd parity (see Figure 3.15) which is used to measure the master's bitlength if there is no hardware UART available. Whenever the slaves receive this pattern, they resynchronize to the master.

Figure 3.15: The Synchronization Pattern.

Fireworks Byte   The Fireworks Byte signals the start of a new Multipartner or Master/Slave round. Since it is important that all slaves are in synch with the master at this point in time, the Fireworks Byte is used for resynchronization. The Fireworks Bytes can be recognized by the slaves by having odd parity.

Read/Sync Message   During long Multipartner rounds it can happen that the slave's clock drift becomes so severe that correct operation is no longer possible for the rest of the Multipartner round. Therefore Read/Sync messages are scheduled. These messages are issued by the master and do not necessarily contain any useful data. Their main task is to provide the slaves with the current global time as seen by the master. All slaves have to read in this specific slot in order to resynchronize with the master.

The resynchronization itself is implemented as follows: Whenever a slave receives a Synchronization Pattern, a Fireworks Byte, or a Read/Sync Message, its timer/counter is set to generate an interrupt after a time period equivalent to 2 bit. Thus the timer/counter is set to 0xFFB0 to generate an interrupt after 104 µsec (see Figure 3.16).
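In code, the resynchronization boils down to reloading the slot timer whenever one of these synchronizing bytes arrives, roughly as in the sketch below; the register name assumes an AVR target and the surrounding reception logic is omitted.

    /* Sketch: slave resynchronization on a synchronizing byte (sync pattern,
     * fireworks byte, or Read/Sync message).  Assumes a 16-bit AVR timer
     * prescaled to 921.6 kHz, as described in Section 3.5.2. */
    #include <avr/io.h>

    #define RESYNC_RELOAD 0xFFB0   /* next slot interrupt roughly two bit times later */

    void on_sync_byte_received(void)
    {
        TCNT1 = RESYNC_RELOAD;     /* realign the slot timer to the master's time */
        /* ... continue with normal round processing ... */
    }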

3.5.4   TTP/A Round Layout

The layout of the Multipartner round designed for the correct operation of the vehicle is stored in RODL file 1 of every node in the cluster. The round starts out with the Fireworks Byte sent by the master. The second slot is also occupied by the master (this is necessary for slave nodes using a software UART). After these two initiating slots the servos, infrared sensors, ultrasonic sensors, and the light sensor send their data.


Figure 3.16: Resynchronization

The navigation node receives all this information, calculates the best driving direction and speed, and transmits the results in the last two slots of the round to the steering node and the speed control node. The light emitter nodes only receive data in the last three slots (see Figure 3.17). This Multipartner round is 18 slots long which makes it necessary to insert Read/Sync messages sent by the master. They were introduced to account for the slave nodes' clock drift during long Multipartner rounds. Testing showed that Read/Sync messages should be at most eight slots apart. Thus a first Read/Sync message is sent eight slots after the Fireworks Byte and a second one is sent six slots after the first Read/Sync Message.

3.6   Application

The main application of the demonstrator is of course driving around without crashing into an obstacle. All necessary parts for this task are already available and all that is left to do is putting it all together.

3.6.1   Data Acquisition

Infrared Sensor   The Sharp infrared sensors have a range of up to 80 cm and one measurement takes approx. 80 ms. Since each sensor can only measure a sector with a width of 5 cm at one time it is necessary to sweep the sensors to get acceptable measurements of the vehicle's close surroundings. Therefore the sensors are mounted on servos moving through 13 different positions in a sensor field of 120°. This angle is caused by the physical properties of the servo. Because of their limited range, the infrared sensors are used for the sensing of the vehicle's close proximity. Therefore the servos are standing still at a predefined position if the ultrasonic sensors don't pick up an object in the infrared sensor's range.


Slot   Sender         Data                  Receiver
0      Master         Fireworks Byte        all Nodes
1      Master         Read/Sync             all Nodes
2      Servo 1        Position              Navigation
3      Infrared 1     Distance              Navigation
4      Infrared 1     Confidence Marker     Navigation
5      Servo 2        Position              Navigation
6      Infrared 2     Distance              Navigation
7      Infrared 2     Confidence Marker     Navigation
8      Master         Read/Sync             all Nodes
9      Servo 3        Position              Navigation
10     Infrared 3     Distance              Navigation
11     Infrared 3     Confidence Marker     Navigation
12     Ultrasonic 1   Distance              Navigation
13     Ultrasonic 1   Confidence Marker     Navigation
14     Master         Read/Sync             all Nodes
15     Ultrasonic 2   Distance              Navigation
16     Ultrasonic 2   Confidence Marker     Navigation
17     Light Sensor   Ambient Light Level   Light Emitter
18     Navigation     Speed                 Speed Control, Light Emitter, Servo 1,2,3
19     Navigation     Direction             Steering, Light Emitter

Figure 3.17: Round Layout for RODL File 1.

This helps to conserve energy and simplifies the navigation algorithm. Figure 3.18 shows the three infrared sensors attached to the vehicle's front with their sensor field of 120° and a range of approx. 80 cm.

Servo   The servos move the infrared sensors to a new position every 0.2 sec. As the movement itself takes some time, a sensor does not hold its position for the whole 0.2 sec, but enough time for two measurement cycles remains. It is not possible to decrease this timing constant because testing revealed that, due to the time taken to reach a new position, any value below 0.2 sec would result in heavy jitter of the servo and thus in the loss of control over the servo's position.


Figure 3.18: The sensor field covered by the three infrared sensors.

Sensor Sweep Algorithm   In order to sweep through its sensor field, each infrared sensor can assume any of the 13 predefined positions depicted in Figure 3.19.

Figure 3.19: The sensor’s predefined positions.

There are three different approaches to how the servos can sweep the infrared sensors through their sensor fields. These approaches are distinguished by the sequence in which the servos assume the 13 positions.

Windshield Wiper   This approach measures each sector in the sensor field twice in one back and forth wipe but has the huge disadvantage of different time intervals between measurements of the different sectors, i.e. sectors in the center of the sensor field are measured in more even intervals than sectors on the edge of the sensor field (see Figure 3.20a).


In these sectors the time between two subsequent measurements may vary between 0.2 sec and 5.2 sec.

Alternating Windshield Wiper   This approach measures every second sector in the sensor field in its forward wipe and the other half in its backward wipe (see Figure 3.20b). Therefore the intervals between two subsequent measurements of the same sector are constant at approx. 2.6 sec. Although the jump between two subsequent positions is larger and therefore takes more time, enough time for the infrared sensors' measurements remains.

Jump Back   This approach measures each sector in the sensor field in one forward wipe, but instead of measuring on its backward wipe (like the "Windshield Wiper") the sensor jumps back to the first sector of the sensor field (see Figure 3.20c). The disadvantage of this approach is that the time taken for the "jump" may be longer than 0.2 sec, depending on the power source's status. In that case the measurement at sensor position 1 would be skipped. As this is not acceptable, a solution to this problem is to delay the measurement at position 1 until the servo has assumed this position. But in doing so the sensor does not provide any measurements during the time t_jump taken for the "jump".

Figure 3.20: Sensor sweep algorithms: a) Windshield Wiper, b) Alternating Windshield Wiper, and c) Jump Back; the numbers are corresponding to the sensor’s position
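The alternating windshield wiper ordering can be illustrated with the small, self-contained sketch below; the numbering of the positions as 1–13 follows the text, everything else is only an illustration of the sequence.

    /* Sketch of the "alternating windshield wiper" sequence: odd positions
     * on the forward wipe, even positions on the backward wipe, giving a
     * 2.6 s full sweep at 0.2 s per step. */
    #include <stdio.h>

    int next_position(int pos, int *forward)
    {
        if (*forward) {
            if (pos + 2 <= 13) return pos + 2;   /* 1, 3, 5, ..., 13 */
            *forward = 0;
            return 12;                           /* turn around onto the even positions */
        }
        if (pos - 2 >= 2) return pos - 2;        /* 12, 10, ..., 2 */
        *forward = 1;
        return 1;                                /* start the next forward wipe */
    }

    int main(void)
    {
        int pos = 1, forward = 1;
        for (int i = 0; i < 13; i++) {           /* one full sweep */
            printf("%d ", pos);
            pos = next_position(pos, &forward);
        }
        printf("\n");                            /* prints: 1 3 5 7 9 11 13 12 10 8 6 4 2 */
        return 0;
    }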

One measurement for one sector takes 80 msec but the servos move to a new position only every 0.2 sec. Therefore the times for one full sensor sweep can be calculated:


Algorithm                      Time
Windshield Wiper               5.2 sec
Alt. Windshield Wiper          2.6 sec
Jump Back                      2.6 sec + t_jump

where t_jump depends on the battery pack's status. As the alternating windshield wiper proved to be most suitable it was chosen for this application.

Ultrasonic Sensor   The Polaroid ultrasonic sensors have a range of up to 10 m. The length of the measurement depends on the object's distance to the vehicle and varies between 5 msec and 30 msec. The ultrasonic sensors are used for long distance scanning of what lies straight ahead of the vehicle. The two ultrasonic sensors are attached directly behind the infrared sensors and slightly elevated. Each sensor is directed approx. 5° off the vehicle's driving direction (see Figure 3.21).

3.6.2   Data Fusion

The vehicle uses five sensors to observe its surroundings. For the three infrared sensors it takes at least 2.6 sec to complete a full sensor sweep through 13 sectors. To take advantage of the multitude of information and to "remember" older sensor values the sensor data is collected and stored in one data structure, the Sensor Grid.

Sensor Grid   The Sensor Grid describes the surroundings of the vehicle. It is a two-dimensional data structure where each cell represents a 10 cm x 10 cm space (see Figure 3.22). Each cell holds a 4bit certainty value determining whether there is an obstructing object at that location (value 0xF) or there is nothing at that location (value 0x0). As the vehicle moves within the Sensor Grid some values are updated by new measurements and other values become uncertain. Therefore the values in the Sensor Grid converge towards an uncertain state (value 0x7) where it is impossible to say whether there is an obstruction at that location or not. This update of the Sensor Grid's values occurs at well defined times, e.g. whenever the vehicle has moved 10 cm.

Keeping the Grid up-to-date   It is very important that the sensor grid is always up-to-date as the navigation of the vehicle depends upon this information.


Figure 3.21: The sensor field covered by the two ultrasonic sensors.

New Sensor Measurements   Whenever new measurements of a certain sector are available, the certainty values in this sector are updated. If an obstructing object was detected the values in the cells representing the location of the object are set to 0xF. The cells in the "shadow" of the object maintain their current value and all other cells in the sector are set to 0x0. Another approach would be to set the cells in the "shadow" of an object to 0x7 as it is unknown what lies behind the object, but testing showed a loss of information as already known data would be erased.

Standing Still   When the vehicle stays at one location without forward or backward movement, the sensor grid is constantly updated as the sensors continue their measurements.


Figure 3.22: The vehicle’s sensor grid.

Straight Ahead Driving   When the vehicle moves straight ahead, the certainty value of each grid cell is moved one cell "down" in the sensor grid as the vehicle advances one cell in the real world. Additionally each value is decreased by one if it is higher than 0x7 or increased by one if it is lower than 0x7. If the cell's value is 0x7, it stays 0x7. The top row of the sensor grid is initialized with uncertain values (0x7) as it is unknown what lies beyond the border of the sensor grid (see Figure 3.23).
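A compact sketch of this update step is given below. The grid dimensions are assumptions (the text only fixes the 10 cm x 10 cm cell size), and the code is illustrative rather than the actual implementation.

    /* Sketch: shifting the certainty grid one cell "down" after the vehicle
     * has advanced 10 cm straight ahead, and letting every value decay
     * towards the uncertain state 0x7. */
    #include <stdint.h>
    #include <string.h>

    #define GRID_ROWS 16          /* assumption: actual grid size not stated here */
    #define GRID_COLS 16

    static uint8_t grid[GRID_ROWS][GRID_COLS];

    void grid_shift_forward(void)
    {
        for (int r = GRID_ROWS - 1; r > 0; r--)        /* move each row one cell down */
            memcpy(grid[r], grid[r - 1], sizeof grid[r]);

        memset(grid[0], 0x7, sizeof grid[0]);          /* unknown space enters at the top */

        for (int r = 0; r < GRID_ROWS; r++)            /* decay towards "uncertain" (0x7) */
            for (int c = 0; c < GRID_COLS; c++) {
                if (grid[r][c] > 0x7) grid[r][c]--;
                else if (grid[r][c] < 0x7) grid[r][c]++;
            }
    }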

Figure 3.23: Updating the sensor grid while driving straight.

Driving a Curve   Things become more complicated when the vehicle drives a curve. The angle of the steering and the speed of the vehicle must be considered to update the grid values accordingly. But since it is not the thesis' main focus to describe an algorithm for updating the grid while driving a curve, a very simple approach is taken here:


The value of each grid cell is moved one cell "down" and one cell to the left resp. to the right, according to the driving direction. As before, values are decreased by one if they are higher than 0x7 or increased by one if they are lower than 0x7. If the cell's value is 0x7, it stays 0x7. Since this is a very rough approximation the vehicle is slowed down significantly in order to account more for the sensor measurements. Once the steering is reset to straight ahead driving the vehicle accelerates to its normal speed (see Figure 3.24).

Figure 3.24: Updating the sensor grid while driving a curve.

Reverse Gear When the vehicle moves in reverse gear, i.e. straight back, the certainty value of each grid cell is moved one cell ”up” in the sensor grid as the vehicle advances one cell in the real world. Additionally each value is decreased by one if it is higher than 0x7 or increased by one if it is lower than 0x7. If the cell’s value is 0x7, it stays 0x7. The bottom row of the sensor grid is initialized with uncertain values (0x7) and the values directly ”above” the vehicle are set to 0x0 as it can be assumed that the vehicle moves through unobstructed space (see Figure 3.25). Since the vehicle has no sensors attached to its rear, it is not possible to drive a curve while in reverse gear.

3.6.3   Navigation

Now that the surroundings of the vehicle are known and stored in an easily accessible data structure, the next step is to navigate in this environment. As the vehicle's main focus is obstacle avoidance, an algorithm for finding an unobstructed path has to be applied.


Figure 3.25: Updating the sensor grid while in reverse gear.

Vector Field Histogram   This approach divides the sensor grid into disjoint polar sectors S_k (see Figure 3.26). For each cell C_ij in a given sector an obstacle vector m_ij is calculated. The magnitude of m_ij depends on the certainty value c_ij of the cell and also on the distance d_ij between the vehicle and the cell C_ij:

m_ij = (c_ij)² (a − b·d_ij)   with   a − b·d_max = 0,

where a and b are positive constants and d_max is the distance between the farthest cell and the vehicle.

Figure 3.26: Dividing the sensor grid into polar sectors.


Hence the sum over all obstacle vectors m_ij in sector S_k forms an obstacle density entity h_k:

h_k = Σ_{C_ij ∈ S_k} m_ij,   k = 1, . . . , n.

At this point the entities h_1, h_2, . . . , h_n are used to form a histogram (see Figure 3.27), which can be used for obstacle avoidance. Areas in the histogram where the vector magnitudes are big indicate regions with high obstacle density, while areas with low vector magnitudes indicate regions with low obstacle density. By adapting a threshold to the histogram it is possible to localize regions of sectors with low obstacle density which can be used for obstacle avoidance.

Figure 3.27: Histogram of the obstacle density.
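A sketch of how the polar obstacle densities h_k could be computed from the sensor grid is given below. The grid size, the sector count, and the helper functions sector_of() and dist_to_vehicle() are assumptions made for the example; only the formulas follow the description above.

    /* Sketch: building the polar obstacle density histogram h_k from the grid. */
    #include <stdint.h>

    #define GRID_ROWS 16            /* assumed grid dimensions */
    #define GRID_COLS 16
    #define SECTORS   13            /* assumed number of polar sectors */

    extern uint8_t grid[GRID_ROWS][GRID_COLS];          /* 4-bit certainty values c_ij   */
    extern int     sector_of(int row, int col);         /* polar sector index of a cell  */
    extern double  dist_to_vehicle(int row, int col);   /* distance d_ij to the vehicle  */

    void build_histogram(double h[SECTORS])
    {
        double d_max = 0.0;                              /* distance of the farthest cell */
        for (int r = 0; r < GRID_ROWS; r++)
            for (int c = 0; c < GRID_COLS; c++)
                if (dist_to_vehicle(r, c) > d_max)
                    d_max = dist_to_vehicle(r, c);

        const double a = 1.0;                            /* positive constants chosen so  */
        const double b = a / d_max;                      /* that a - b * d_max = 0        */

        for (int k = 0; k < SECTORS; k++)
            h[k] = 0.0;

        for (int r = 0; r < GRID_ROWS; r++)
            for (int c = 0; c < GRID_COLS; c++) {
                double c_ij = (double)grid[r][c];
                double m_ij = c_ij * c_ij * (a - b * dist_to_vehicle(r, c));
                h[sector_of(r, c)] += m_ij;              /* sum m_ij over each sector S_k */
            }
    }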

3.7   Monitoring

During the work on the project the need to look ”into” the nodes emerged. Even though an emulator to test the function of one single node was available, it proved to be difficult to test the whole cluster. Therefore a monitoring tool [Obe01] [Pet01] was developed which, once connected to the master node, is able to view and change the interface file system (IFS) of any node in the cluster. Also the RODL files can be read and viewed in a plain form (see Figure 3.28). This tool proves to be useful when it comes to testing the master/slave communication within the cluster. As all information concerning mas60


Figure 3.28: View of a RODL file.

As all information concerning master/slave rounds is stored in the master’s IFS, this data can be changed at any time to initiate communication with different nodes. Another feature of this monitoring tool is the possibility to create snapshots of the memory layout, e.g. Figure 3.29 shows the status of the sensor grid.


Figure 3.29: Snapshot of the sensor grid.


Chapter 4

Evaluation

4.1 Results

4.1.1 Programming Software

All software for the car was written in C. The code was generated with the ”avr-gcc” compiler¹, which is distributed under the GNU Public License. This is a port of the well-known GNU compiler to the Atmel AVR microcontrollers by Denis Chertykov. The compiler is ANSI-C compatible and produces quite good code, although frequent problems with function definitions occurred where variables were simply overwritten with random values.

Node                 | Available Resources  | Used Resources                                              | Utilization
Master AT90S8515     | 8K ROM, 32K RAM      | 4K ROM, 16K RAM                                             | 50%, 50%
Navigation AT90S8515 | 8K ROM, 32K RAM      | 8K ROM, 8K RAM                                              | 100%, 25%
Sensor AT90S4433     | 4K ROM, 128 Byte RAM | 3.4K ROM (protocol) + 0.5K ROM (application), 128 Byte RAM  | 98%, 100%
Actuator AT90S4433   | 4K ROM, 128 Byte RAM | 3.4K ROM (protocol) + 0.4K ROM (application), 128 Byte RAM  | 95%, 100%

Table 4.1: Memory requirements

Table 4.1 shows the memory requirements for the implementation of the protocol and the application for the master and slave nodes. The application values are typical for sensors and actuators.

¹ More information is available at http://www.gnu.org and http://8bit.at/avr/


4.1.2 Using TTP/A

Since this project was set up as a demonstrator for the TTP/A protocol, there was no other choice of communication protocol. The concept of TTP/A, however, proved to be very useful for the task at hand. The existence of the interface file system in every node makes it possible to store sensor data locally without the need to send it immediately after acquisition. Also, due to the use of smart transducers, the data can be conditioned locally, which takes load off the navigation node. The time division multiple access (TDMA) bus arbitration scheme guarantees timely exchange of real-time data.

4.1.3 Synchronization

The synchronization pattern is sent by the master at regular intervals to enable the slave nodes to synchronize to the master’s perception of time. Slaves also synchronize to the master when receiving a fireworks byte. Testing showed that the maximum interval between two synchronizing events must not be greater than eight slots, since the slaves’ perception of time shifts due to their clock drift. If this interval is in fact greater than eight slots, the slave is no longer able to receive messages from the cluster and must be reset. As Figure 3.17 depicts, the application’s standard RODL contains two Read/Sync messages, enabling the slaves to participate in this multipartner round even though it is 20 slots long.

4.1.4 Separation of Protocol- and Application-Code

A distinct separation of protocol- and application-code was applied throughout the project. This was achieved by utilizing the two independent timers provided by the Atmel microcontrollers. A 16-bit timer was used for the timing of the protocol and an 8-bit timer was used to trigger events in the application. This separation allowed the protocol code to be reused for all 16 slave nodes regardless of the service they provide. The implementation of the smart transducers’ applications was therefore decoupled from the communication between the nodes.
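The sketch below illustrates this split on an AVR microcontroller, assuming avr-libc style interrupt vectors; the actual vector names depend on the device, and the timer configuration (prescaler, compare value) as well as the hook function names are assumptions of this sketch.

#include <avr/interrupt.h>
#include <avr/io.h>

extern void ttpa_slot_tick(void);     /* protocol layer hook, assumed name */
extern void app_periodic_task(void);  /* application hook, assumed name   */

/* 16-bit timer compare interrupt: drives the TTP/A slot timing. */
ISR(TIMER1_COMPA_vect)
{
    ttpa_slot_tick();
}

/* 8-bit timer overflow interrupt: triggers periodic application work,
 * e.g. advancing a sensor servo to its next scan position. */
ISR(TIMER0_OVF_vect)
{
    app_periodic_task();
}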

4.1.5 Cluster Communication

To test the communication within the cluster during real-time rounds, an oscilloscope was connected to the bus line. This made it easy to find potential sources of errors by checking the communication in every single TTP/A slot. Also the correct timing of the nodes as well as their synchronization to the master’s time base could be observed.


Implementing and testing single nodes on the network proved to be no problem at all. But when it came to connecting several nodes to the cluster simultaneously, strange errors occurred. Most of these errors were easy to correct, but two still exist:
• The ultrasonic sensor has a range of up to 10 m. For this application a reduction to 5 m was found suitable, as the mobile robot is used mainly indoors. While implementing the necessary software to achieve a decent accuracy, no problems occurred. But when the sensor was finally connected to the cluster, the measurement of objects beyond 2.5 m became unreadable due to heavy jitter. After hours of testing and consultation of electrical engineering specialists, still no solution to this problem could be found. Therefore the ultrasonic sensors’ range is limited to 2.5 m.
• Due to the use of different interrupt service routines (ISRs) in each node, interferences between these ISRs occurred. Since no interrupts are handled during an ISR, the most severe problem was failing to send a message to or receive a message from the bus. This happened while the servos moved to their next position, which was achieved by utilizing an ISR after a counter overflow. The problem was solved by preferring UART interrupts to timer interrupts, which results in occasional but negligible jitter of the servos; one possible realization is sketched below.
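One plausible way to give the UART interrupts precedence over the timer interrupt (a sketch under assumed names, independent of the timer sketch above, and not necessarily how it was implemented here) is to make the long-running servo ISR interruptible so that a pending UART interrupt can preempt it:

#include <avr/interrupt.h>
#include <avr/io.h>

extern void advance_servo_position(void);   /* assumed servo routine */

/* Long-running servo work executes in a timer ISR. Re-enabling global
 * interrupts right away lets a pending UART (bus) interrupt preempt the
 * servo code, at the price of occasional servo jitter. */
ISR(TIMER0_OVF_vect)
{
    sei();                      /* allow the UART ISR to nest */
    advance_servo_position();
}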

4.1.6 Navigation Algorithm

The navigation algorithm implements the vector field histogram method introduced in Section 2.5.5, with one slight difference: the absence of a target. The implementation required a lot of testing to make sure that the image of the mobile robot’s environment is represented correctly in the sensor grid. This was achieved by utilizing the monitoring tool. In the end the invested time paid off, as the results are quite satisfactory.

4.1.7 Driving Speed

The driving speed of the mobile robot had to be kept low due to the relatively long time a single infrared sensor needs for one sweep. As it takes approx. 2.6 sec for each sensor to sweep through its sensor field, a speed of 10 cm/sec (0.36 km/h) was chosen, so the vehicle advances only about 26 cm per sweep. This value also proved to be useful as the updating interval of the sensor grid’s cell values. In the case of the ultrasonic sensors not detecting an obstacle in their sensor range, the vehicle can accelerate to a speed of up to 50 cm/sec (1.8 km/h). Higher speeds are not recommended with regard to the braking distance.


If the ultrasonic sensors detect an obstacle which is not in the infrared sensors’ range, the vehicle approaches this obstacle at a speed of approx. 20 cm/sec (0.72 km/h).

4.1.8 Power Consumption

The two 9 V battery blocks supply the nodes, the infrared sensors, the light sensor, the speed control unit, and the LEDs. The 7.2 V battery pack supplies the ultrasonic sensors, the servos, and the motors. Together the power sources can operate the mobile robot non-stop for approx. half an hour.

4.1.9 Node Development

Although it took a considerable amount of time, the design of the master and slave nodes using the EAGLE Layout Editor² proved to be an essential part of the project. Without these custom nodes it would not have been possible to structure the mobile robot as shown in Figure 3.11. The designed nodes are currently being used in a subsequent project.

4.2 Encountered Problems

4.2.1 The Infrared Sensors

The first problem that arose with the infrared sensors was the discovery that each of the three sensors had to be calibrated differently. This came as a surprise, since the output values were specified in the sensors’ documentation. But this proved to be no problem at all, as it could be handled in software. Another problem was the jitter of the measured value when the sensor ”looked” at objects beyond its range. After some research on related articles³ the reason was found to be the following: the material used for the sensor’s hull is in fact conductive and therefore needs to be grounded.
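Per-sensor calibration of this kind can be handled with a small table per sensor and linear interpolation, as in the following sketch. All table values and names are placeholders; real values would have to be measured for each individual sensor.

#include <stdint.h>

#define CAL_POINTS 5

/* One calibration table per infrared sensor: raw reading -> distance in cm.
 * The numbers below are placeholders, not measured values. */
struct ir_calibration {
    uint8_t raw[CAL_POINTS];   /* decreasing raw value for increasing distance */
    uint8_t cm[CAL_POINTS];
};

static const struct ir_calibration ir_cal[3] = {
    { {230, 180, 140, 110, 90}, {10, 20, 40, 60, 80} },
    { {225, 175, 138, 108, 88}, {10, 20, 40, 60, 80} },
    { {240, 185, 145, 112, 92}, {10, 20, 40, 60, 80} },
};

/* Convert a raw reading of sensor 'id' to a distance by linear interpolation. */
static uint8_t ir_raw_to_cm(uint8_t id, uint8_t raw)
{
    const struct ir_calibration *c = &ir_cal[id];

    if (raw >= c->raw[0])
        return c->cm[0];
    for (int i = 1; i < CAL_POINTS; i++) {
        if (raw >= c->raw[i]) {
            /* interpolate between calibration points i-1 and i */
            uint8_t dr = c->raw[i - 1] - c->raw[i];
            uint8_t dc = c->cm[i] - c->cm[i - 1];
            return c->cm[i - 1] + (uint8_t)(((c->raw[i - 1] - raw) * dc) / dr);
        }
    }
    return c->cm[CAL_POINTS - 1];   /* beyond the calibrated range */
}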

4.2.2 Proper Grounding

The use of an ISO-K line bus, which is very susceptible to interference, resulted in various grounding issues that had to be overcome throughout the project because of the interaction between the bus and the transducer nodes. On the one hand the electric power consumption of the actuators disturbs the bus, on the other hand the communication activity on the bus leads to oscillations in the power supply that influence the analog electronics.

² Available at http://www.cadsoft.de
³ Available at http://www.barello.net/Papers/GP2D02/


Chapter 5

Conclusion

5.1 Summary

This thesis describes key issues in the design of an autonomous mobile robot. The mobile robot is equipped with a suite of distance sensors for scanning its environment and actuators for controlling the mobile robot’s driving direction. Together with low-cost microcontrollers, the sensors and actuators form smart transducer nodes, enabling the mobile robot to react to changes in its environment at any given time, thus making it a real-time system. The nodes are arranged in a decentralized network for reasons of modularity and fault tolerance.

The time-triggered protocol TTP/A is used for communication among the nodes of the smart transducer network. This protocol introduces the interface file system (IFS), which is intended to contain all relevant data of a certain node. The IFS acts as a smart transducer interface that hides the internal properties of a node and can be reused at higher levels, e.g. after the application of sensor fusion.

Multi-sensor data fusion is applied to combine the measurements of the single distance sensors in order to obtain a unified view of the mobile robot’s environment. For this task a special data structure, the so-called sensor grid, is used as a model of the robot’s surroundings. Sensor measurements are stored in this data structure as certainty values, indicating the probability of an obstacle at a certain location. As the mobile robot moves on through its environment, older measurements become more and more uncertain and are subsequently updated by new measurements. This approach guarantees the temporal accuracy of the sensor grid, which represents the basis for the navigation algorithm.

The main task of the application is to compute a safe passage through the mobile robot’s surroundings by avoiding obstacles detected by the sensors. A special navigation algorithm is applied to the sensor grid, computing the driving directions for the mobile robot.


The sensor grid, representing an image of the robot’s environment, is divided into disjoint polar sectors. By adding up the certainty values in each sector, a polar histogram is generated that represents the polar obstacle density (POD) for each sector. The sector with the lowest POD is chosen as driving direction.

A three-level architectural model is implemented to reduce the system’s overall complexity. The node level consists of the smart transducer nodes interacting physically with the environment by performing measurements with sensors and actions with actuators. The cluster level integrates the smart transducer nodes into the network and performs sensor fusion. At the application level decisions about control values are made based on the data provided by the cluster level. These control values are again used at the node level to perform changes in the mobile robot’s motion.

Well-defined interfaces mediate between the levels. The smart transducer interface handles the data flow from node level to cluster level. This is achieved by mapping all relevant data of a node into TTP/A’s interface file system. An image of the environment is the result of the operations performed at the cluster level. This image is provided to the application level in the form of a sensor grid. After applying the navigation algorithm, the resulting driving directions are provided to the actuator nodes through TTP/A’s smart transducer interface.

5.2 Outlook

An alternative approach to the utilization of the infrared sensors came up during the course of the work and is explained here. Currently all infrared sensor measurements contribute equally to the sensor grid data structure. A faulty sensor has to be detected at node level and cannot be verified by comparison with other sensor measurements, since two infrared sensors provide measurements of the same object only occasionally, i.e. when they happen to point at the same object. A cross-verification could be performed if the sensor grid contained not only the probability of an obstacle being at a certain location, but also the identifier of the sensor that performed the measurement. In such a model each measurement would be stored with a weight. In the case that a new measurement disagrees with the existing measurement for the same location, the weights of both participating sensors are decreased; otherwise their weights are increased. Because there are three or more sensors, the weight of a single faulty sensor will be repeatedly decreased each time it provides a faulty measurement. The weight of the correct sensors will be decreased when checking against faulty measurements and increased when checks are consistent. Thus faulty sensors diminish their contribution to the sensor grid data structure and recover automatically when they regain correct functionality.
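A minimal sketch of such a weighting scheme is given below. The data layout, the agreement threshold, and the weight range are assumptions chosen for illustration; they are not part of an existing implementation.

#include <stdint.h>

#define NUM_SENSORS  3
#define MAX_WEIGHT   15
#define AGREE_DELTA  2     /* assumed tolerance between certainty values */

/* Per-cell record: certainty value plus id of the sensor that wrote it. */
struct cell {
    uint8_t certainty;
    uint8_t sensor_id;
};

static uint8_t weight[NUM_SENSORS] = { MAX_WEIGHT, MAX_WEIGHT, MAX_WEIGHT };

/* Merge a new measurement into a cell, adjusting sensor weights whenever two
 * different sensors report on the same location. */
static void merge_measurement(struct cell *c, uint8_t sensor_id, uint8_t certainty)
{
    if (c->sensor_id != sensor_id) {
        int diff = (int)c->certainty - (int)certainty;
        if (diff < 0)
            diff = -diff;

        if (diff <= AGREE_DELTA) {
            /* measurements agree: both sensors gain confidence */
            if (weight[sensor_id] < MAX_WEIGHT) weight[sensor_id]++;
            if (weight[c->sensor_id] < MAX_WEIGHT) weight[c->sensor_id]++;
        } else {
            /* measurements disagree: both sensors lose confidence */
            if (weight[sensor_id] > 0) weight[sensor_id]--;
            if (weight[c->sensor_id] > 0) weight[c->sensor_id]--;
        }
    }

    /* keep the measurement of the currently more trusted sensor */
    if (weight[sensor_id] >= weight[c->sensor_id]) {
        c->certainty = certainty;
        c->sensor_id = sensor_id;
    }
}

With three or more sensors, a single faulty sensor loses weight faster than it gains it, so its measurements are gradually ignored until it recovers.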


Bibliography

[BHHJ83]

G. Beni, S. Hackwood, L. A. Hornak, and J. L. Jackel. Dynamic Sensing for Robots: An Analysis and Implementation. In International Journal of Robotics Research, volume 2, pages 51–61, 1983.

[BK91]

J. Borenstein and Y. Koren. The Vector Field Histogram - Fast Obstacle Avoidance for Mobile Robots. In IEEE Journal of Robotics and Automation, pages 278–288, June 1991.

[BRG96]

E. Bosse, J. Roy, and D. Grenier. Data Fusion Concepts Applied to a Suite of Dissimilar Sensors. In Canadian Conference on Electrical and Computer Engineering, 1996, pages 2:692–695, May 1996.

[CCBS86]

K. C. Chang, C. Y. Chong, and Y. Bar-Shalom. Joint Probabilistic Data Association in Distributed Sensor Networks. In IEEE Transactions of Automation Control, volume AC-31, pages 889–897, 1986.

[DW87]

H. F. Durrant-Whyte. Consistent Integration and Propagation of Disparate Sensor Observations. In International Journal of Robotics Research, volume 6, pages 3–24, 1987.

[DW88]

H. F. Durrant-Whyte. Sensor Models and Multisensor Integration. In International Journal of Robotics Research, volume 7, pages 97–113, 1988.

[DW98]

P. Dierauer and B. Woolever. Understanding Smart Devices. In Industrial Computing, pages 47–50, 1998.

[Elf87]

A. Elfes. Sonar-based Real-World Mapping and Navigation. In IEEE Journal of Robotics and Automation, volume RA-3, pages 249–265, 1987.

[Elm00]

W. Elmenreich. An Introduction to Data Fusion. Technical Report 11, Technische Universität Wien, Institut für Technische Informatik, 2000.


[EP01a]

W. Elmenreich and S. Pitzek. The Time-Triggered Sensor Fusion Model. To appear in Proceedings of the 5th IEEE International Conference on Intelligent Engineering Systems (INES), Helsinki - Stockholm, September 2001.

[EP01b]

W. Elmenreich and S. Pitzek. Using Sensor Fusion in a Time-Triggered Network. Technical Report 2, Technische Universität Wien, Institut für Technische Informatik, 2001.

[FN80]

C. A. Fowler and R. F. Nesbit. Tactical C³, Counter C³ and C⁵. In Journal of Electronic Defense, November 1980.

[Fra00]

R. Frank. Understanding Smart Sensors. Artech House, 2nd edition, April 2000.

[GEKW86]

F.C.A. Groen, M.A.C. Vreeburg, E.R. Komen, and T.P.H. Warmerdam. Multisensor Robot Assembly Station. In Robotics, volume 2, pages 205–214, 1986.

[GR83]

W. S. Gesing and D. B. Reid. An Integrated Multisensor Aircraft Track Recovery System for Remote Sensing. In IEEE Transactions of Automation Control, volume AC-28, pages 356–363, 1983.

[Gro98]

P. Grossmann. Multisensor Data Fusion. In The GEC Journal of Technology, pages 15:27–37, 1998.

[Har86]

S. Y. Harmon. Autonomous Vehicles. In Encyclopedia of Artificial Intelligence, pages 39–45. Wiley, New York, 1986.

[HCK86]

S. A. Hutchinson, R. L. Cromwell, and A. C. Kak. Planning Sensing Strategies in a Robot Work Cell with Multisensor Capabilities. In IEEE International Conference of Robotics and Automation, pages 1068–1075, San Francisco, CA, April 1986.

[HJ87]

T. L. Huntsberger and S. N. Jayaramamurthy. A Framework for Multi-Sensor Fusion in the Presence of Uncertainty. In Workshop of Spatial Reasoning and Multi-Sensor Fusion, pages 345– 350, St. Charles, IL, October 1987.

[Kal60]

R. E. Kalman. A New Approach to Linear Filtering and Prediction Problems. In Transactions of the ASME, Series D, Journal of Basic Engineering, pages 82:35–45, March 1960.

[KB89]

R. Kuc and B. Barshan. Navigating Vehicles Through an Unstructured Environment With Sonar. In Proceedings of the 1989 IEEE International Conference on Robotics and Automation, pages 1422–1426, Scottsdale, Arizona, May 1989.


[KdFG87]

K. Krishen, R. J. P. de Figueiredo, and O. Graham. Robotic Vision/Sensing for Space Applications. In IEEE Conference of Robotics and Automation, pages 138–150, Raleigh, NC, March 1987.

[KEM00]

H. Kopetz, W. Elmenreich, and C. Mack. A Comparison of LIN and TTP/A. Research report, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, April 2000.

[Kha85]

O. Khatib. Real-Time Obstacle Avoidance for Manipulators and Mobile Robots. In IEEE International Conference on Robotics and Automation, 1985, pages 500–505, St. Louis, Missouri, March 1985.

[KHE00]

H. Kopetz, M. Holzmann, and W. Elmenreich. A Universal Smart Transducer Interface: TTP/A. In Proceedings of the 3rd International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC), March 2000.

[Kop97]

H. Kopetz. Real-Time Systems. Kluwer Academic Publishers, 1997.

[Kop01]

H. Kopetz et al. Specification of the TTP/A protocol. Technical report, Technische Universität Wien, Institut für Technische Informatik, April 2001. Available at http://www.ttpforum.org.

[LK91]

R. C. Luo and M. G. Kay. Multisensor Integration and Fusion in Intelligent Systems. In S. S. Iyengar and A. Elfes, editors, Autonomous Mobile Robots: Perception, Mapping, and Navigation (Vol. 1), pages 201–231. IEEE Computer Society Press, Los Alamitos, CA, 1991.

[LL88]

R. C. Luo and M. Lin. Robot Multi-Sensor Fusion and Integration: Optimum Estimation of Fused Sensor Data. In IEEE International Conference of Robotics and Automation, pages 1076–1081, Philadelphia, PA, April 1988.

[LLS88]

R. C. Luo, M. Lin, and R. S. Scherp. Dynamic Multi-Sensor Data Fusion System for Intelligent Robots. In IEEE Journal of Robotics and Automation, volume RA-4, pages 386–396, 1988.

[May82]

P. S. Maybeck. Stochastic Models, Estimation, and Control, volume 1 and 2. Academic, New York, 1979 and 1982.

[Mor88]

H.P. Moravec. Sensor Fusion in Certainty Grids for Mobile Robots. In AI Magazine, pages 61–74, 1988.



[Obe01]

R. Obermaisser. Design and Implementation of a Distributed Smart Transducer Network. Master’s thesis, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, May 2001.

[PAG+00]

S. Poledna, H. Angelow, M. Glück, M. Pisecky, I. Smaili, G. Stöger, C. Tanzer, and G. Kroiss. TTP Two Level Design Approach: Tool Support for Composable Fault-Tolerant Real Time Systems. In SAE World Congress 2000, Detroit, MI, USA, March 2000.

[Pet01]

P. Peti. Monitoring and Configuration of a TTP/A Cluster in an Autonomous Mobile Robot. Master’s thesis, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, May 2001.

[PS00]

P. Peti and L. Schneider. Implementation of the TTP/A slave protocol on the Atmel ATmega103 MCU. Technical Report 28, Technische Universität Wien, Institut für Technische Informatik, Vienna, Austria, August 2000.

[Zad65]

L. A. Zadeh. Fuzzy Sets. In Information Control, volume 8, pages 338–353, 1965.

