3D VISION SYSTEM FOR VEHICLES

K.Fintzel, R.Bendahan, C.Vestri, S.Bougnoux
IMRA EUROPE S.A, BP 213, 220 rue Albert Caquot, 06904 Sophia Antipolis, France
[email protected]

S.Yamamoto, T.Kakinami
AISIN SEIKI CO., LTD, 2-1, Asahi-machi, Kariya, Aichi 448-8650, Japan
[email protected]

Abstract

Throughout this decade, several works on automated driving systems have been dedicated to collision avoidance, and it has already been proved that image technology can be useful in this area. Indeed, image technology provides additional information that helps the driver of a vehicle easily understand driving or parking situations. For example, a backing assistance system using a rear video camera was developed by AISIN two years ago. This system realizes safe and easy parking: the back guide monitor. At the same time, ITS-related systems were developed to improve convenience and safety for vehicle users. We propose here the next generation of circumstance recognition system. It provides the driver with a 3D representation of the scene surrounding his vehicle, built from images acquired by a rear CCD camera. The most important drawback of current 2D back-view systems is that the driver can never see the whole scene at one time. Our technology, in contrast, memorizes in a 3D model the parts of the scene already observed in shots taken while the vehicle is moving. Using this virtual 3D model, the driver can observe the scene from any point of view at any moment.

1 Introduction

The objective of our research is to provide a system that assists the driver during parking maneuvers. This system is the next generation of the back guide monitor product from AISIN [9] [8]. It is based on adding a camera to the back of the vehicle. The camera produces a continuous flow of images of the scene, and our task is to gather the incoming images into a 3D representation of the scene. This representation memorizes the locations of obstacles. As soon as a 3D representation is obtained, a virtual camera can be placed in the scene and a fixed view of the maneuver can be shown to the driver, helping him better understand the situation.

Many competitors are developing circumstance recognition systems using various technologies, such as ultrasonic sensors, scanning lasers or CCD cameras [10] [11]. Compared with conventional circumstance recognition systems, the technology we propose gathers information useful to the driver and presents it in a comprehensible manner, appealing to our common visual sense. Concretely, from successive images taken at different locations by a single camera in motion, our technology produces a virtual image, used primarily for warning but also for driving guidance. The final aim is to avoid collisions with obstacles by providing information about their locations and dimensions. With only a single back view, parts of the scene remain unseen by the driver. Our technology, however, remembers parts of the scene already observed and, combined with a visualization technology, lets the driver observe the acquired scene from any point of view. To achieve our final goal of a 3D representation of the scene usable by the driver for guidance, several domains of research, such as odometry, tracking, 3D reconstruction and texture extraction, were combined in an innovative way to give the system presented in section 2. Odometry is used to establish the external calibration of the embedded camera; this decisive task is detailed in section 3. The obtained external calibration is then used by two applicative 3D vision processes, briefly presented in section 4: first, a tracking process to detect and reconstruct potential obstacles in the scene, and second, a texture extraction process to render a model of the scene elements as realistically as possible. Finally, section 5 gives concluding remarks on our research.

2 Concept of the system

The following section presents a general idea of our circumstance recognition system.
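The tracking process mentioned above reconstructs obstacle points in 3D from their image tracks once the camera poses are known from odometry. The paper does not publish code; the following is a minimal sketch of standard two-view linear (DLT) triangulation, where all function names and the synthetic camera setup are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices (intrinsics + poses from
    odometry); x1, x2: the matched image point in each view, as (u, v).
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u*(P row3 . X) - (P row1 . X) = 0, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                    # null-space vector of A
    return X[:3] / X[3]           # back to Euclidean coordinates

# Synthetic check: identity intrinsics, second camera 1 m to the right.
# A point at (0, 0, 5) projects to (0, 0) and (-0.2, 0) respectively.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))
```

In practice the point tracks come from the tracking process and the projection matrices from the external calibration described in section 3.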
A car starts at an unknown position in an unknown environment, and the system incrementally builds a 3D model of the visited area. To accomplish this task effectively, some design choices, such as the car sensors and the environment representation, are necessary. To realize our vehicle monitoring system for parking maneuvers, we combined several technologies, from dead reckoning to 3D computer vision. In particular, our approach makes odometry and stereo from motion co-operate for 3D reconstruction. All the dependencies between the different technologies we needed from these fields are illustrated by Figure 1.

[Figure 1 diagram: blocks for Hardware Calibration, Odometry, Dead reckoning, Tracking, Reconstruction, Texture Identification, Visualization and 3D Vision, linked by their dependencies.]

The input signals of the system come from a CCD camera (5), ABS sensors (4) and the reverse gear (4). From these signals, the external parameters of the camera must be computed by odometry as fast as possible, to be used by the 3D vision processes, ideally in real time. Real-time operation of our system must therefore be considered, and at least two important notions must be taken into account. First, the system should run as fast as possible with non-blocking concurrent algorithms; second, information arrives as a stream, so the full information is not available from the beginning of the process. Our system therefore has a multi-threading design that lets the algorithms run in parallel and communicate with each other. The relationships between the different threads are illustrated by Figure 3.
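The stream-oriented, multi-threaded design described above can be sketched as a producer/consumer pipeline. This is a hypothetical illustration only: the thread names, queue size and sentinel are assumptions, not the authors' implementation.

```python
import queue
import threading

frames = queue.Queue(maxsize=8)   # bounded hand-off between the threads
STOP = object()                   # sentinel marking the end of the stream

def camera_thread(n_frames):
    """Producer: stands in for the CCD camera pushing one frame per cycle."""
    for i in range(n_frames):
        frames.put(("frame", i))
    frames.put(STOP)

def vision_thread(results_out):
    """Consumer: processes frames as they arrive (a stream, not a batch)."""
    while True:
        item = frames.get()
        if item is STOP:
            break
        _, idx = item
        results_out.append(idx)   # stand-in for odometry + 3D vision work

results = []
t1 = threading.Thread(target=camera_thread, args=(5,))
t2 = threading.Thread(target=vision_thread, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
```

The bounded queue keeps the consumer from falling arbitrarily behind the camera, which matters when frames arrive every few tens of milliseconds.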


Figure 1: Technology dependencies.

3 Dead reckoning

Odometry, a specific axis of research within dead reckoning [1], is a key actor of our system because it allows us to calibrate the external parameters of the embedded camera. Odometry determines the successive positions and orientations of the camera corresponding to the image acquisitions. Once this external calibration is established, points of interest (located, for example, on obstacles) can be tracked across successive acquisitions and reconstructed in 3D to locate obstacles in the scene. In parallel, the road in the scene can be represented by a simple generic 3D model and textured using visual information from the camera acquisitions. With the fundamental link between odometry and 3D vision established, the global concept of our system can be detailed. All the hardware equipment installed on our prototype is represented in Figure 2.

[Figure 2 diagram: hardware setup, with a VGA display (2), a CCD camera (5) feeding the acquisition stage, and the processing unit.]
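The paper does not give its odometry equations. A common planar dead-reckoning model, estimating the pose from rear-wheel displacements such as those derived from ABS pulse counts, can be sketched as follows; all symbols and the midpoint-integration choice are assumptions, not the authors' exact formulation:

```python
import math

def update_pose(x, y, theta, d_left, d_right, track_width):
    """Advance a planar pose (x, y, heading) by one dead-reckoning step.

    d_left, d_right: distances travelled by the two rear wheels;
    track_width: distance between those wheels. Names are illustrative.
    """
    d = 0.5 * (d_left + d_right)                 # axle-midpoint displacement
    d_theta = (d_right - d_left) / track_width   # heading change (rad)
    mid = theta + 0.5 * d_theta                  # integrate at midpoint heading
    x += d * math.cos(mid)
    y += d * math.sin(mid)
    theta += d_theta
    return x, y, theta

# Straight-line step: both wheels travel 1 m, heading stays at 0
pose = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 1.5)
```

Chaining such steps yields the successive camera positions and orientations needed for the external calibration, up to the fixed camera-to-vehicle mounting transform.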

[Figure 3 diagram: thread relationships. The camera delivers an image It every 30 ms; the odometry thread maintains the camera pose P, updating it from P0 through Pi to Pi+1 using the vehicle displacement DP; the main thread requests (It, P) pairs and receives the results.]
