Autonomous Vehicle Perception: Sensor Fusion based on Structured Learning Methods
Project Completed - Contact: ftm(at)ftm.mw.tum.de
Motivation
Perception of the environment is a crucial task in the pipeline to enable autonomous driving. Using its perception sensors (camera, lidar and radar), a vehicle is able to localize itself inside a static environment map. The vehicle needs to detect and classify the traffic participants in its surroundings in order to navigate safely. Real-time applicability is especially crucial in highly dynamic scenarios, for example on a race track or in an emergency maneuver on the road. As the different sensors possess individual strengths and weaknesses, fusing their signals can yield a higher detection quality. In this research project, the possible benefits of a low-level fusion of different sensors shall be evaluated.
Objective
In this research, a fusion strategy for the camera, lidar and radar sensors shall be evaluated. The object detection quality shall be improved, with a focus on adverse weather conditions in which the mono camera is less reliable, such as heavy rain or blinding of the camera. The algorithm is evaluated with the sensor system of a test vehicle at the Chair. Furthermore, it shall be evaluated in an autonomous racing environment to test its real-world applicability. In this scenario, further requirements regarding safety and reliability need to be fulfilled to avoid collisions.
Realization
Motivated by the advances in image-based object detection with deep learning techniques, a fusion based on these methods is researched. One challenge lies in the spatial and temporal synchronisation of the sensor data. Due to the different characteristics of dense camera data and sparse radar data, further preprocessing steps are evaluated to enable such a fusion. The fusion result shall be improved further by incorporating the physics of the detected objects into the detection and tracking process.
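As an illustration of the spatial synchronisation and sparse-to-dense preprocessing steps mentioned above, the following is a minimal sketch that projects lidar or radar detections into the camera image with a pinhole model and rasterizes them into an image-aligned depth channel. The function names, the calibration inputs `T_cam_sensor` and `K`, and the single-channel rasterization are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np

def project_points_to_image(points_xyz, T_cam_sensor, K):
    """Transform 3D points (lidar or radar frame) into the camera frame
    and project them onto the image plane with a pinhole camera model.

    points_xyz:   (N, 3) points in the sensor frame (assumed input).
    T_cam_sensor: (4, 4) homogeneous extrinsic transform sensor -> camera.
    K:            (3, 3) camera intrinsic matrix.
    """
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    pts_cam = (T_cam_sensor @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1            # keep points in front of the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]               # normalize by depth
    return uv, pts_cam[:, 2]                  # pixel coordinates and depths

def rasterize_depth_channel(uv, depth, height, width):
    """Scatter sparse projected points into a dense, image-aligned depth
    channel that could be stacked with the RGB image as network input."""
    channel = np.zeros((height, width), dtype=np.float32)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    channel[v[valid], u[valid]] = depth[valid]
    return channel
```

Such an image-aligned channel is one possible way to feed sparse range measurements into an image-based detector; the project's concrete fusion architecture may differ.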
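Incorporating the physics of detected objects into tracking could, for instance, take the form of a motion-model-based filter. Below is a minimal sketch of a linear Kalman filter with a constant-velocity model; the state layout, noise parameters and class name are assumptions for illustration, not the method used in the project.

```python
import numpy as np

class ConstantVelocityTracker:
    """Minimal linear Kalman filter with a constant-velocity motion model.

    State: [x, y, vx, vy]; measurements are fused (x, y) object positions.
    """

    def __init__(self, dt, process_var=1.0, meas_var=0.5):
        self.x = np.zeros(4)                       # state estimate
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)   # constant-velocity dynamics
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # position measurement model
        self.Q = np.eye(4) * process_var           # process noise (assumed)
        self.R = np.eye(2) * meas_var              # measurement noise (assumed)

    def predict(self):
        # Propagate the state with the motion model between sensor frames.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # Correct the prediction with a fused position measurement z = (x, y).
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```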