Problem Statement
Neural networks play an increasingly important role in detecting road users in the vicinity of an autonomous vehicle. The supervised training of these networks requires large labeled data sets, whose creation is time-consuming and expensive. Simulation data can be a practical alternative; however, neural networks trained on synthetic data do not achieve a performance comparable to that of networks trained on real data. This discrepancy is known as the reality gap.
Objective
Within the scope of this research project, the reality gap is to be examined and methods are to be found by which it can be minimized or closed. Specifically, the reality gap is to be quantified using the example of LiDAR point clouds by systematically comparing real and synthetic point clouds. The aim of this research project is to develop a deep-learning-based method that adapts synthetic point clouds to real point clouds. By the end of the project, the method should generalize to point clouds from different domains.
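One common way to compare two point clouds directly, as the quantification step suggests, is a nearest-neighbour metric such as the Chamfer distance. The following is a minimal sketch, not the project's actual metric; the clouds and the constant offset are invented for illustration:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3)."""
    # Pairwise squared distances between all points, shape (N, M)
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average distance of each point to its nearest neighbour in the other cloud
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy example: a "real" cloud and a slightly shifted "synthetic" one
rng = np.random.default_rng(0)
real = rng.normal(size=(100, 3))
synthetic = real + 0.05  # a constant offset mimics a small domain shift

identical_gap = chamfer_distance(real, real)     # exactly 0.0 for identical clouds
shifted_gap = chamfer_distance(real, synthetic)  # positive due to the offset
print(identical_gap, shifted_gap)
```

A lower Chamfer distance between the synthetic and real clouds of the same scenario would indicate a smaller geometric reality gap, complementing the detector-based evaluation described below.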
Realization
In the first step of this research project, the reality gap is quantified. For this purpose, real point clouds are recorded with real LiDAR sensors, and the same scenarios are reproduced in a simulation environment to generate synthetic point clouds. Neural networks for object detection are trained on both the real and the synthetic data sets and evaluated with appropriate metrics. The second step is the development of a method for adapting synthetic point clouds to real ones. To this end, different network architectures that enable encoding, adapting, and decoding of point clouds are examined and combined.
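The encode-adapt-decode pipeline mentioned above can be sketched as a single forward pass. This is a minimal NumPy illustration with randomly initialized weights, not the project's architecture: the PointNet-style max-pooling encoder, the linear adapter, and all dimensions are assumptions chosen for clarity; a real system would learn these components end to end.

```python
import numpy as np

rng = np.random.default_rng(42)

def encoder(points: np.ndarray, w: np.ndarray) -> np.ndarray:
    """PointNet-style encoder: per-point features, then order-invariant max pooling."""
    features = np.maximum(points @ w, 0.0)  # (N, 3) -> (N, F), ReLU
    return features.max(axis=0)             # (F,) global latent code

def adapter(latent: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Maps a latent code from the synthetic domain toward the real domain."""
    return latent @ w                       # (F,) -> (F,)

def decoder(latent: np.ndarray, w: np.ndarray, n_points: int) -> np.ndarray:
    """Decodes the (adapted) latent code back into a fixed-size point cloud."""
    return (latent @ w).reshape(n_points, 3)  # (F,) -> (n_points, 3)

# Hypothetical dimensions and untrained weights, purely for shape checking
n_points, feat = 64, 128
w_enc = rng.normal(scale=0.1, size=(3, feat))
w_ada = rng.normal(scale=0.1, size=(feat, feat))
w_dec = rng.normal(scale=0.1, size=(feat, n_points * 3))

synthetic_cloud = rng.normal(size=(n_points, 3))
latent = encoder(synthetic_cloud, w_enc)
adapted = adapter(latent, w_ada)
reconstructed = decoder(adapted, w_dec, n_points)
print(reconstructed.shape)  # (64, 3)
```

The max pooling in the encoder makes the latent code invariant to the ordering of the input points, which is one reason PointNet-like encoders are a common starting point for point-cloud processing.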