Problem Statement
The advancing development of automated vehicles will have a global impact on society. For the eventual introduction of automated vehicles on public roads, however, technology is only one aspect. Replacing the human driver poses major challenges that go far beyond sensors and actuators. In particular, algorithms will have to make morally difficult decisions for which industry and research have not yet provided answers.
Objective
The aim of this project is to integrate ethical behavior into the trajectory and behavior planning of automated vehicles. Since classical ethical theories such as deontology or utilitarianism have significant drawbacks for practical use in automated vehicles, this project focuses on the ethics of risk. In cooperation with the TUM Institute for Ethics in Artificial Intelligence, interdisciplinary research questions will be addressed. From a technical point of view, the quantification and subsequent distribution of risk will be investigated within the framework of trajectory planning. Ethical aspects, such as the question of a fair distribution of risk, must be taken into account.
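As an illustration of what such a quantification could look like, one common starting point (assumed here for illustration, not prescribed by the project) is to express the risk posed to each road user as the product of a collision probability and the estimated harm of that collision, and to aggregate these individual risks per candidate trajectory. The names in the following sketch (RoadUser, collision_probability, estimated_harm) are hypothetical:

```python
# Illustrative sketch only: quantifying the risk posed to a single road user as
# collision probability times estimated harm, and aggregating over all road users.
# RoadUser, collision_probability and estimated_harm are hypothetical names.
from dataclasses import dataclass

@dataclass
class RoadUser:
    collision_probability: float  # probability of a collision with this road user on a candidate trajectory
    estimated_harm: float         # expected severity of harm if that collision occurs

def risk(user: RoadUser) -> float:
    """Risk = collision probability x estimated harm."""
    return user.collision_probability * user.estimated_harm

def aggregate_risk(users: list[RoadUser]) -> tuple[float, float]:
    """Return the overall risk and the largest individual risk for one candidate trajectory."""
    risks = [risk(u) for u in users]
    return sum(risks), max(risks, default=0.0)
```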
Approach
Uncertainties play a major role in the quantification of risk in automated driving. They must first be analyzed and evaluated so that they can be translated into collision probabilities. These collision probabilities and the resulting risk are then taken into account during trajectory planning. In doing so, it is important to minimize the overall risk on the one hand and to ensure a fair distribution of the remaining risk on the other. The resulting trajectory planner will be evaluated in an empirical simulation with regard to the target criteria of safety and efficiency.
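The following sketch shows, under assumed interfaces, one way these steps could fit together: positional uncertainty is sampled to estimate a collision probability, and a candidate trajectory is then selected by a cost that trades off the total risk against the risk borne by the worst-off road user. All function names, the safety radius, and the fairness weighting are illustrative assumptions, not the project's actual design:

```python
# Minimal sketch under assumed interfaces: sample positional uncertainty to estimate
# collision probabilities, then choose a trajectory by a cost that combines the overall
# risk with a fairness penalty on the worst-off road user.
import numpy as np

def collision_probability(ego_xy, other_mean_xy, other_cov, safety_radius=1.5, samples=10_000, rng=None):
    """Monte Carlo estimate: probability that the other road user's uncertain position
    (Gaussian with given mean and covariance) lies within the safety radius of the ego position."""
    rng = rng or np.random.default_rng(0)
    positions = rng.multivariate_normal(other_mean_xy, other_cov, size=samples)
    distances = np.linalg.norm(positions - np.asarray(ego_xy), axis=1)
    return float(np.mean(distances < safety_radius))

def trajectory_cost(individual_risks, fairness_weight=0.5):
    """Combine the total risk with a maximin-style fairness term on the worst-off road user."""
    return sum(individual_risks) + fairness_weight * max(individual_risks, default=0.0)

def select_trajectory(candidates, risks_for, fairness_weight=0.5):
    """Pick the candidate trajectory with the lowest combined risk/fairness cost.
    risks_for maps a candidate trajectory to its per-road-user risks."""
    return min(candidates, key=lambda traj: trajectory_cost(risks_for(traj), fairness_weight))
```

Penalizing the worst-off individual risk is only one possible fairness criterion; alternatives such as bounding each individual risk or equalizing risk increments across road users would fit the same structure.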