Software
Our lab is dedicated to developing new software that solves real-world problems for autonomous vehicles, with the goal of making it available to everyone. We are proud to be part of the open-source community, where knowledge is freely shared and collaboration drives progress.
On this page, you will find a comprehensive list of the software our lab has developed and published, ranging from state-of-the-art tools for data analysis and simulation of autonomous vehicles to algorithms that enable safe and trustworthy autonomy for a wide range of highly integrated autonomous vehicle applications.
Visit our GitHub page
This repository contains a motion planning framework for autonomous vehicles that extends the Frenetix trajectory planner with occlusion-aware and risk-aware planning features. These help vehicles navigate safely in complex urban environments with limited visibility and potential hidden hazards. The repository also includes tools and configurations for simulation and evaluation.
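To illustrate the idea of risk-aware planning, the sketch below adds a penalty for driving fast through poorly visible regions to an ordinary comfort cost. The trajectory fields and the `occlusion_level` helper are hypothetical stand-ins, not the Frenetix-Occlusion API.

```python
import numpy as np

# Minimal sketch of a risk-aware trajectory cost, assuming hypothetical
# candidate-trajectory fields and an occlusion map; names and weights are
# illustrative only.

def risk_aware_cost(trajectory, occlusion_map, w_comfort=1.0, w_risk=10.0):
    """Combine a comfort cost with a penalty for driving fast near occlusions."""
    jerk = np.diff(trajectory["acceleration"])   # comfort: penalize jerky motion
    comfort = float(np.sum(jerk ** 2))
    # Penalize high speed at positions with low visibility, where hidden
    # agents could emerge (occlusion_level is a hypothetical helper that
    # returns a value per trajectory position).
    risk = float(np.sum(trajectory["velocity"] ** 2 *
                        occlusion_map.occlusion_level(trajectory["positions"])))
    return w_comfort * comfort + w_risk * risk
```

The planner would evaluate this cost for every sampled candidate and pick the cheapest feasible trajectory, so the risk term trades speed against visibility.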
This repository contains the code and data for a framework that uses Large Language Models (LLMs) to evaluate and generate safety-critical driving scenarios for autonomous vehicles.
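A minimal sketch of the generation side, assuming an OpenAI-style chat-completion client; the model name, prompt, and output format are illustrative and not taken from the repository.

```python
from openai import OpenAI  # any chat-completion client would work here

# Illustrative sketch: ask an LLM to generate a safety-critical scenario
# for a given road layout. Prompt and schema are assumptions.

client = OpenAI()

def generate_critical_scenario(road_description: str) -> str:
    """Request one safety-critical driving scenario as a structured description."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You generate safety-critical driving scenarios as "
                        "structured descriptions (actors, trajectories, trigger event)."},
            {"role": "user", "content": f"Road layout: {road_description}"},
        ],
    )
    return response.choices[0].message.content
```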
This repository provides the drivers needed to operate F1Tenth/RoboRacer cars equipped with the Livox MID-360 3D LiDAR sensor. Additionally, we include a robust 3D LiDAR SLAM package validated specifically for use with this LiDAR.
For the localization and position estimation of autonomous vehicles, we developed a novel 3D point-mass-based extended Kalman filter (EKF). By fusing measurements from multiple sensors, it estimates position, orientation, and velocity with high accuracy, ensuring reliable state estimation even in 3D environments. This repository contains the code for the 3D state estimation.
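For readers unfamiliar with the method, the sketch below shows the generic EKF predict/update cycle that such a filter is built on. The state layout and matrices are placeholder assumptions, not the exact filter from the repository.

```python
import numpy as np

# Minimal EKF predict/update sketch; the state layout and models are
# illustrative assumptions, not the repository's filter.

class SimpleEKF:
    def __init__(self, dim_x: int):
        self.x = np.zeros(dim_x)             # state, e.g. position, orientation, velocity
        self.P = np.eye(dim_x)               # state covariance

    def predict(self, F: np.ndarray, Q: np.ndarray):
        """Propagate state and covariance with the (linearized) motion model F."""
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z: np.ndarray, H: np.ndarray, R: np.ndarray):
        """Fuse one measurement z with measurement Jacobian H and noise R."""
        y = z - H @ self.x                   # innovation
        S = H @ self.P @ H.T + R             # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```

Fusing "different information" then simply means calling `update` once per sensor stream, each with its own H and R.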
We built a new dataset for the long-term prediction of emergency events whose preceding state variation is inconspicuous, a task we name the Extro-Spective Prediction (ESP) problem. The ESP-Dataset, enriched with semantic environment information, was collected over more than 2,000 kilometers and focuses on challenging, emergency-event-based scenarios.
A digital twin is a virtual representation of a physical object or system. In our repository, it serves as the digital counterpart of our autonomous research vehicle EDGAR, capturing the vehicle's behavior, performance, and characteristics in a virtual environment. With this information, you can simulate the EDGAR vehicle in various 2D and 3D simulation environments.
Autonomous vehicles require accurate and robust localization and mapping algorithms to navigate safely and reliably in urban environments. We present a novel sensor fusion-based pipeline for offline mapping and online localization that leverages four LiDAR sensors. The mapping and localization algorithms build on KISS-ICP, enabling real-time performance and high accuracy.
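The sketch below illustrates the two core steps: merging the four LiDAR clouds into a common vehicle frame, then registering the merged scan against a map with ICP. Open3D stands in for KISS-ICP here, and the extrinsic transforms are assumed inputs.

```python
import numpy as np
import open3d as o3d

# Illustrative sketch of multi-LiDAR fusion + ICP localization.
# Open3D's ICP is used as a stand-in for KISS-ICP; extrinsics are assumptions.

def merge_scans(clouds, extrinsics):
    """Transform each LiDAR cloud into the common vehicle frame and merge."""
    merged = o3d.geometry.PointCloud()
    for cloud, T in zip(clouds, extrinsics):   # T: 4x4 sensor-to-vehicle transform
        merged += cloud.transform(T)
    return merged.voxel_down_sample(voxel_size=0.2)

def localize(scan, local_map, initial_guess=np.eye(4)):
    """Estimate the vehicle pose by ICP registration of the scan against the map."""
    result = o3d.pipelines.registration.registration_icp(
        scan, local_map, 1.0, initial_guess)   # 1.0 m max correspondence distance
    return result.transformation               # 4x4 pose of the scan in the map
```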
Autonomous vehicles need a dedicated software module for safe and efficient trajectory calculation. Here we release a Frenet trajectory planning algorithm that generates trajectories in a sampling-based manner. The repository provides a Python-based and a C++-accelerated motion planner, both of which sample polynomial trajectories in a curvilinear coordinate system and evaluate them for feasibility and cost.
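The core of such a sampler is fitting quintic polynomials between the current state and a grid of end conditions. The sketch below shows this for the lateral offset d(t) in the Frenet frame; the sampling grids are illustrative parameters, not the planner's defaults.

```python
import numpy as np

# Minimal sketch of sampling-based Frenet planning: fit quintic polynomials
# for the lateral offset d(t) over a grid of end conditions. Horizons and
# offsets are illustrative.

def quintic(d0, d0_dot, d0_ddot, d1, T):
    """Quintic d(t) from state (d0, d0_dot, d0_ddot) to (d1, 0, 0) at time T."""
    A = np.array([[T**3,    T**4,     T**5],
                  [3*T**2,  4*T**3,   5*T**4],
                  [6*T,     12*T**2,  20*T**3]])
    b = np.array([d1 - (d0 + d0_dot*T + 0.5*d0_ddot*T**2),
                  -(d0_dot + d0_ddot*T),
                  -d0_ddot])
    c3, c4, c5 = np.linalg.solve(A, b)
    return np.array([d0, d0_dot, 0.5*d0_ddot, c3, c4, c5])  # polynomial coefficients

def sample_lateral_trajectories(d0, d0_dot, d0_ddot,
                                horizons=(2.0, 3.0, 4.0),
                                end_offsets=(-1.0, 0.0, 1.0)):
    """Enumerate candidate lateral polynomials over end offsets and horizons."""
    return [(quintic(d0, d0_dot, d0_ddot, d1, T), T)
            for T in horizons for d1 in end_offsets]
```

Each candidate is then checked against feasibility limits (curvature, acceleration) and scored with a cost function; the cheapest feasible trajectory is executed.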
ROS2 software systems usually consist of a large number of publisher and subscriber modules. Our software tool uses static code analysis to create a data flow graph (DFG) for a ROS2-based autonomous driving software stack. This allows data dependencies of a C++-based ROS2 system to be found and visualized more easily, without manual annotation.
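The sketch below conveys the idea on a toy scale: scan C++ sources for publisher/subscriber declarations and connect nodes that share a topic. The real tool performs proper static analysis; the regex pass here is only illustrative.

```python
import re
import networkx as nx

# Illustrative sketch: extract ROS2 pub/sub topic names from C++ source text
# with regexes and build a data flow graph. Not the tool's actual analysis.

PUB = re.compile(r'create_publisher<[^>]+>\(\s*"([^"]+)"')
SUB = re.compile(r'create_subscription<[^>]+>\(\s*"([^"]+)"')

def build_dfg(sources: dict) -> nx.DiGraph:
    """sources maps node name -> C++ source text; edges follow shared topics."""
    g = nx.DiGraph()
    publishers = {}                          # topic -> publishing nodes
    for node, code in sources.items():
        for topic in PUB.findall(code):
            publishers.setdefault(topic, []).append(node)
    for node, code in sources.items():
        for topic in SUB.findall(code):
            for pub in publishers.get(topic, []):
                g.add_edge(pub, node, topic=topic)  # data flows pub -> sub
    return g
```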
Since cameras output only 2D images and active sensors such as LiDAR or RADAR produce sparse depth measurements, information-rich dense depth maps must be estimated. In this software, we exploit the potential of vision transformers by fusing images from a monocular camera, semantic segmentations, and projections from radar sensors for robust monocular depth estimation.
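The sketch below shows one simple way to realize such a fusion: stack the three modalities as input channels to a transformer-style encoder and regress a dense depth map. The architecture is an illustrative assumption, not the published network.

```python
import torch
import torch.nn as nn

# Illustrative fusion sketch: RGB + segmentation + projected radar depth
# as input channels to a small transformer encoder. Not the published model.

class FusionDepthNet(nn.Module):
    def __init__(self, embed_dim=64, patch=16):
        super().__init__()
        # 3 RGB channels + 1 segmentation channel + 1 sparse radar-depth channel
        self.patch_embed = nn.Conv2d(5, embed_dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 1, kernel_size=patch, stride=patch),
            nn.ReLU(),                                   # depth is non-negative
        )

    def forward(self, rgb, seg, radar_depth):
        x = torch.cat([rgb, seg, radar_depth], dim=1)    # (B, 5, H, W)
        tokens = self.patch_embed(x)                     # (B, C, H/p, W/p)
        b, c, h, w = tokens.shape
        z = self.encoder(tokens.flatten(2).transpose(1, 2))  # (B, HW/p^2, C)
        z = z.transpose(1, 2).reshape(b, c, h, w)
        return self.head(z)                              # (B, 1, H, W) depth map
```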
In this work, we add an RGB-L (LiDAR) mode to the well-known ORB-SLAM3, integrating LiDAR depth measurements directly into the visual SLAM. This yields an accuracy very close to that of a stereo camera while greatly reducing computation times.
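The core preprocessing step is projecting the LiDAR points into the camera image to obtain a depth map that can be consumed like an RGB-D frame. A minimal sketch, assuming known intrinsics K and extrinsics T_cam_lidar:

```python
import numpy as np

# Minimal sketch of the RGB-L preprocessing: project LiDAR points into the
# camera image to build a sparse depth map. K: 3x3 intrinsics,
# T_cam_lidar: 4x4 LiDAR-to-camera transform (assumed known).

def project_lidar_depth(points_lidar, K, T_cam_lidar, height, width):
    """Return a depth image with LiDAR depths at the projected pixel locations."""
    pts = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]                 # keep points in front of camera
    uv = (K @ pts_cam.T).T
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.zeros((height, width), dtype=np.float32)
    depth[v[valid], u[valid]] = pts_cam[valid, 2]        # depth = z in camera frame
    return depth
```

The resulting map is sparse; a densification step (e.g. interpolation of the projected depths) is typically applied before feeding it to the SLAM front end.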