This repository contains the PyTorch re-implementation of our IROS'2025 paper Open-Set LiDAR Panoptic Segmentation Guided by Uncertainty-Aware Learning. The repository builds on MMDetection3D.
Standard LiDAR panoptic segmentation models operate under a closed-set assumption: they are trained on a fixed set of classes and assume every point encountered at test time belongs to one of them.
This poses a serious limitation for real-world robotics and autonomous driving, where models may confidently misclassify novel objects as a known class, creating significant safety risks.
Open-Set LiDAR Panoptic Segmentation (OSLPS) addresses this challenge with two key goals:
- Correctly segment all known classes, including both “stuff” (e.g., roads) and “things” (e.g., cars).
- Reliably detect and cluster unknown or novel objects that the model has never seen during training.
ULOPS (Uncertainty-Guided Open-Set LiDAR Panoptic Segmentation) introduces a new, principled approach to the OSLPS problem. ULOPS learns to distinguish known from unknown using supervised predictive uncertainty.
Our method is built on two core contributions:
We use Dirichlet-based evidential learning to generate robust uncertainty estimates for each LiDAR point.
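For intuition, here is a minimal sketch of how per-point uncertainty is typically derived from a Dirichlet evidential head. The function and variable names are illustrative assumptions, not the repository's API, and the softplus evidence mapping is just one common choice:

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Illustrative Dirichlet-based uncertainty for per-point logits.

    logits: (N, K) raw network outputs for N LiDAR points and K known classes.
    Returns: (N,) uncertainty in (0, 1]; higher values suggest unknown points.
    """
    # Map logits to non-negative per-class evidence.
    evidence = F.softplus(logits)
    # Dirichlet concentration parameters: alpha = evidence + 1.
    alpha = evidence + 1.0
    # Total evidence (Dirichlet strength) per point.
    strength = alpha.sum(dim=-1)
    num_classes = alpha.shape[-1]
    # Subjective-logic vacuity K / S: large when little evidence was collected,
    # which is the signal used to flag potentially unknown points.
    return num_classes / strength
```

A downstream open-set decision can then be as simple as thresholding this score, e.g. `unknown = dirichlet_uncertainty(point_logits) > tau` for some validation-tuned `tau` (a hypothetical usage, not a fixed recipe from the paper).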
We introduce three novel loss functions:
- Uniform Evidence Loss
- Adaptive Separation Loss
- Contrastive Uncertainty Loss
These losses leverage a small set of “unknown” examples during training, explicitly teaching the model to draw a sharp boundary between known and unknown regions.
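As a rough illustration of how auxiliary unknown points can shape the evidential output, the sketch below penalizes the KL divergence between the predicted Dirichlet and the uniform Dirichlet on points labeled unknown, a standard regularizer in evidential learning. This is an assumption-laden stand-in rather than the paper's exact Uniform Evidence, Adaptive Separation, or Contrastive Uncertainty losses, and the names `uniform_evidence_loss` and `unknown_mask` are hypothetical:

```python
import math
import torch

def uniform_evidence_loss(alpha: torch.Tensor, unknown_mask: torch.Tensor) -> torch.Tensor:
    """Illustrative regularizer: push Dirichlet parameters of unknown points
    toward the uniform Dirichlet Dir(1, ..., 1), i.e. maximal uncertainty.

    alpha:        (N, K) Dirichlet concentration parameters (alpha > 0).
    unknown_mask: (N,) boolean mask marking auxiliary "unknown" points.
    """
    alpha = alpha[unknown_mask]                  # keep only unknown points
    if alpha.numel() == 0:
        return alpha.new_zeros(())
    strength = alpha.sum(dim=-1)                 # S = sum_k alpha_k
    num_classes = alpha.shape[-1]
    # Closed-form KL( Dir(alpha) || Dir(1) ).
    kl = (torch.lgamma(strength)
          - torch.lgamma(alpha).sum(dim=-1)
          - math.lgamma(num_classes)
          + ((alpha - 1.0)
             * (torch.digamma(alpha) - torch.digamma(strength).unsqueeze(-1))
            ).sum(dim=-1))
    return kl.mean()
```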
If you find this code useful for your research, we kindly ask you to consider citing our paper:
@article{mohan2025open,
  title={Open-Set LiDAR Panoptic Segmentation Guided by Uncertainty-Aware Learning},
  shorttitle={ULOPS},
  author={Mohan, Rohit and Hindel, Julia and Drews, Florian and Gl{\"a}ser, Claudius and Cattaneo, Daniele and Valada, Abhinav},
  journal={arXiv preprint arXiv:2506.13265},
  year={2025}
}
# Work in Progress
This version provides the foundational codebase for ULOPS, including the main framework structure and placeholders for upcoming modules.
Expect incremental updates with full training, evaluation, and benchmarking support.
# Training
bash tools/dist_train.sh /path/to/your/config 8
# Inference
bash tools/dist_test.sh /path/to/your/config /path/to/your/checkpoint.pth 8 --eval bbox

Pre-trained models can be found in the model zoo.
We have used utility functions from other open-source projects. We especially thank the authors of:

