Abstract
This project addresses semantic understanding of challenging road scenes, such as construction zones, that create unexpected situations for driver-assistance and self-driving technologies. Semantic information can serve as useful metadata, for example how many lanes remain available when equipment blocks certain lanes, or the number of people in the scene.
- Project Goals:
- Identifying Road Conditions (RCs) via semantic understanding: parsing expected and unexpected/rare RCs, objects, and layouts
- Task Formulation:
- Frame-wise RC identification and localization by point-level semantic segmentation and object-level detection
- Improving the robustness of recognition for known points and objects
- Parsing unknown points and objects

Current Network:
2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds, ECCV, 2022

We are currently conducting two experiments: the first outputs a frame-wise RC prediction directly from the model, and the second localizes RCs through point-level semantic segmentation and object-level detection.
In the first experiment, we are determining the output representation for each road condition. So far, we have been outputting the RC index by adding a classifier after the last layer of the network.
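The idea of appending a frame-level RC classifier to a point-wise segmentation backbone can be sketched as follows. This is a minimal NumPy illustration, not the actual 2DPASS code: the feature shapes, the pooling choice, and the number of RC classes are all assumptions.

```python
import numpy as np

# Hypothetical sketch: a frame-wise road-condition (RC) head on top of
# per-point features from a segmentation backbone such as 2DPASS.
# All shapes, names, and class counts here are assumptions.

rng = np.random.default_rng(0)

N_POINTS, FEAT_DIM, N_RC = 1000, 64, 5   # points per frame, feature size, RC classes

point_feats = rng.normal(size=(N_POINTS, FEAT_DIM))  # stand-in backbone output

# Global max-pool over points -> one frame-level descriptor
frame_feat = point_feats.max(axis=0)                 # shape (FEAT_DIM,)

# Linear classifier appended as the extra "last layer"
W = rng.normal(size=(FEAT_DIM, N_RC)) * 0.01
b = np.zeros(N_RC)
logits = frame_feat @ W + b                          # shape (N_RC,)

rc_index = int(np.argmax(logits))                    # predicted RC index
print(rc_index)
```

The pooling step is the key design choice: because the backbone produces per-point features, some permutation-invariant aggregation (max or mean over points) is needed before a single RC index can be predicted per frame.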
In the second experiment, we are working to integrate a detection model into this network to obtain object-level information. To differentiate between known and unknown objects, the model must be trained to understand the scene at the object level rather than the point level.
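One simple way to lift point-level semantic predictions to object-level groups, before a full detection head is integrated, is to cluster nearby points that share a semantic class. The sketch below uses a greedy flood-fill over a Euclidean radius; the radius, array layouts, and the clustering scheme itself are illustrative assumptions, not part of 2DPASS.

```python
import numpy as np

# Hypothetical sketch: grouping point-level semantic labels into
# object-level instances by connecting nearby same-class points.
# A simple greedy flood-fill stands in for a real detection model.

def cluster_points(xyz, labels, radius=0.5):
    """Assign an instance id to each point: points of the same semantic
    class within `radius` of each other join the same instance."""
    n = len(xyz)
    instance = -np.ones(n, dtype=int)   # -1 means "not yet assigned"
    next_id = 0
    for seed in range(n):
        if instance[seed] != -1:
            continue
        stack = [seed]
        instance[seed] = next_id
        while stack:
            i = stack.pop()
            same = (labels == labels[i]) & (instance == -1)
            near = np.linalg.norm(xyz - xyz[i], axis=1) < radius
            for j in np.nonzero(same & near)[0]:
                instance[j] = next_id
                stack.append(j)
        next_id += 1
    return instance

# Toy frame: two well-separated clusters of class 0, one point of class 1
xyz = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 5, 0], [5.1, 5, 0], [0.2, 0, 0]])
labels = np.array([0, 0, 0, 0, 1])
inst = cluster_points(xyz, labels)
print(inst)  # → [0 0 1 1 2]
```

Object-level groups like these give the model something to reason about when separating known from unknown objects, e.g. by scoring each instance as a whole rather than scoring isolated points.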