Abstract

In this work, we improve the semantic segmentation of multi-layer top-view grid maps in the context of LiDAR-based perception for autonomous vehicles. To this end, we fuse information from multiple consecutive LiDAR measurements, aligned along the driven trajectory of the vehicle. The resulting enriched multi-layer grid maps serve as the input of a neural network. Our approach enables LiDAR-only 360° surround-view semantic scene segmentation while remaining suitable for real-time-critical systems. We evaluate the benefit of fusing sequential information using a dense semantic ground truth and discuss the effect on individual classes.
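The fusion step can be pictured as ego-motion compensation at grid level: earlier grid maps are warped into the latest ego frame using the relative vehicle poses and then stacked as additional input channels. The following is a minimal NumPy sketch of this idea under stated assumptions; the function names, the (x, y, yaw) SE(2) pose convention, and the ego-centred grid layout are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def warp_grid_map(src_layers, pose_src, pose_dst, resolution):
    """Resample a multi-layer grid map recorded at pose_src into the ego
    frame at pose_dst via nearest-neighbour lookup.

    src_layers: (C, H, W) array of grid-map layers, ego-centred at pose_src
    pose_*:     (x, y, yaw) in a common world frame (assumed convention)
    resolution: metres per cell
    """
    C, H, W = src_layers.shape

    # Metric coordinates of every cell centre in the dst ego frame,
    # assuming the grid is centred on the vehicle.
    ys, xs = np.mgrid[0:H, 0:W]
    px = (xs - W / 2 + 0.5) * resolution
    py = (ys - H / 2 + 0.5) * resolution

    # Chain dst ego frame -> world -> src ego frame (SE(2) transforms).
    xd, yd, yaw_d = pose_dst
    x0, y0, yaw_s = pose_src
    cd, sd = np.cos(yaw_d), np.sin(yaw_d)
    wx = cd * px - sd * py + xd
    wy = sd * px + cd * py + yd
    cs, ss = np.cos(yaw_s), np.sin(yaw_s)
    sx = cs * (wx - x0) + ss * (wy - y0)
    sy = -ss * (wx - x0) + cs * (wy - y0)

    # Back to source grid indices; cells outside the source map stay zero.
    ix = np.round(sx / resolution + W / 2 - 0.5).astype(int)
    iy = np.round(sy / resolution + H / 2 - 0.5).astype(int)
    valid = (ix >= 0) & (ix < W) & (iy >= 0) & (iy < H)

    out = np.zeros_like(src_layers)
    out[:, valid] = src_layers[:, iy[valid], ix[valid]]
    return out

def fuse_sequence(layer_seq, pose_seq, resolution=0.1):
    """Warp N consecutive grid maps into the latest ego frame and stack
    them along the channel axis to form an enriched network input."""
    latest = pose_seq[-1]
    warped = [warp_grid_map(l, p, latest, resolution)
              for l, p in zip(layer_seq, pose_seq)]
    return np.concatenate(warped, axis=0)  # shape (N*C, H, W)
```

With, say, three consecutive measurements of four layers each, fuse_sequence would produce a 12-channel tensor as network input; the channel counts here are hypothetical examples, not values from the paper.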