Welcome to CSML



Road segmentation is crucial for autonomous driving and advanced driver assistance systems to understand the driving environment. Recent years have seen significant advances in road segmentation thanks to the advent of deep learning, yet inaccurate road boundaries and lighting variations such as shadows and overexposed regions remain open problems. In this project we focus on visual road classification: given an image, every pixel must be labelled as either road or non-road. We tackle this task with FAPNET, a recently proposed convolutional neural network architecture, and improve its performance with a NAP augmentation module. The experimental results show that the proposed method achieves higher segmentation accuracy than state-of-the-art methods on the KITTI road detection benchmark.
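
To illustrate the per-pixel road/non-road decision described above, the sketch below converts a map of predicted road probabilities into a binary mask and scores it against a ground-truth mask. The function names, the 0.5 threshold, and the placeholder arrays are illustrative assumptions, not FAPNET's actual post-processing.

```python
import numpy as np

def to_road_mask(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn an H x W map of road probabilities into a binary mask: 1 = road, 0 = non-road."""
    return (probs >= threshold).astype(np.uint8)

def pixel_accuracy(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred_mask == gt_mask).mean())

# Placeholder inputs at the KITTI road image resolution (375 x 1242);
# in practice `probs` would come from the segmentation network.
probs = np.random.rand(375, 1242)
gt = np.zeros((375, 1242), dtype=np.uint8)
print(pixel_accuracy(to_road_mask(probs), gt))
```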
The KITTI Vision Benchmark Suite is a dataset developed specifically for benchmarking optical flow, odometry, object detection, and road/lane detection, and it is publicly available for download. The road dataset contains 600 frames at a resolution of 375 by 1242 pixels and is the primary benchmark for road and lane segmentation. The benchmark was developed in partnership with Jannik Fritsch and Tobias Kuehnl of Honda Research Institute Europe GmbH. The road and lane estimation benchmark consists of 289 training and 290 test images covering three distinct types of road scenes, listed below. The figure shows example data (UU, UM, and UMM) plotted with Matplotlib in RGB format; a loading-and-plotting sketch follows the list.
UU: Urban Unmarked
UM: Urban Marked
UMM: Urban Multiple Marked Lanes
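
The sketch below loads one frame of each scene type and plots it in RGB with Matplotlib, matching the figure described above. The directory layout and file-name prefixes (data_road/training/image_2 with uu_, um_, umm_ prefixes) follow the standard KITTI road release, but treat the exact paths as assumptions about the local setup.

```python
import glob
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Assumed location of the KITTI road training images on the local machine.
root = "data_road/training/image_2"
prefixes = ["uu", "um", "umm"]

fig, axes = plt.subplots(len(prefixes), 1, figsize=(10, 8))
for ax, prefix in zip(axes, prefixes):
    # Take the first frame of each category; mpimg.imread returns an RGB array.
    path = sorted(glob.glob(f"{root}/{prefix}_*.png"))[0]
    ax.imshow(mpimg.imread(path))
    ax.set_title(prefix.upper())
    ax.axis("off")
plt.tight_layout()
plt.show()
```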