Parallel complement network for real-time semantic segmentation of road scenes
Journal contribution posted on 06.01.2021, 17:16, authored by Q Lv, X Sun, C Chen, J Dong, Huiyu Zhou
Real-time semantic segmentation is in high demand for autonomous driving applications. Most semantic segmentation models rely on large feature maps and complex structures to enhance representation power and achieve high accuracy. However, these inefficient designs increase computational cost, which hinders deployment of such models in autonomous driving. In this paper, we propose a lightweight real-time segmentation model, named Parallel Complement Network (PCNet), to address this challenging task with fewer parameters. A Parallel Complement layer is introduced to generate complementary features with a large receptive field. It helps overcome the problem of similar feature encodings among different classes, and further produces discriminative representations. Building on the inverted residual structure, we design a Parallel Complement block to construct the proposed PCNet. Extensive experiments are carried out on challenging road scene datasets, i.e., CityScapes and CamVid, comparing against several state-of-the-art real-time segmentation models. The results show that our model delivers promising performance. Specifically, PCNet* achieves 72.9% Mean IoU on CityScapes using only 1.5M parameters and reaches 79.1 FPS on 1024×2048 resolution images on a GTX 2080Ti. Moreover, our proposed system achieves the best accuracy when trained from scratch.
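The abstract does not specify the internals of the Parallel Complement block, but it names its two ingredients: an inverted residual structure (pointwise expansion, depthwise convolution, pointwise projection, plus a skip connection) and parallel branches that produce complementary features with a large receptive field. The sketch below is a hypothetical NumPy illustration of that combination, pairing a standard depthwise 3×3 branch with a dilated one; all function names, the expansion ratio, and the dilation rate are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def pointwise(x, w):
    # 1x1 convolution as a matrix product: (C_in, H, W) -> (C_out, H, W)
    c, h, wd = x.shape
    return (w @ x.reshape(c, -1)).reshape(w.shape[0], h, wd)

def depthwise3x3(x, k, dilation=1):
    # Per-channel 3x3 convolution with zero padding; a larger dilation
    # enlarges the receptive field without adding parameters.
    c, h, w = x.shape
    p = dilation
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += (k[:, i, j][:, None, None]
                    * xp[:, i * p:i * p + h, j * p:j * p + w])
    return out

def pc_block(x, expand=6, dilation=2, seed=0):
    # Hypothetical Parallel Complement block: inverted residual with two
    # parallel depthwise branches (standard + dilated) summed as
    # complementary features. Weights are random placeholders.
    rng = np.random.default_rng(seed)
    c = x.shape[0]
    ce = c * expand
    w_expand = rng.standard_normal((ce, c)) * 0.1
    k_std = rng.standard_normal((ce, 3, 3)) * 0.1
    k_dil = rng.standard_normal((ce, 3, 3)) * 0.1
    w_project = rng.standard_normal((c, ce)) * 0.1

    h = np.maximum(pointwise(x, w_expand), 0)               # expand + ReLU
    h = depthwise3x3(h, k_std) + depthwise3x3(h, k_dil, dilation)  # parallel branches
    y = pointwise(np.maximum(h, 0), w_project)              # project back
    return x + y                                            # residual connection
```

Because the depthwise branches and the skip connection preserve spatial size and channel count, such blocks can be stacked to form an encoder, which is consistent with the lightweight design (1.5M parameters) claimed in the abstract.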