Zou, X., Li, K., Li, Y., Wei, W., & Chen, C. (2022). Multi-Task Y-Shaped Graph Neural Network for Point Cloud Learning in Autonomous Driving. IEEE Transactions on Intelligent Transportation Systems, 23(7), 9568–9579. https://doi.org/10.1109/tits.2022.3150155
Point clouds, an efficient 3D object representation, play an indispensable role in autonomous driving technologies
such as object avoidance, localization, and map building. The analysis of point clouds (e.g., 3D segmentation) is essential to
exploit the informative value of point clouds for such applications. The main challenge lies in efficiently and completely
extracting high-level point cloud feature representations. To this end, we present a novel multi-task Y-shaped graph neural
network to explore 3D point clouds, referred to as MTYGNN. By extending the conventional U-Net, MTYGNN contains two
main branches to perform classification and segmentation tasks on point clouds simultaneously. Meanwhile, the classification prediction is fused into the semantic features as scene context, making the per-point semantic predictions more accurate. Furthermore, we consider the homoscedastic uncertainty of each task to weight the multiple loss functions, ensuring that the tasks do not negatively interfere with each other. The proposed MTYGNN is evaluated on popular point cloud datasets in traffic scenarios. Experimental results demonstrate that our framework outperforms state-of-the-art baseline methods.
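As a minimal sketch of the uncertainty-based loss weighting the abstract refers to, the standard formulation (following Kendall et al.'s multi-task uncertainty weighting, which we assume here; the paper's exact form may differ) combines each task loss L_i as L_i / (2σ_i²) + log σ_i, where σ_i² is a learnable per-task variance. Below, `log_vars` holds log σ_i² for numerical stability; names and details are illustrative, not taken from the paper:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses via homoscedastic uncertainty weighting.

    Each task i contributes  0.5 * exp(-s_i) * L_i + 0.5 * s_i,
    where s_i = log(sigma_i^2); note 0.5 * s_i equals log(sigma_i),
    matching the L_i / (2*sigma_i^2) + log(sigma_i) form.
    In training, log_vars would be learnable parameters.
    """
    total = 0.0
    for loss, log_var in zip(task_losses, log_vars):
        precision = math.exp(-log_var)  # 1 / sigma^2
        total += 0.5 * precision * loss + 0.5 * log_var
    return total
```

With both log-variances at zero (σ² = 1), classification and segmentation losses of 2.0 and 4.0 combine to 3.0; as a task's learned variance grows, its loss is down-weighted while the log-variance term penalizes unbounded growth, which is what keeps one task from dominating the other.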
This work was supported in part by the National Key Research and Development Programs of China under Grant 2020YFB2104000, in part by the Cultivation of Shenzhen Excellent Technological and Innovative Talents (Ph.D. Basic Research Started) under Grant RCBS20200714114943014, in part by the Basic Research of Shenzhen Science and Technology Plan under Grant JCYJ20210324123802006, and
in part by the National Natural Science Foundation of China under Grant 61902120.