Li, Y., Chen, C., Yan, W., Cheng, Z., Tan, H. L., & Zhang, W. (2023). Cascade Graph Neural Networks for Few-Shot Learning on Point Clouds. IEEE Transactions on Intelligent Transportation Systems, 1–11. https://doi.org/10.1109/tits.2023.3237911
Abstract:
Point cloud data, a flexible 3D object representation,
is critical for various applications such as autonomous driving,
robotics and remote sensing. Despite the recent success of deep
neural networks (DNNs) on supervised point cloud analysis tasks,
they still rely on tedious manual annotation of point clouds and
cannot make predictions for new classes. Unlike few-shot learning
for 2D images, which benefits from large-scale datasets and
high-quality pre-trained models such as ResNet, 3D few-shot
learning struggles to obtain discriminative representations of
unseen classes with high intra-class similarity and inter-class
difference. To address this issue, this work proposes
a novel cascade graph neural network for few-shot learning on
point clouds, termed CGNN, in which two cascade GNNs are
adopted to extract intra-object topological information and
learn inter-object relations, respectively. To further increase
the discriminability of point cloud features, we first design a novel
discriminative edge label to model the intra-class similarity and
inter-class dissimilarity based on channel-wise feature variance
and class consistency. Second, we propose a novel few-shot circle
loss which divides the node pairs into two subsets, i.e.,
support-to-support pairs and support-to-query pairs, and optimizes
the pair-wise similarity on the two subsets independently. Extensive
experiments on benchmark CAD and real LiDAR point cloud
datasets have demonstrated that CGNN improves accuracy by
5.98% over the state-of-the-art GNN-based few-shot classification
methods.
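Notes:
The abstract's description of the few-shot circle loss (support-to-support and support-to-query pair subsets optimized independently) can be made concrete with a short sketch. The code below is illustrative only: it applies the standard circle loss formulation (Sun et al., CVPR 2020) separately to the two pair subsets, and the function names, margin m, and scale gamma are assumptions rather than details taken from the paper.

# Illustrative sketch, not the authors' implementation.
import torch
import torch.nn.functional as F

def circle_loss_from_pairs(sim, is_pos, m=0.25, gamma=64.0):
    """Circle-loss-style reduction over one subset of pair similarities.

    sim    : 1-D tensor of cosine similarities for the pair subset
    is_pos : 1-D bool tensor, True where a pair shares a class label
    """
    sp, sn = sim[is_pos], sim[~is_pos]
    # Self-paced weights and margins from the standard circle loss.
    ap = torch.clamp_min(1 + m - sp, 0.0)
    an = torch.clamp_min(sn + m, 0.0)
    logit_p = -gamma * ap * (sp - (1 - m))
    logit_n = gamma * an * (sn - m)
    return F.softplus(torch.logsumexp(logit_p, 0) + torch.logsumexp(logit_n, 0))

def few_shot_circle_loss(support_emb, support_y, query_emb, query_y, m=0.25, gamma=64.0):
    """Optimize support-support and support-query pair subsets independently."""
    s = F.normalize(support_emb, dim=1)
    q = F.normalize(query_emb, dim=1)

    # Support-to-support pairs (upper triangle, excluding self pairs).
    sim_ss = s @ s.t()
    pos_ss = support_y[:, None] == support_y[None, :]
    iu, ju = torch.triu_indices(len(s), len(s), offset=1)
    loss_ss = circle_loss_from_pairs(sim_ss[iu, ju], pos_ss[iu, ju], m, gamma)

    # Support-to-query pairs.
    sim_sq = (s @ q.t()).reshape(-1)
    pos_sq = (support_y[:, None] == query_y[None, :]).reshape(-1)
    loss_sq = circle_loss_from_pairs(sim_sq, pos_sq, m, gamma)

    # The two subsets contribute independent terms, per the abstract.
    return loss_ss + loss_sq

In an episodic few-shot setting, support_emb and query_emb would be the node embeddings produced by the second (inter-object) GNN; here they are simply placeholders for any per-sample feature tensors with matching label vectors.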
License type:
Publisher Copyright
Funding Info:
This work was supported in part by the Cultivation of Shenzhen Excellent Technological and Innovative Talents (Ph.D. Basic Research Started) under Grant RCBS20200714114943014 and in part by the Basic Research of Shenzhen Science and Technology Plan under Grant JCYJ20210324123802006.