Zhao, X., Wu, X., Chen, W., Chen, P. C. Y., Xu, Q., & Li, Z. (2023). ALIKED: A Lighter Keypoint and Descriptor Extraction Network via Deformable Transformation. IEEE Transactions on Instrumentation and Measurement, 72, 1–16. https://doi.org/10.1109/tim.2023.3271000
Abstract:
Image keypoints and descriptors play a crucial role in many visual measurement tasks. In recent years, deep neural networks have been widely used to improve the performance of keypoint and descriptor extraction. However, conventional convolution operations do not provide the geometric invariance required for the descriptor. To address this issue, we propose the Sparse Deformable Descriptor Head (SDDH), which learns the deformable positions of supporting features for each keypoint and constructs deformable descriptors. Furthermore, SDDH extracts descriptors at sparse keypoints instead of a dense descriptor map, which enables efficient extraction of descriptors with strong expressiveness. In addition, we relax the neural reprojection error (NRE) loss from dense to sparse to train the extracted sparse descriptors. Experimental results show that the proposed network is both efficient and powerful in various visual measurement tasks, including image matching, 3D reconstruction, and visual relocalization.
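The abstract only describes SDDH at a high level. As a rough illustration of the idea, the PyTorch sketch below shows one way sparse deformable descriptor extraction could look: each keypoint's feature predicts offsets to a small set of supporting positions, features are sampled at those deformed positions, and the samples are aggregated into one descriptor per keypoint. The class name, the offset MLP, the number of supporting features, and the 0.1 * tanh offset bound are all assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseDeformableDescriptorHead(nn.Module):
    """Illustrative sketch of SDDH-style sparse deformable descriptor extraction."""

    def __init__(self, feat_dim=128, desc_dim=128, num_support=8):
        super().__init__()
        self.num_support = num_support
        # Predict 2-D offsets of the supporting features around each keypoint.
        self.offset_mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, 2 * num_support),
        )
        # Aggregate the sampled supporting features into a single descriptor.
        self.aggregate = nn.Linear(num_support * feat_dim, desc_dim)

    def forward(self, feat_map, keypoints):
        # feat_map: (1, C, H, W) feature map; keypoints: (N, 2) in
        # normalized [-1, 1] image coordinates (x, y).
        n, c = keypoints.shape[0], feat_map.shape[1]
        # Feature at each keypoint, used to predict its supporting offsets.
        kp_feat = F.grid_sample(
            feat_map, keypoints.view(1, -1, 1, 2), align_corners=True
        ).reshape(c, n).t()                                   # (N, C)
        offsets = self.offset_mlp(kp_feat).view(n, self.num_support, 2)
        # Deformable supporting positions: keypoint plus learned offsets
        # (the 0.1 * tanh bound on the offsets is an assumption).
        support = keypoints.unsqueeze(1) + 0.1 * torch.tanh(offsets)  # (N, S, 2)
        sup_feat = F.grid_sample(
            feat_map, support.view(1, -1, 1, 2), align_corners=True
        ).reshape(c, n * self.num_support).t()                # (N*S, C)
        sup_feat = sup_feat.reshape(n, self.num_support * c)  # (N, S*C)
        # One L2-normalized descriptor per sparse keypoint.
        return F.normalize(self.aggregate(sup_feat), dim=-1)  # (N, desc_dim)
```

Because descriptors are only computed at the N detected keypoints rather than over the full H x W map, the cost of the head scales with the number of keypoints, which is the efficiency argument the abstract makes.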
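Relaxing the NRE loss from dense to sparse amounts to building the matching distribution over the other image's sparse keypoint descriptors instead of a dense reprojection probability map. A minimal sketch under that reading, assuming N ground-truth matched descriptor pairs and a hypothetical temperature parameter:

```python
import torch
import torch.nn.functional as F

def sparse_nre_loss(desc0, desc1, temperature=0.02):
    # desc0, desc1: (N, D) L2-normalized descriptors of N matched keypoint
    # pairs across two images. The matching distribution is taken over the
    # other image's sparse keypoints rather than a dense descriptor map;
    # the temperature value is an illustrative assumption.
    sim = desc0 @ desc1.t() / temperature              # (N, N) similarities
    target = torch.arange(desc0.shape[0], device=desc0.device)
    # Negative log-likelihood of each keypoint matching its true counterpart.
    return F.cross_entropy(sim, target)
```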
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR Robotics Horizontal Technology Coordinating Office (HTCO) under Grant Reference No. C221518005. This work was also supported by the National Natural Science Foundation of China under Grant No. 61620106012 and the Key Research and Development Program of Zhejiang Province under Grant No. 2020C01109.