PetalView: Fine-grained Location and Orientation Extraction of Street-view Images via Cross-view Local Search

Title:
PetalView: Fine-grained Location and Orientation Extraction of Street-view Images via Cross-view Local Search
Journal Title:
Proceedings of the 31st ACM International Conference on Multimedia
Publication Date:
27 October 2023
Citation:
Hu, W., Zhang, Y., Liang, Y., Han, X., Yin, Y., Kruppa, H., Ng, S.-K., & Zimmermann, R. (2023). PetalView: Fine-grained Location and Orientation Extraction of Street-view Images via Cross-view Local Search. Proceedings of the 31st ACM International Conference on Multimedia. https://doi.org/10.1145/3581783.3612007
Abstract:
Satellite-based street-view information extraction by cross-view matching refers to a task that extracts the location and orientation information of a given street-view image query by using one or multiple geo-referenced satellite images. Recent work has initiated a new research direction to find accurate information within a local area covered by one satellite image centered at a location prior (e.g., from GPS). It can be used as a standalone solution or as a complementary step following a large-scale search with multiple satellite candidates. However, these existing works require an accurate initial orientation (angle) prior (e.g., from an IMU) and/or do not efficiently search through all possible poses. To allow efficient search and to give accurate predictions regardless of the existence or the accuracy of the angle prior, we present PetalView extractors with multi-scale search. The PetalView extractors give semantically meaningful features that are equivalent across two drastically different views, and the multi-scale search strategy efficiently inspects the satellite image from coarse to fine granularity to provide sub-meter and sub-degree precision extraction. Moreover, when an angle prior is given, we propose a learnable prior angle mixer to utilize this information. Our method obtains the best performance on the VIGOR dataset and successfully improves the performance on the KITTI dataset test 1 set, raising the recall within 1 meter (r@1m) for location estimation to 68.88% and the recall within 1 degree (r@1d) to 21.10% when no angle prior is available; with an angle prior, it achieves stable estimations with r@1m and r@1d above 70% and 21%, up to a 40-degree noise level.
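The abstract reports recall-within-threshold metrics (r@1m for location, r@1d for orientation). As a minimal sketch of how such metrics are typically computed (the function names and sample values below are illustrative, not from the paper; angular error is wrapped so that 359° and 1° are 2° apart):

```python
import numpy as np

def recall_at(errors, threshold):
    """Fraction of samples whose error is within `threshold` (e.g. r@1m, r@1d)."""
    errors = np.asarray(errors, dtype=float)
    return float(np.mean(errors <= threshold))

def location_errors(pred_xy, gt_xy):
    """Euclidean distance in meters between predicted and true locations."""
    return np.linalg.norm(np.asarray(pred_xy) - np.asarray(gt_xy), axis=1)

def angle_errors(pred_deg, gt_deg):
    """Smallest angular difference in degrees, wrapped into [0, 180]."""
    diff = np.abs(np.asarray(pred_deg) - np.asarray(gt_deg)) % 360.0
    return np.minimum(diff, 360.0 - diff)

# Hypothetical predictions vs. ground truth for three queries.
pred_xy = np.array([[0.4, 0.3], [2.5, 0.0], [0.1, 0.1]])
gt_xy   = np.zeros((3, 2))
pred_th = np.array([0.5, 359.2, 45.0])
gt_th   = np.array([0.0, 0.0, 43.5])

r_at_1m = recall_at(location_errors(pred_xy, gt_xy), 1.0)  # r@1m
r_at_1d = recall_at(angle_errors(pred_th, gt_th), 1.0)     # r@1d
```

On this toy data, two of the three location errors fall within 1 m and two of the three angular errors fall within 1°, so both recalls are 2/3.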
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the Economic Development Board of Singapore - Industrial Postgraduate Program
Grant Reference no. : S18-1198-IPP-II

This research / project is supported by the GrabTaxi Holdings Pte. Ltd. and National University of Singapore - Grab-NUS AI Lab
Grant Reference no. : N.A.

Supported by Guangzhou Municipal Science and Technology Project 2023A03J0011
Description:
© Author | ACM 2023. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 31st ACM International Conference on Multimedia, http://dx.doi.org/10.1145/3581783.3612007
ISBN:
979-8-4007-0108-5/23/10
Files uploaded:

petalview.pdf (3.48 MB, PDF)