GAN-assisted Road Segmentation from Satellite Imagery

Journal Title:
ACM Transactions on Multimedia Computing, Communications, and Applications
Publication Date:
30 November 2023
Hu, W., Yin, Y., Tan, Y. K., Tran, A., Kruppa, H., & Zimmermann, R. (2023). GAN-assisted Road Segmentation from Satellite Imagery. ACM Transactions on Multimedia Computing, Communications, and Applications.
Geo-information extraction from satellite imagery has become crucial for carrying out large-scale ground surveys in a short amount of time. With the increasing number of commercial satellites launched into orbit in recent years, high-resolution RGB remote sensing imagery has attracted considerable attention. However, because of the high cost of image acquisition and the even more laborious annotation procedures, few high-resolution satellite datasets are available. Compared to close-range imagery datasets, existing satellite datasets contain far fewer images and cover only a limited range of scenarios (cities, background environments, etc.). They may not be sufficient for training robust learning models that generalize across environmental conditions, nor representative enough for training regional models optimized for local scenarios. Instead of collecting and annotating more data, synthetic images offer an alternative way to boost model performance. This study proposes a GAN-assisted training scheme for road segmentation from high-resolution RGB satellite images, comprising three critical components: a) synthetic training sample generation, b) synthetic training sample selection, and c) an assisted training strategy. Beyond the GeoPalette and cSinGAN image generators introduced in our prior work, this paper explains in detail how to generate new training pairs using OpenStreetMap (OSM) and introduces a new set of evaluation metrics for selecting synthetic training pairs from a pool of generated samples. We conduct extensive quantitative and qualitative experiments comparing different image generators and training strategies.
Our experiments on the downstream road segmentation task show that 1) our proposed metrics are better aligned with trained model performance than commonly used GAN evaluation metrics such as the Fréchet inception distance (FID); and 2) by using synthetic data with the best training strategy, model performance, measured as mean Intersection over Union (mean IoU), improves from 60.92% to 64.44% when 1,000 real training pairs are available for learning. This reaches a level of performance similar to a model standard-trained with 4,000 real images (64.59%), i.e., enabling a 4-fold reduction in real dataset size.
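The mean IoU figures above are the standard segmentation metric: per-class intersection over union of predicted and ground-truth masks, averaged across classes. As a minimal sketch (not the authors' evaluation code; the function name and NumPy-based implementation are assumptions), it can be computed like this:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes.

    For road segmentation this is typically two classes:
    background (0) and road (1). Classes absent from both
    masks are skipped so they do not distort the average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        if union > 0:
            ious.append(intersection / union)
    return float(np.mean(ious))

# Toy example: 2x2 predicted vs. ground-truth label masks.
pred = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
score = mean_iou(pred, target, num_classes=2)
```

Here class 0 has IoU 1/2 and class 1 has IoU 2/3, so the mean is 7/12.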
License type:
Publisher Copyright
Funding Info:
This work was funded by the Grab-NUS AI Lab, a joint collaboration between GrabTaxi Holdings Pte. Ltd. and National University of Singapore, and by the Industrial Postgraduate Program (Grant: S18-1198-IPP-II) funded by the Economic Development Board of Singapore.
© Author | ACM 2023. This is the author's version of the work. It is posted here for your personal use, not for redistribution. The definitive Version of Record was published in ACM Transactions on Multimedia Computing, Communications, and Applications.
Files uploaded:

File: acm-tomm-si-gan-assisted-road-seg.pdf (2.68 MB, PDF)