Investigative Ophthalmology & Visual Science September 2016, Vol.57, 5969
Abstract:
Purpose
To introduce an automated system that detects key features from multi-modal retinal images and mosaics the images.
Methods
We developed an automated system in project ARTEMIS to automatically extract key feature points and match them. We developed a low-dimensional step pattern analysis (LoSPA) feature that is rotation invariant and insensitive to scale. This LoSPA feature is used in our system to perform retinal image matching. Corresponding points between two images are determined by the Euclidean distance between their LoSPA descriptors, evaluated with a search algorithm in k-dimensional space, where k = 3.
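The abstract does not give implementation details of the matching step. Purely as an illustration, the Python sketch below matches two sets of keypoint descriptors by Euclidean nearest-neighbour search over a k-d tree; the function name, the 58-dimensional placeholder descriptors, and the Lowe-style ratio test used to reject ambiguous matches are assumptions made for this sketch and are not part of the published LoSPA method.

import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc_a, desc_b, ratio=0.8):
    # Build a k-d tree over image B's descriptors and query the two
    # nearest neighbours of each descriptor from image A (Euclidean distance).
    tree = cKDTree(desc_b)
    dists, idx = tree.query(desc_a, k=2)
    # Lowe-style ratio test (an assumption, not stated in the abstract):
    # keep a match only if its nearest neighbour is clearly closer than the second.
    keep = dists[:, 0] < ratio * dists[:, 1]
    return np.column_stack([np.nonzero(keep)[0], idx[keep, 0]])

# Illustrative usage with random placeholder descriptors (58-D, as in LoSPA-58).
rng = np.random.default_rng(0)
desc_fundus = rng.random((200, 58))   # descriptors from the colour fundus image
desc_angio = rng.random((180, 58))    # descriptors from the fluorescein angiogram
matches = match_descriptors(desc_fundus, desc_angio)
print(matches.shape)                  # (number of accepted matches, 2)

Each row of the returned array pairs an index in the first descriptor set with its accepted match in the second; these correspondences would then drive the subsequent image alignment and mosaicking.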
Results
The system was tested using color fundus images and corresponding fluorescein angiographic images from 120 subjects from the National Healthcare Group Eye Institute. We ran comparative experiments on 7 algorithms, comparing our proposed LoSPA-128 and LoSPA-58 with 5 other commonly used algorithms: SIFT, GDB-ICP, ED-DB-ICP, UR-SIFT-PIIFD, and Harris-PIIFD. In subjects with mild-to-moderate retinal diseases, the best success rates were from LoSPA-86 (93.33%), LoSPA-58 (90%) and Harris-PIIFD (90%). For subjects with severe retinal diseases, the best success rates were from LoSPA-86 (79.17%), LoSPA-58 (66.67%) and Harris-PIIFD (41.67%).
Conclusions
An automated system that detects key features from multi-modal retinal images was tested. Experimental results show promising performance and good potential for the system to be used to mosaic multi-modal retinal images.
License type:
Publisher Copyrights
Funding Info:
Description:
The full abstract can be found at the publisher's URL provided.