Traffic Light and Vehicle Signal Recognition with High Dynamic Range Imaging and Deep Learning

Title:
Traffic Light and Vehicle Signal Recognition with High Dynamic Range Imaging and Deep Learning
Other Titles:
Deep Learning: Algorithms and Applications
Publication Date:
24 October 2019
Citation:
Wang JG., Zhou LB. (2020) Traffic Light and Vehicle Signal Recognition with High Dynamic Range Imaging and Deep Learning. In: Pedrycz W., Chen SM. (eds) Deep Learning: Algorithms and Applications. Studies in Computational Intelligence, vol 865. Springer, Cham
Abstract:
Autonomous vehicles aim to eventually reduce the number of motor vehicle fatalities caused by human error. Deep learning plays an important role in making this possible because it can leverage the huge amounts of training data that come from autonomous car sensors. Automatic recognition of traffic lights and vehicle signals is a perception capability critical to autonomous vehicles, because a deadly accident could occur if a vehicle fails to respond to a traffic light or vehicle signal. A practical Traffic Light Recognition (TLR) or Vehicle Signal Recognition (VSR) system faces several challenges, including varying illumination conditions, false positives, and long computation time. In this chapter, we propose a novel approach to recognize traffic lights (TL) and vehicle signals (VS) in real time using high dynamic range imaging and deep learning. Unlike existing approaches, which use only bright images, we use both the bright and dark images provided by a high dynamic range camera. TL candidates can be detected robustly in the low-exposure (dark) frames because they stand out against a clean dark background. The corresponding regions in the consecutive high-exposure (bright) frames are then classified accurately using a convolutional neural network. This dual-channel mechanism achieves promising results because it exploits the undistorted color and shape information of the dark frames as well as the rich texture of the bright frames. Furthermore, TLR performance is boosted by incorporating a temporal trajectory tracking method. To speed up processing, a region of interest is generated to reduce the search area for TL candidates. Experimental results on a large dual-channel database show that our dual-channel approach outperforms the state of the art, which uses only bright images. Encouraged by the promising performance of the TLR, we extend the dual-channel approach to vehicle signal recognition.
The algorithm reported in this chapter has been integrated into our autonomous vehicle via the Data Distribution Service (DDS) and works robustly on real roads.
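The dual-channel pipeline described in the abstract can be outlined in code. The sketch below is a minimal illustration, not the authors' implementation: it assumes a 2D grayscale low-exposure frame and an RGB high-exposure frame, uses a simple intensity threshold within a region of interest to stand in for candidate detection, and replaces the convolutional neural network classifier with a crude dominant-color heuristic. All function names and parameters are hypothetical.

```python
import numpy as np

def detect_candidates(dark_frame, roi, thresh=200):
    """Find bright pixels in the low-exposure frame within the ROI.
    Traffic lights stand out against the clean dark background.
    Returns a list of (y0, y1, x0, x1) boxes; here a single bounding
    box for simplicity -- a real detector would label separate blobs."""
    y0, y1, x0, x1 = roi
    region = dark_frame[y0:y1, x0:x1]
    ys, xs = np.nonzero(region > thresh)
    if ys.size == 0:
        return []
    return [(y0 + ys.min(), y0 + ys.max() + 1,
             x0 + xs.min(), x0 + xs.max() + 1)]

def classify_patch(bright_patch):
    """Stand-in for the CNN classifier: pick the dominant RGB channel.
    The chapter's method classifies the bright-frame patch with a
    convolutional neural network instead."""
    means = bright_patch.reshape(-1, 3).mean(axis=0)
    return ["red", "green", "blue"][int(np.argmax(means))]

def recognize(dark_frame, bright_frame, roi):
    """Detect candidates in the dark frame, classify the corresponding
    patches in the consecutive bright frame."""
    results = []
    for (y0, y1, x0, x1) in detect_candidates(dark_frame, roi):
        patch = bright_frame[y0:y1, x0:x1]
        results.append(((y0, y1, x0, x1), classify_patch(patch)))
    return results
```

The temporal trajectory tracking step mentioned in the abstract would then smooth these per-frame results across consecutive frames, suppressing spurious single-frame detections.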
License type:
PublisherCopyrights
Funding Info:
A*STAR Grant for Autonomous Systems Project, Singapore
ISBN:
978-3-030-31759-1
978-3-030-31760-7