Pure Transformer with Integrated Experts for Scene Text Recognition

Journal Title:
Lecture Notes in Computer Science
Publication Date:
20 October 2022
Tan, Y. L., Kong, A. W.-K., & Kim, J.-J. (2022). Pure Transformer with Integrated Experts for Scene Text Recognition. Computer Vision – ECCV 2022, 481–497. https://doi.org/10.1007/978-3-031-19815-1_28
Scene text recognition (STR) is the task of reading text in cropped images of natural scenes. Conventional STR models employ a convolutional neural network (CNN) followed by a recurrent neural network in an encoder-decoder framework. More recently, the transformer architecture has been widely adopted in STR because of its strong capability in capturing the long-term dependencies that are prominent in scene text images. Many researchers have utilized the transformer as part of a hybrid CNN-transformer encoder, often followed by a transformer decoder. However, such methods only exploit long-term dependencies midway through the encoding process. Although the vision transformer (ViT) can capture such dependencies at an early stage, its utilization remains largely unexploited in STR. This work proposes a transformer-only model as a simple baseline that outperforms hybrid CNN-transformer models. Furthermore, two key areas for improvement are identified. First, the first decoded character has the lowest prediction accuracy. Second, images of different original aspect ratios react differently to the patch resolution, while ViT employs only one fixed patch resolution. To address these areas, Pure Transformer with Integrated Experts (PTIE) is proposed. PTIE is a transformer model that can process multiple patch resolutions and decode in both the original and reverse character orders. It is evaluated on 7 commonly used benchmarks and compared with over 20 state-of-the-art methods. The experimental results show that the proposed method outperforms them and obtains state-of-the-art results on most benchmarks.
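As background to the abstract's point about patch resolutions, the sketch below shows how a ViT-style front end splits an image into non-overlapping patches and why the same input can be tokenized at more than one resolution. This is a minimal illustration only: the patch sizes, input size, and `patchify` helper are assumptions for demonstration, not the configuration used in PTIE.

```python
import numpy as np

def patchify(image, patch_h, patch_w):
    """Split an H x W x C image into flattened non-overlapping patches.

    Returns an array of shape (num_patches, patch_h * patch_w * C):
    one token per patch, ready for a linear embedding layer as in ViT.
    """
    h, w, c = image.shape
    assert h % patch_h == 0 and w % patch_w == 0, "patch size must divide image size"
    # Group rows and columns into patch blocks, then flatten each block.
    patches = image.reshape(h // patch_h, patch_h, w // patch_w, patch_w, c)
    patches = patches.transpose(0, 2, 1, 3, 4)  # (rows, cols, ph, pw, c)
    return patches.reshape(-1, patch_h * patch_w * c)

# A hypothetical 32 x 128 scene-text crop viewed at two patch resolutions.
# Tall, narrow patches vs. short, wide patches suit different aspect ratios.
img = np.random.rand(32, 128, 3)
tokens_a = patchify(img, 8, 4)  # shape (128, 96): 4 x 32 patches of 8 x 4
tokens_b = patchify(img, 4, 8)  # shape (128, 96): 8 x 16 patches of 4 x 8
```

Because both tokenizations here yield the same number of tokens and the same token dimension, a single transformer could in principle consume either, which is the kind of flexibility the multi-resolution design in the abstract refers to.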
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the Nanyang Technological University (NTU) - NTU Internal Funding - Accelerating Creativity and Excellence
Grant Reference no. : NTU–ACE2020-03
This version of the article has been accepted for publication, after peer review and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: https://doi.org/10.1007/978-3-031-19815-1_28
Files uploaded:
136880476.pdf (635.37 KB, PDF)