Adaptive-saturated RNN: Remember more with less instability

Title:
Adaptive-saturated RNN: Remember more with less instability
Journal Title:
International Conference on Learning Representations
DOI:
Publication Date:
31 May 2023
Citation:
Nguyen-Duy, Khoi Minh, Quang Pham, and Binh T. Nguyen. "Adaptive-saturated RNN: Remember more with less instability." ICLR Tiny Papers (2023).
Abstract:
Orthogonal parameterization is a compelling solution to the vanishing gradient problem (VGP) in recurrent neural networks (RNNs). With orthogonal parameters and non-saturating activation functions, gradients in such models are constrained to unit norms. On the other hand, although traditional vanilla RNNs are known to have higher memory capacity, they suffer from the VGP and perform poorly in many applications. This work proposes Adaptive-Saturated RNNs (asRNN), a variant that dynamically adjusts its saturation level between the two mentioned approaches. Consequently, asRNN enjoys both the capacity of a vanilla RNN and the training stability of orthogonal RNNs. Our experiments show encouraging results of asRNN on challenging sequence learning benchmarks compared to several strong competitors. The research code is accessible at https://github.com/ndminhkhoi46/asRNN/.
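The abstract only sketches the idea at a high level; the snippet below is a rough illustrative sketch (not the authors' exact formulation, for which see the linked repository) of what an adaptively saturated recurrent cell could look like in PyTorch. The scaled-tanh activation phi(z) = tanh(a * z) / a, the per-unit learnable scale `log_scale`, and the orthogonal recurrent parameterization are all assumptions made for illustration: a small scale keeps the cell near-linear (orthogonal-RNN-like gradient flow), while a large scale recovers a saturated tanh (vanilla-RNN-like capacity).

```python
import torch
import torch.nn as nn


class AdaptiveSaturatedCell(nn.Module):
    """Illustrative sketch only: a recurrent cell with a learnable saturation scale."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.in_proj = nn.Linear(input_size, hidden_size)
        self.rec_proj = nn.Linear(hidden_size, hidden_size, bias=False)
        # Keep the recurrent matrix orthogonal, as in orthogonal RNNs.
        nn.utils.parametrizations.orthogonal(self.rec_proj, "weight")
        # Hypothetical per-unit saturation scale; exp() keeps it positive.
        self.log_scale = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x_t: torch.Tensor, h_prev: torch.Tensor) -> torch.Tensor:
        a = torch.exp(self.log_scale)                 # positive saturation scale
        z = self.in_proj(x_t) + self.rec_proj(h_prev)
        return torch.tanh(a * z) / a                  # adaptive-saturation activation


# Minimal usage example on random data.
cell = AdaptiveSaturatedCell(input_size=8, hidden_size=16)
h = torch.zeros(1, 16)
for x_t in torch.randn(5, 1, 8):                      # sequence length 5, batch 1
    h = cell(x_t, h)
print(h.shape)                                         # torch.Size([1, 16])
```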
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the National Research Foundation - AI Singapore Programme
Grant Reference No.: AISG2-RP-2021-027
Description:
ISBN:
N/A
Files uploaded: