Cross-model Back-translated Distillation for Unsupervised Machine Translation

Title:
Cross-model Back-translated Distillation for Unsupervised Machine Translation
Journal Title:
International Conference on Machine Learning
DOI:
Publication Date:
24 July 2021
Citation:
Xuan-Phi Nguyen, Shafiq Joty, Thanh-Tung Nguyen, Kui Wu, Ai Ti Aw. Proceedings of the 38th International Conference on Machine Learning, PMLR 139:8073-8083, 2021.
Abstract:
Recent unsupervised machine translation (UMT) systems usually employ three main principles: initialization, language modeling, and iterative back-translation, though they may apply them differently. Crucially, iterative back-translation and denoising auto-encoding for language modeling provide data diversity to train the UMT systems. However, the gains from these diversification processes appear to have plateaued. We introduce a novel component to the standard UMT framework called Cross-model Back-translated Distillation (CBD), which aims to induce a level of data diversification that the existing principles lack. CBD is applicable to all previous UMT approaches. In our experiments, CBD achieves the state of the art in the WMT’14 English-French, WMT’16 English-German, and English-Romanian bilingual unsupervised translation tasks, with BLEU scores of 38.2, 30.1, and 36.3, respectively. It also yields 1.5–3.3 BLEU improvements in the IWSLT English-French and English-German tasks. Through extensive experimental analyses, we show that CBD is effective because it embraces data diversity while other similar variants do not.
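Illustration:
The following minimal Python sketch illustrates the cross-model back-translation step summarized in the abstract: two independently trained UMT agents cross-translate monolingual text to produce extra synthetic parallel pairs for distillation. All names here (the agent objects, their translate method, the language tags) are hypothetical placeholders, not the authors' released implementation.

def cbd_synthetic_pairs(mono_sentences_l1, agent_a, agent_b):
    """Generate synthetic (source, target) pairs via cross-model back-translation.

    agent_a and agent_b are assumed to be two independently trained UMT agents
    exposing a translate(text, src, tgt) method. Using different agents for the
    forward and backward directions is what is meant to add data diversity
    beyond standard single-model iterative back-translation.
    """
    pairs = []
    for x in mono_sentences_l1:
        y_hat = agent_a.translate(x, src="l1", tgt="l2")      # forward translation by agent A
        x_hat = agent_b.translate(y_hat, src="l2", tgt="l1")  # back-translation by agent B
        pairs.append((x_hat, y_hat))  # synthetic pair used to train the main UMT model
    return pairs
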
License type:
Publisher Copyright
Funding Info:
The research work is supported by the A*STAR Computing and Information Science (ACIS) scholarship, provided by A*STAR.
Description:
ISSN:
2640-3498
Files uploaded:


File               Size       Format
paper.pdf          270.76 KB  PDF
supplementary.pdf  304.04 KB  PDF