Evaluating Code-Switching Translation with Large Language Models

Publication Date:
22 May 2024
Muhammad Huzaifah, Weihua Zheng, Nattapol Chanpaisit, and Kui Wu. 2024. Evaluating Code-Switching Translation with Large Language Models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 6381–6394, Torino, Italy. ELRA and ICCL.
Recent advances in large language models (LLMs) have shown that they can match or surpass finetuned models on many natural language processing tasks. Increasing attention is now being paid to whether this performance carries over to other languages. In this paper, we present a thorough evaluation of LLMs in the less well-researched code-switching translation setting, where inputs mix different languages. We benchmark the performance of six state-of-the-art LLMs across seven datasets, with GPT-4 and GPT-3.5 displaying strong ability relative to supervised translation models and commercial engines. GPT-4 was also found to be particularly robust across different code-switching conditions. Several methods to further improve code-switching translation are proposed, including in-context learning and pivot translation. Through our code-switching experiments, we argue that LLMs show promising ability for cross-lingual understanding.
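
The abstract mentions in-context learning and pivot translation as ways to improve code-switching translation. Below is a minimal Python sketch of how these two prompting strategies could be set up, assuming the OpenAI chat completions client; the model name, the helper functions icl_translate and pivot_translate, and the Malay-English example sentences are illustrative assumptions, not details taken from the paper.

    # Sketch of two prompting strategies for code-switching translation:
    # (1) few-shot in-context learning and (2) pivot translation.
    # Assumes the OpenAI Python client (openai>=1.0); model name and
    # example pairs are illustrative, not from the paper.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def chat(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # any chat-capable model works here
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content.strip()

    def icl_translate(src: str, examples: list[tuple[str, str]], tgt_lang: str) -> str:
        # Few-shot in-context learning: prepend code-switched/translation pairs.
        shots = "\n".join(f"Input: {cs}\nTranslation: {tr}" for cs, tr in examples)
        prompt = (
            f"Translate the code-switched input into {tgt_lang}.\n"
            f"{shots}\nInput: {src}\nTranslation:"
        )
        return chat(prompt)

    def pivot_translate(src: str, pivot_lang: str, tgt_lang: str) -> str:
        # Pivot translation: first normalise the code-switched input into a
        # single pivot language, then translate the pivot into the target.
        pivot = chat(f"Translate this code-switched text into {pivot_lang}: {src}")
        return chat(f"Translate into {tgt_lang}: {pivot}")

    if __name__ == "__main__":
        shots = [("Aku nak pergi meeting sekarang.", "I want to go to the meeting now.")]
        print(icl_translate("Dia sangat busy hari ini.", shots, "English"))
        print(pivot_translate("Dia sangat busy hari ini.", "English", "French"))

The two functions reflect the general shape of these strategies: in-context learning conditions the model with translated code-switched examples, while pivot translation decomposes the task into two simpler monolingual translation steps.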
License type:
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Funding Info:
This research/project is supported by the National Research Foundation / Ministry of Communications and Information - Online Trust and Safety (OTS) Research Programme
Grant Reference no.: MCI-OTS-001