Multimodal neural pronunciation modeling for spoken languages with logographic origin
Title:
Multimodal neural pronunciation modeling for spoken languages with logographic origin
Other Titles:
EMNLP 2018
Publication Date:
02 November 2018
Abstract:
Graphemes of most languages encode pronunciation, though some are more explicit than others. Languages like Spanish have a straightforward mapping between their graphemes and phonemes, while for languages like English this mapping is more convoluted. Spoken languages such as Cantonese present even greater challenges for pronunciation modeling: (1) they have no standard written form, and (2) their closest graphemic origins are logographic Han characters, only a subset of which implicitly encodes pronunciation. In this work, we propose a multimodal approach to predicting the pronunciation of Cantonese logographic characters, using neural networks with a geometric representation of logographs and the pronunciation of cognates in historically related languages. The proposed framework improves performance by 18.1% and 25.0% over unimodal and multimodal baselines, respectively.
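To make the multimodal idea concrete, the sketch below shows one way two modalities (a geometric encoding of a logograph and a bag-of-phonemes encoding of a cognate's pronunciation) can be fused for classification. This is an illustration under assumptions, not the paper's architecture: the encoders, dimensions, phoneme inventory, and late-fusion linear layer here are all hypothetical stand-ins.

```python
import numpy as np

# Hypothetical sketch: encoder designs, dimensions, and the fusion scheme
# are assumptions for illustration, not the architecture from the paper.

def encode_logograph_geometry(strokes, dim=8):
    """Toy 'geometric' encoder: fold stroke coordinates into a fixed vector."""
    vec = np.zeros(dim)
    for i, (x, y) in enumerate(strokes):
        vec[i % dim] += 0.1 * x + 0.01 * y
    return vec

def encode_cognate_pronunciation(phonemes, inventory, dim=8):
    """Bag-of-phonemes embedding of a cognate from a related language."""
    vec = np.zeros(dim)
    for p in phonemes:
        vec[inventory.index(p) % dim] += 1.0
    return vec

def fuse_and_predict(geo_vec, cog_vec, weights):
    """Late fusion: concatenate both modalities, then linear layer + softmax."""
    x = np.concatenate([geo_vec, cog_vec])
    logits = weights @ x
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Usage with made-up inputs: a 2-stroke logograph, a hypothetical phoneme
# inventory, and 6 hypothetical pronunciation classes.
inventory = ["m", "a", "n", "s", "i"]
geo = encode_logograph_geometry([(1.0, 2.0), (3.0, 1.5)])
cog = encode_cognate_pronunciation(["m", "a", "n"], inventory)
weights = np.random.default_rng(0).normal(size=(6, 16))
probs = fuse_and_predict(geo, cog, weights)
```

In a trained system the linear layer would be learned jointly with the encoders; the point of the sketch is only the fusion step, where evidence from both modalities contributes to one distribution over pronunciations.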
License type:
PublisherCopyrights
