Joint Learning of Word and Label Embeddings for Sequence Labelling in Spoken Language Understanding
Title:
Joint Learning of Word and Label Embeddings for Sequence Labelling in Spoken Language Understanding
Other Titles:
ASRU 2019 - IEEE Automatic Speech Recognition and Understanding Workshop
Publication Date:
01 December 2019
Abstract:
We propose an architecture to jointly learn word and label embeddings for slot filling in spoken language understanding. The proposed approach encodes labels using a combination of word embeddings and straightforward word-label associations from the training data. Compared to state-of-the-art methods, our approach does not require label embeddings as part of the input and therefore lends itself to a wide range of model architectures. In addition, our architecture computes contextual distances between words and labels, avoiding the need for contextual windows and thus reducing the memory footprint. We validate the approach on established spoken dialogue datasets and show that it achieves state-of-the-art performance with far fewer trainable parameters.
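The core idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact model: the vocabulary, counts, embedding size, and function names below are illustrative assumptions, and the contextual encoder is omitted. It shows how label embeddings can be built in the same space as word embeddings by weighting them with word-label associations from annotated training data, so tokens can then be tagged by word-label similarity.

```python
import numpy as np

# Illustrative sketch only; all data below are toy assumptions.
rng = np.random.default_rng(0)
vocab = ["book", "flight", "to", "boston", "monday"]
labels = ["O", "B-city", "B-day"]
dim = 50
word_emb = {w: rng.normal(size=dim) for w in vocab}  # stand-in word embeddings

# Toy word-label co-occurrence counts (rows: words, cols: labels), as would
# be collected from slot-annotated training utterances.
counts = np.array([
    [3.0, 0.0, 0.0],  # book
    [3.0, 0.0, 0.0],  # flight
    [3.0, 0.0, 0.0],  # to
    [0.0, 2.0, 0.0],  # boston
    [0.0, 0.0, 2.0],  # monday
])

# Each label embedding is a count-weighted average of its associated word
# embeddings, so labels live in the same space as words and need not be
# fed to the model as inputs.
W = np.stack([word_emb[w] for w in vocab])            # (|V|, dim)
weights = counts / counts.sum(axis=0, keepdims=True)  # normalise per label
label_emb = weights.T @ W                             # (|L|, dim)

def predict(tokens):
    """Tag each token with the label whose embedding scores highest (dot product)."""
    return [labels[int(np.argmax(label_emb @ word_emb[t]))] for t in tokens]

print(predict(["book", "flight", "to", "boston"]))
```

Because labels are derived from word embeddings rather than learned as a separate input table, the only trainable parameters in a full model would sit in the word encoder, which is consistent with the abstract's claim of a reduced parameter count.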
Funding Info:
This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project #A18A2b0046) and the Spanish AMIC project (TIN2017-85854-C4-4-R).

Files uploaded:

jiewenwu-asru2019.pdf (253.81 KB, PDF)