J. H. M. Wong, H. Zhang, and N. F. Chen, “Multiple output samples per input in a single-output Gaussian process”, in Proc. Symposium for Celebrating 40 Years of Bayesian Learning in Speech and Language Processing and Beyond, Dec. 2023.
Abstract:
The standard Gaussian Process (GP) considers only a single output sample per input in the training set. Datasets for subjective tasks, such as spoken language assessment, may be annotated with output labels from multiple human raters per input. This paper proposes to generalise the GP to allow for these multiple output samples in the training set, and thus to make use of the available output uncertainty information. This differs from a multi-output GP, as all output samples here are from the same task. The output density function is formulated as the joint likelihood of observing all output samples, and latent variables are not repeated, to reduce the computation cost. Test set predictions are inferred as in a standard GP, the difference being the optimised hyper-parameters. The approach is evaluated on speechocean762, showing that it allows the GP to compute a test set output distribution that is more similar to the collection of reference outputs from the multiple human raters.
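The core modelling idea in the abstract — several observed outputs per input that all share a single latent function value — can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the squared-exponential kernel, the hyper-parameter values, and the function names are illustrative assumptions. With M raters per input and one latent value per input, the stacked outputs are jointly Gaussian with covariance K ⊗ 1_M 1_Mᵀ + σ²I, which the sketch evaluates directly:

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix over inputs X of shape (N, D).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def log_marginal_likelihood_multi(X, Y, noise=0.1, lengthscale=1.0, variance=1.0):
    """Joint log-likelihood of M output samples per input, where all samples at
    one input share a single latent function value (latents are not repeated).
    X: (N, D) inputs; Y: (N, M) outputs, one column per rater (illustrative)."""
    N, M = Y.shape
    K = rbf_kernel(X, lengthscale, variance)  # (N, N) latent covariance
    # Input-major stacking: outputs at the same input share a latent value,
    # so their covariance block is K_nn plus per-sample observation noise.
    C = np.kron(K, np.ones((M, M))) + noise ** 2 * np.eye(N * M)
    y = Y.reshape(-1)
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * N * M * np.log(2 * np.pi))
```

With M = 1 the covariance reduces to K + σ²I, recovering the standard GP marginal likelihood, which is consistent with the abstract's point that test-time inference is unchanged apart from the optimised hyper-parameters.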
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the AI3 - EVA
Grant Reference no. : SC20/21-816400