While neural approaches have achieved significant improvements in machine comprehension tasks, the models often operate as black boxes, limiting interpretability; this demands special attention in domains such as healthcare and education. Quantifying uncertainty helps pave the way toward more interpretable neural networks. In classification and regression tasks, Bayesian neural networks have proven effective at estimating model uncertainty. However, their inference time grows linearly with the number of stochastic forward passes required for sampling, so speed becomes a bottleneck in tasks with high system complexity such as question answering or dialogue generation. In this work, we propose a hybrid neural architecture that quantifies model uncertainty via Bayesian weight approximation while improving inference speed by a relative 80% at test time, and we apply it to a clinical dialogue comprehension task. The proposed approach also enables active learning: an updated model can be trained more efficiently on new incoming data by selecting samples that are not well represented in the current training set.
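To make the sampling bottleneck concrete, the sketch below illustrates Monte Carlo dropout, a common Bayesian weight-approximation scheme: uncertainty is estimated from the variance across repeated stochastic forward passes, so inference cost scales linearly with the number of samples. This is a minimal illustrative example, not the paper's architecture; the toy model, dropout rate, and sample count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model" with dropout applied to the weights at test time,
# approximating sampling from a posterior over weights (MC dropout).
W = rng.normal(size=(4, 3))

def stochastic_forward(x, p_drop=0.5):
    # Bernoulli mask on the weights; rescale to keep the expectation unchanged.
    mask = rng.random(W.shape) >= p_drop
    return x @ (W * mask) / (1.0 - p_drop)

def mc_dropout_predict(x, n_samples=50):
    """Run n stochastic forward passes and return the predictive mean
    and variance; the variance serves as a model-uncertainty estimate.
    Runtime grows linearly in n_samples -- the bottleneck in the text."""
    preds = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)

x = rng.normal(size=(1, 4))
mean, var = mc_dropout_predict(x)
```

In an active-learning loop of the kind described above, the per-sample variance would be used as an acquisition score: unlabeled examples with the highest predictive variance are the ones least well represented by the current model and are selected for annotation first.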
Research efforts were supported by funding and infrastructure for Digital Health from the Institute for Infocomm Research (I2R), Science and Engineering Research Council, A*STAR, Singapore (Grant Nos. SSF A1818g0044 and IAF H19/01/a0/023). The clinical data acquisition was funded by the Economic Development Board (EDB), Singapore Living Lab Fund, and the Philips Electronics Hospital to Home Pilot Project (EDB grant reference number: S14-1035-RF-LLF H and W).