Original title: Fluent Response Generation for Conversational Question Answering

Abstract: Question answering (QA) is an important aspect of open-domain conversational agents, garnering specific research focus in the conversational QA (ConvQA) subtask. One notable limitation of recent ConvQA efforts is that responses are answer spans extracted from the target corpus, thus ignoring the natural language generation (NLG) aspect of high-quality conversational agents. In this work, we propose a method for situating QA responses within a SEQ2SEQ NLG approach to generate fluent, grammatical answer responses while maintaining correctness. From a technical perspective, we use data augmentation to generate training data for an end-to-end system. Specifically, we develop Syntactic Transformations (STs) to produce question-specific candidate answer responses and rank them using a BERT-based classifier (Devlin et al., 2019). Human evaluation on SQuAD 2.0 data (Rajpurkar et al., 2018) demonstrates that the proposed model outperforms baseline CoQA and QuAC models in generating conversational responses. We further show our model's scalability by conducting tests on the CoQA dataset.
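The pipeline the abstract describes — turn an extracted answer span into several question-specific full-sentence candidates, then pick the best one with a learned ranker — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real system uses learned Syntactic Transformations and a fine-tuned BERT classifier, while `generate_candidates` and `score_fluency` below are hypothetical stand-ins.

```python
def generate_candidates(question, answer_span):
    """Produce full-sentence answer candidates from an extracted span
    via simple templates (a stand-in for the paper's Syntactic
    Transformations, which rewrite the question's syntax)."""
    q = question.rstrip("?")
    return [
        answer_span,  # bare extractive span (the baseline behavior)
        f"{q.replace('What is', 'It is', 1)}: {answer_span}.",
        f"The answer is {answer_span}.",
    ]

def score_fluency(candidate):
    """Toy scorer standing in for the BERT-based ranker: prefers
    longer, sentence-like candidates over bare spans."""
    score = len(candidate.split())
    if candidate.endswith("."):
        score += 2
    return score

def best_response(question, answer_span):
    """Rank the candidates and return the highest-scoring response."""
    return max(generate_candidates(question, answer_span),
               key=score_fluency)

print(best_response("What is the capital of France?", "Paris"))
# → It is the capital of France: Paris.
```

In the paper this ranking step is what supplies training pairs (question, fluent response) for the end-to-end SEQ2SEQ generator via data augmentation.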
Original authors: Ashutosh Baheti, Alan Ritter, Kevin Small
Original link: https://arxiv.org/abs/2005.10464
Fluent Response Generation for Conversational Question Answering.pdf (from the Tencent Cloud community, posted by 刘子蔚)