Authors: Nguyen Thach Ha Anh (FPT University), Nguyen Quoc Trung (FPT University), Nguyen Van Tien (Pythera AI), Pham Trung Hieu (Pythera AI), Vinh Truong Hoang (Ho Chi Minh City Open University), Le-Viet Tuan (Ho Chi Minh City Open University)
Scaling up large language models to store vast amounts of knowledge within their parameters incurs higher costs and longer training times. In this study, we therefore examine the effects of enhancing language models with external knowledge, and compare the performance of extractive and abstractive generation for building a question-answering system. To ensure consistency in our evaluations, we modified the MS MARCO and MASH-QA datasets by filtering out irrelevant support documents and improving contextual relevance by mapping each input question to the closest supported documents in our database setup. Finally, we assess performance in the health domain; our experiments show promising results not only on information retrieval but also on retrieval-augmentation tasks, pointing to further performance improvements in future work.
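The mapping of an input question to its closest supported documents can be sketched as a nearest-neighbour search over document similarity scores. The snippet below is a minimal illustration only, assuming a simple bag-of-words cosine similarity; the paper's actual retriever and database setup are not specified in the abstract and may use dense embeddings instead.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words term-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def closest_document(question: str, documents: list[str]) -> int:
    # Return the index of the document most similar to the question.
    # Tokenization here is naive whitespace splitting (an assumption).
    q = Counter(question.lower().split())
    scores = [cosine(q, Counter(d.lower().split())) for d in documents]
    return max(range(len(documents)), key=scores.__getitem__)

docs = [
    "flu symptoms include fever and cough",
    "treating a sprained ankle at home",
]
print(closest_document("what are the symptoms of flu", docs))  # → 0
```

A production system would replace the bag-of-words vectors with learned sentence embeddings and an approximate-nearest-neighbour index, but the filtering logic is the same: keep only the support documents that score closest to the question.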
Keywords: Extractive generation, Abstractive generation, Knowledge-based Question-Answering
Published in: IEEE Transactions on Antennas and Propagation (Volume: 71, Issue: 4, April 2023)
Page(s): 2908 - 2921
DOI: 10.1109/TAP.2023.3240032
Publisher: UNITED SOCIETIES OF SCIENCE