
Today's NLP (572 posts)

[2023-06-09] Today's NLP: Enhancing In-Context Learning with Answer Feedback for Multi-Span Question Answering. While recent large language models (LLMs) like ChatGPT exhibit impressive general performance, they still lag far behind fully-supervised models on specific tasks such as multi-span question answering. Previous research found that in-context learning is an effective approach to exp..
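The idea of feeding answer feedback back into demonstrations can be sketched as a prompt-construction step. This is a hypothetical illustration of the general technique, not the paper's exact prompt format; the function name, demo layout, and feedback wording are all assumptions.

```python
# Sketch: in-context learning with answer feedback for multi-span QA.
# Each demonstration shows a question, a model's earlier candidate answer,
# feedback on that candidate, and the corrected multi-span answer, before
# the new question is posed. All names and formatting are illustrative.

def build_feedback_prompt(demos, question):
    """demos: list of (question, candidate_answer, feedback, gold_spans)."""
    parts = []
    for q, cand, fb, gold in demos:
        parts.append(
            f"Question: {q}\n"
            f"Candidate answer: {cand}\n"
            f"Feedback: {fb}\n"
            f"Final answer spans: {'; '.join(gold)}\n"
        )
    # The new question ends the prompt; the model completes the spans.
    parts.append(f"Question: {question}\nFinal answer spans:")
    return "\n".join(parts)

demo = [(
    "Which rivers flow through Lyon?",
    "the Rhone",
    "Partially correct: one of two spans is missing.",
    ["the Rhone", "the Saone"],
)]
prompt = build_feedback_prompt(demo, "Which cities host UN headquarters offices?")
print(prompt)
```

The feedback line is what distinguishes this from plain few-shot prompting: the demonstration explicitly shows the model how an incomplete span set was diagnosed and repaired.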
[2023-06-08] Today's NLP: CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models. Text classifiers built on pre-trained language models (PLMs) have achieved remarkable progress in various tasks including sentiment analysis, natural language inference, and question answering. However, the occurrence of uncertain predictions by these classifiers poses a challenge to their reli..
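One minimal, generic way to quantify the kind of uncertain prediction this entry refers to is the predictive entropy of a classifier's softmax distribution; a peaked distribution scores low, a flat one scores high. This sketch illustrates the problem setting only and is not the CUE framework itself.

```python
# Sketch: predictive entropy as a simple uncertainty score for a classifier.
# Peaked logits -> low entropy (confident); near-uniform logits -> high entropy.
import math

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predictive_entropy(logits):
    probs = softmax(logits)
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = predictive_entropy([8.0, 0.5, 0.2])  # one class dominates
uncertain = predictive_entropy([1.0, 0.9, 1.1])  # classes nearly tied
print(confident < uncertain)  # True
```

Thresholding such a score is a common baseline for flagging predictions that need human review; interpretation frameworks go further and attribute the uncertainty to its sources.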
[2023-06-07] Today's NLP: Benchmarking Large Language Models on CMExam -- A Comprehensive Chinese Medical Exam Dataset. Recent advancements in large language models (LLMs) have transformed the field of question answering (QA). However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam, sourced from the Chinese National M..
[2023-06-06] Today's NLP: Improving Generalization in Task-oriented Dialogues with Workflows and Action Plans. Task-oriented dialogue is difficult in part because it involves understanding user intent, collecting information from the user, executing API calls, and generating helpful and fluent responses. However, for complex tasks one must also do all of these things correctly over multiple steps, and in a specific order...
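The multi-step, fixed-order requirement described above can be sketched as a workflow-guided policy: the agent consults an ordered action plan and always executes the earliest step that is not yet complete. The step names and state representation below are illustrative assumptions, not the paper's formulation.

```python
# Sketch: enforcing a fixed-order workflow in a task-oriented dialogue agent.
# The agent may not call the booking API before collecting the date and city,
# and may not confirm before the API call succeeds.

WORKFLOW = ["ask_date", "ask_city", "call_booking_api", "confirm"]

def next_action(completed):
    """Return the first workflow step not yet in `completed`, preserving order."""
    for step in WORKFLOW:
        if step not in completed:
            return step
    return "done"

print(next_action(set()))                     # ask_date
print(next_action({"ask_date"}))              # ask_city
print(next_action({"ask_date", "ask_city"}))  # call_booking_api
```

Conditioning generation on such an explicit plan, rather than leaving step ordering implicit in the model, is one way to make the ordering constraint checkable at inference time.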