
[2023-11-10] Today's Natural Language Processing
SEMQA: Semi-Extractive Multi-Source Question Answering
Abstract: Recently proposed long-form question answering (QA) systems, supported by large language models (LLMs), have shown promising capabilities. Yet, attributing and verifying their generated abstractive answers can be difficult, and automatically evaluating their accuracy remains an ongoing challenge. In this work, we introduce a new QA ..
2023. 11. 10.

[2023-11-09] Today's Natural Language Processing
Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment
Abstract: Alignment with human preference is a desired property of large language models (LLMs). Currently, the main alignment approach is based on reinforcement learning from human feedback (RLHF). Despite the effectiveness of RLHF, it is intricate to implement and train, thus recent studies explore how to develop alternativ..
2023. 11. 9.

[2023-11-08] Today's Natural Language Processing
Detecting Agreement in Multi-party Conversational AI
Today, conversational systems are expected to handle conversations in multi-party settings, especially within Socially Assistive Robots (SARs). However, practical usability remains difficult as there are additional challenges to overcome, such as speaker recognition, addressee recognition, and complex turn-taking. In this paper, we present our..
2023. 11. 8.

[2023-11-07] Today's Natural Language Processing
The language of prompting: What linguistic properties make a prompt successful?
The latest generation of LLMs can be prompted to achieve impressive zero-shot or few-shot performance in many NLP tasks. However, since performance is highly sensitive to the choice of prompts, considerable effort has been devoted to crowd-sourcing prompts or designing methods for prompt optimisation. Yet, we still l..
2023. 11. 7.