
All posts (599)

[2023-06-01] Today's NLP · Mitigating Label Biases for In-context Learning · Various design settings for in-context learning (ICL), such as the choice and order of the in-context examples, can bias the model's predictions. While many studies discuss these design choices, there have been few systematic investigations into categorizing them and mitigating their impact. In this work, we define a typology for three types of lab.. 2023. 6. 1.
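The ordering bias is easy to see at the prompt level. The sketch below is illustrative only (not the paper's method; the sentiment demonstrations and labels are made up): it builds one ICL prompt per permutation of the same demonstrations, which is exactly the kind of design choice the abstract says can shift a model's predictions.

```python
# Minimal sketch: how the order of in-context examples changes the prompt an
# LLM sees. The demonstrations below are hypothetical.
from itertools import permutations

demos = [
    ("The plot was dull and predictable.", "negative"),
    ("A warm, beautifully acted film.", "positive"),
    ("I left the theater halfway through.", "negative"),
]
query = "An absolute delight from start to finish."

def build_prompt(examples, query):
    """Format labeled demonstrations followed by the unlabeled query."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

# Every permutation of the same demonstrations yields a different prompt;
# in practice, different orders can lead a model to different predictions.
for order in permutations(demos):
    print(build_prompt(order, query))
    print("-" * 40)
```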
[2023-05-31] Today's NLP · A Critical Evaluation of Evaluations for Long-form Question Answering · Long-form question answering (LFQA) enables answering a wide range of questions, but its flexibility poses enormous challenges for evaluation. We perform the first targeted study of the evaluation of long-form answers, covering both human and automatic evaluation practices. We hire domain experts in seven areas to provide pref.. 2023. 5. 31.
[2023-05-30] Today's NLP · NeuroX Library for Neuron Analysis of Deep NLP Models · Neuron analysis provides insights into how knowledge is structured in representations and discovers the role of neurons in the network. In addition to developing an understanding of our models, neuron analysis enables various applications such as debiasing, domain adaptation and architectural search. We present NeuroX, a comprehensive open-so.. 2023. 5. 30.
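As a rough idea of what neuron analysis means in practice, here is a minimal sketch that ranks individual neurons by how strongly their mean activation differs between two classes of tokens. It uses synthetic activations and plain NumPy, and is not the NeuroX API; it only illustrates the general idea of attributing a linguistic property to specific neurons.

```python
# Minimal sketch: score each neuron by how well its activation separates two
# classes of tokens. Activations here are synthetic, not from a real model.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, n_neurons = 1000, 768

# Pretend these came from one layer of an NLP model; neuron 42 is given an
# artificial class-dependent signal so the ranking has something to find.
labels = rng.integers(0, 2, size=n_tokens)        # e.g. "token is past tense"
activations = rng.normal(size=(n_tokens, n_neurons))
activations[:, 42] += 2.0 * labels

def neuron_scores(acts, labels):
    """Absolute difference of class-conditional mean activation per neuron."""
    mean_pos = acts[labels == 1].mean(axis=0)
    mean_neg = acts[labels == 0].mean(axis=0)
    return np.abs(mean_pos - mean_neg)

scores = neuron_scores(activations, labels)
top = np.argsort(scores)[::-1][:5]
print("Neurons most associated with the property:", top)
```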
[2023-05-29] Today's NLP · Give Me More Details: Improving Fact-Checking with Latent Retrieval · Evidence plays a crucial role in automated fact-checking. When verifying real-world claims, existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine. Such methods ignore the challenges of collecting evidence and may not provide sufficient information t.. 2023. 5. 29.