

[2023-11-06] Today's NLP: "Can Language Models Be Tricked by Language Illusions? Easier with Syntax, Harder with Semantics". Language models (LMs) have been argued to overlap substantially with humans in grammaticality judgment tasks. But when humans systematically make errors in language processing, should we expect LMs to behave like cognitive models of language and mimic human behavior? We answer this question by i.. (2023-11-06)
[2023-11-05] Today's NLP: "Revisiting the Knowledge Injection Frameworks". In recent years, large language models (LLMs), such as GPTs, have had a great impact worldwide. However, how to adapt these LLMs to better suit vertical, domain-specific tasks by utilizing external knowledge remains not fully solved. Indeed, a few works have emerged along this line, most of which rely on an alignment heuristic that .. (2023-11-05)
[2023-11-04] Today's NLP: "People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection". Abstract: NLP models are used in a variety of critical social computing tasks, such as detecting sexist, racist, or otherwise hateful content. Therefore, it is imperative that these models are robust to spurious features. Past work has attempted to tackle such spurious .. (2023-11-04)
[2023-11-03] Today's NLP: "Are Large Language Models Reliable Judges? A Study on the Factuality Evaluation Capabilities of LLMs". Abstract: In recent years, Large Language Models (LLMs) have gained immense attention due to their notable emergent capabilities, surpassing those seen in earlier language models. A particularly intriguing application of LLMs is their role as evaluators for texts produced by various generative mod.. (2023-11-03)