
All posts (599)

[2023-09-30] Today's NLP Human Feedback is not Gold Standard Abstract: Human feedback has become the de facto standard for evaluating the performance of Large Language Models, and is increasingly being used as a training objective. However, it is not clear which properties of a generated output this single 'preference' score captures. We hypothesise that preference scores are subjective and open to undesirable biases. We.. 2023. 9. 30.
[2023-09-29] Today's NLP How We Define Harm Impacts Data Annotations: Explaining How Annotators Distinguish Hateful, Offensive, and Toxic Comments Abstract: Computational social science research has made advances in machine learning and natural language processing that support content moderators in detecting harmful content. These advances often rely on training datasets annotated by crowdworkers for harmful content. In .. 2023. 9. 29.
[2023-09-28] Today's NLP Program Repair with Minimal Edits Using CodeT5 Abstract: Programmers often struggle to identify and fix bugs in their programs. In recent years, many language models (LMs) have been proposed to fix erroneous programs and support error recovery. However, these LMs tend to generate solutions that differ from the original input programs, which can make the fixes harder for users to understand. In.. 2023. 9. 28.
[2023-09-27] Today's NLP PRiSM: Enhancing Low-Resource Document-Level Relation Extraction with Relation-Aware Score Calibration Abstract: Document-level relation extraction (DocRE) aims to extract the relations of all entity pairs in a document. A key challenge in DocRE is the cost of annotating such data, which requires intensive human effort. Thus, we investigate DocRE in a low-resource setting, and we find that.. 2023. 9. 27.