
Today's Natural Language Processing (572 posts)

[2023-10-20] Today's NLP — Evaluating the Symbol Binding Ability of Large Language Models for Multiple-Choice Questions in Vietnamese General Education. Abstract: In this paper, we evaluate the ability of large language models (LLMs) to perform multiple choice symbol binding (MCSB) for multiple choice question answering (MCQA) tasks in zero-shot, one-shot, and few-shot settings. We focus on Vietnamese, with fewer challengin..
[2023-10-19] Today's NLP — Utilizing Weak Supervision To Generate Indonesian Conservation Dataset. Abstract: Weak supervision has emerged as a promising approach for rapid and large-scale dataset creation in response to the increasing demand for accelerated NLP development. By leveraging labeling functions, weak supervision allows practitioners to generate datasets quickly by creating learned label models that produce soft-..
[2023-10-18] Today's NLP — In-Context Pretraining: Language Modeling Beyond Document Boundaries. Abstract: Large language models (LMs) are currently trained to predict tokens given document prefixes, enabling them to directly perform long-form generation and prompting-style tasks which can be reduced to document completion. Existing pretraining pipelines train LMs by concatenating random sets of short documents to create in..
[2023-10-17] Today's NLP — CAMELL: Confidence-based Acquisition Model for Efficient Self-supervised Active Learning with Label Validation. Abstract: Supervised neural approaches are hindered by their dependence on large, meticulously annotated datasets, a requirement that is particularly cumbersome for sequential tasks. The quality of annotations tends to deteriorate with the transition from expert-based to crowd-sourced la..