
All posts (599)

[2023-10-28] Today's NLP - InstOptima: Evolutionary Multi-objective Instruction Optimization via Large Language Model-based Instruction Operators. Abstract: Instruction-based language modeling has received significant attention in pretrained language models. However, the efficiency of instruction engineering remains low and hinders the development of instruction studies. Recent studies have focused on automating instruction.. 2023. 10. 28.
[2023-10-27] Today's NLP - DISCO: A Large Scale Human Annotated Corpus for Disfluency Correction in Indo-European Languages. Disfluency correction (DC) is the process of removing disfluent elements like fillers, repetitions and corrections from spoken utterances to create readable and interpretable text. DC is a vital post-processing step applied to Automatic Speech Recognition (ASR) outputs, before subsequent processing b.. 2023. 10. 27.
[2023-10-26] Today's NLP - MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning. Abstract: While large language models (LLMs) equipped with techniques like chain-of-thought prompting have demonstrated impressive capabilities, they still fall short in their ability to reason robustly in complex settings. However, evaluating LLM reasoning is challenging because system capabilities continue to grow while .. 2023. 10. 26.
[2023-10-25] Today's NLP - Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models. Abstract: With large language models (LLMs) poised to become embedded in our daily lives, questions are starting to be raised about the dataset(s) they learned from. These questions range from potential bias or misinformation LLMs could retain from their training data to questions of copyright and fair u.. 2023. 10. 25.