
Today's Natural Language Processing (572)

[2023-12-18] Today's NLP — Zebra: Extending Context Window with Layerwise Grouped Local-Global Attention Abstract: This paper introduces a novel approach to enhance the capabilities of Large Language Models (LLMs) in processing and understanding extensive text sequences, a critical aspect in applications requiring deep comprehension and synthesis of large volumes of information. Recognizing the inherent challenges in exten.. 2023. 12. 18.
[2023-12-17] Today's NLP — ChatGPT for Arabic Grammatical Error Correction Abstract: Recently, large language models (LLMs) fine-tuned to follow human instruction have exhibited significant capabilities in various English NLP tasks. However, their performance in grammatical error correction (GEC) tasks, particularly in non-English languages, remains significantly unexplored. In this paper, we delve into abilities of instru.. 2023. 12. 17.
[2023-12-16] Today's NLP — ChatGPT for Arabic Grammatical Error Correction Abstract: Recently, large language models (LLMs) fine-tuned to follow human instruction have exhibited significant capabilities in various English NLP tasks. However, their performance in grammatical error correction (GEC) tasks, particularly in non-English languages, remains significantly unexplored. In this paper, we delve into abilities of instru.. 2023. 12. 16.
[2023-12-13] Today's NLP — Exploiting Representation Bias for Data Distillation in Abstractive Text Summarization Abstract: Abstractive text summarization is surging with the number of training samples to cater to the needs of the deep learning models. These models tend to exploit the training data representations to attain superior performance by improving the quantitative element of the resultant summary. However, increa.. 2023. 12. 13.