
Today's NLP (572 posts)

[2023-05-16] Today's NLP: Prompt Learning to Mitigate Catastrophic Forgetting in Cross-lingual Transfer for Open-domain Dialogue Generation. Dialogue systems for non-English languages have long been under-explored. In this paper, we take the first step to investigate few-shot cross-lingual transfer learning (FS-XLT) and multitask learning (MTL) in the context of open-domain dialogue generation for non-English languages wi…
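The entry above turns on prompt learning with a frozen backbone: only a small set of continuous prompt vectors is trained, so the pretrained weights cannot drift and forget. Below is a minimal sketch of that idea; the stand-in backbone, prompt length, and optimizer are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of soft prompt tuning with a frozen backbone (illustrative;
# not the paper's exact configuration). Only the prompt embeddings receive
# gradients, which is how prompt learning limits catastrophic forgetting.
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze all pretrained weights
            p.requires_grad = False
        # trainable continuous prompt vectors, prepended to every input
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim)
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompt, token_embeds], dim=1))

# toy usage with a stand-in Transformer encoder as the "pretrained" model
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
model = SoftPromptModel(backbone, embed_dim=64)
optim = torch.optim.Adam([model.prompt], lr=1e-3)  # only prompts are updated
out = model(torch.randn(2, 10, 64))
print(out.shape)  # (2, 30, 64): 20 prompt positions + 10 input tokens
```

Because the optimizer only ever sees `model.prompt`, the multilingual backbone is left untouched after transfer, which is the core of the forgetting-mitigation argument.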
[2023-05-15] Today's NLP: Towards a Computational Analysis of Suspense: Detecting Dangerous Situations. Suspense is an important tool in storytelling to keep readers engaged and wanting to read more. However, it has so far not been studied extensively in Computational Literary Studies. In this paper, we focus on one of the elements authors can use to build up suspense: dangerous situations. We introduce a corpus of texts…
[2023-05-14] Today's NLP: INGENIOUS: Using Informative Data Subsets for Efficient Pre-Training of Large Language Models. A salient characteristic of large pre-trained language models (PTLMs) is a remarkable improvement in their generalization capability and the emergence of new capabilities with increasing model capacity and pre-training dataset size. Consequently, we are witnessing the development of enormous models pushing…
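INGENIOUS selects informative subsets of the pre-training corpus, and submodular objectives such as facility location are the standard machinery for this kind of coverage-based selection. Here is a hedged sketch of greedy facility-location selection over example embeddings; the feature source, similarity kernel, and budget are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch of informative-subset selection via greedy facility-location
# maximization: f(S) = sum_i max_{j in S} sim(i, j).
import numpy as np

def greedy_facility_location(features: np.ndarray, budget: int) -> list[int]:
    """Pick `budget` examples whose nearest-selected similarity covers the set."""
    # cosine similarity between all pairs of example embeddings
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = normed @ normed.T
    selected: list[int] = []
    coverage = np.zeros(len(features))  # best similarity to any selected point
    for _ in range(budget):
        # marginal gain of adding candidate j: how much it lifts total coverage
        gains = np.maximum(sim, coverage).sum(axis=1) - coverage.sum()
        gains[selected] = -np.inf  # never reselect
        best = int(np.argmax(gains))
        selected.append(best)
        coverage = np.maximum(coverage, sim[best])
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))          # stand-in for sentence embeddings
print(greedy_facility_location(X, 10))  # indices of a 10-example subset
```

The greedy rule carries the usual (1 - 1/e) approximation guarantee for monotone submodular maximization, which is why it is a safe default despite its simplicity.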
[2023-05-13] Today's NLP: Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting. Large language models (LLMs) demonstrate impressive multilingual capability, but their performance varies substantially across different languages. In this work, we introduce a simple yet effective method, called cross-lingual-thought prompting (XLT), to systematically improve the mu…
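XLT is a prompt template rather than a training method: it asks the model to restate a non-English request in English, reason step by step in English, and only then produce the answer. The helper below is a rough paraphrase of that recipe, not the paper's verbatim template.

```python
# Rough sketch of a cross-lingual-thought style prompt (illustrative
# paraphrase; see the XLT paper for the exact wording). The idea is to
# route the model's reasoning through English, its strongest language.
def build_xlt_prompt(request: str, task: str, source_language: str) -> str:
    return (
        f"I want you to act as an expert in {task}.\n"
        f"Request (in {source_language}): {request}\n"
        "1. Repeat the request in English.\n"
        "2. Think and solve the task step by step in English.\n"
        "3. Give the final answer.\n"
    )

print(build_xlt_prompt("2와 3의 합은?", "arithmetic reasoning", "Korean"))
```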