페이퍼572

[2023-11-30] Today's NLP CDEval: A Benchmark for Measuring the Cultural Dimensions of Large Language Models Abstract: As the scaling of Large Language Models (LLMs) has dramatically enhanced their capabilities, there has been a growing focus on the alignment problem to ensure their responsible and ethical use. While existing alignment efforts predominantly concentrate on universal values such as the HHH principle, the as.. 2023. 11. 30.
[2023-11-29] Today's NLP YUAN 2.0: A Large Language Model with Localized Filtering-based Attention Abstract: In this work, the Localized Filtering-based Attention (LFA) is introduced to incorporate prior knowledge of local dependencies of natural language into Attention. Based on LFA, we develop and release Yuan 2.0, a large language model with parameters ranging from 2.1 billion to 102.6 billion. A data filtering and ge.. 2023. 11. 29.
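The abstract doesn't spell out how LFA is built beyond injecting a local-dependency prior into attention. As a minimal sketch under that assumption, a depthwise causal 1D convolution over the token dimension, applied to hidden states before the attention projections, is one way to encode such a prior; this is an illustrative stand-in, not Yuan 2.0's actual design.

```python
import torch
import torch.nn as nn

class LocalFilter(nn.Module):
    """Hypothetical local-dependency filter (illustration, not the paper's LFA).

    A depthwise causal 1D convolution smooths each hidden channel over a
    small window of preceding tokens before attention is computed.
    """
    def __init__(self, hidden_size: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(
            hidden_size, hidden_size,
            kernel_size=kernel_size,
            padding=kernel_size - 1,  # pad enough for causality, trim below
            groups=hidden_size,       # depthwise: one filter per channel
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden) -> Conv1d expects (batch, hidden, seq_len)
        y = self.conv(x.transpose(1, 2))
        y = y[..., : x.size(1)]       # drop right overhang to stay causal
        return y.transpose(1, 2)

x = torch.randn(2, 16, 64)             # (batch, seq_len, hidden)
print(LocalFilter(64)(x).shape)        # torch.Size([2, 16, 64])
```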
[2023-11-28] Today's NLP Exploring Methods for Cross-lingual Text Style Transfer: The Case of Text Detoxification Abstract: Text detoxification is the task of transferring the style of text from toxic to neutral. While there are approaches yielding promising results in a monolingual setup, e.g., (Dale et al., 2021; Hallinan et al., 2022), cross-lingual transfer for this task remains a challenging open problem (Moskovskiy et.. 2023. 11. 28.
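One baseline this line of work considers is pivoting through English: translate the toxic input, detoxify it with a monolingual English model, and translate back. A minimal sketch assuming a Russian source and public checkpoints chosen for illustration, not necessarily the ones evaluated in the paper:

```python
from transformers import pipeline

# Illustrative public checkpoints; swap in models for your language pair.
to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")
detox = pipeline("text2text-generation", model="s-nlp/bart-base-detox")
to_ru = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ru")

def detoxify_via_english(text: str) -> str:
    """Translate -> detoxify in English -> back-translate."""
    en = to_en(text)[0]["translation_text"]
    neutral = detox(en)[0]["generated_text"]
    return to_ru(neutral)[0]["translation_text"]
```

The known weakness of such a pipeline, and part of what makes cross-lingual transfer an open problem, is that both meaning and style can drift across two rounds of translation.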
[2023-11-27] Today's NLP Detecting out-of-distribution text using topological features of transformer-based language models Abstract: We attempt to detect out-of-distribution (OOD) text samples through applying Topological Data Analysis (TDA) to attention maps in transformer-based language models. We evaluate our proposed TDA-based approach for out-of-distribution detection on BERT, a transformer-based language model, and.. 2023. 11. 27.
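The general recipe here, converting an attention map into a distance matrix and summarizing its persistent homology, can be sketched as below. The paper's exact TDA features may differ; the total H0 persistence used as a score is an illustrative choice, and `ripser` is just one off-the-shelf persistence library.

```python
import numpy as np
import torch
from ripser import ripser
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

def attention_persistence(text: str, layer: int = -1, head: int = 0) -> float:
    """Total H0 persistence of one attention head's map (illustrative OOD score)."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    A = out.attentions[layer][0, head].numpy()  # (seq, seq) attention weights
    A = (A + A.T) / 2                 # symmetrize so it can act as a similarity
    D = 1.0 - A / A.max()             # similarity -> pseudo-distance
    np.fill_diagonal(D, 0.0)
    dgm0 = ripser(D, distance_matrix=True, maxdim=0)["dgms"][0]
    finite = dgm0[np.isfinite(dgm0[:, 1])]      # drop the infinite H0 bar
    return float((finite[:, 1] - finite[:, 0]).sum())

print(attention_persistence("A sentence from the training distribution."))
```

A score like this would then feed a simple detector, e.g., a threshold or a small classifier separating in-distribution from OOD inputs.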