
Today's NLP (572)

[2023-08-30] Today's NLP: KGConv, a Conversational Corpus grounded in Wikidata. We present KGConv, a large conversational corpus of 71k conversations where each question-answer pair is grounded in a Wikidata fact. Conversations contain on average 8.6 questions, and for each Wikidata fact we provide multiple variants (12 on average) of the corresponding question using templates, human annotations, hand-crafted rules and a.. 2023. 8. 30.
[2023-08-30] Today's NLP: Examining User-Friendly and Open-Sourced Large GPT Models: A Survey on Language, Multimodal, and Scientific GPT Models. Generative pre-trained transformer (GPT) models have revolutionized the field of natural language processing (NLP) with remarkable performance on various tasks and also extend their power to multimodal domains. Despite their success, large GPT models like GPT-4 face inherent lim.. 2023. 8. 30.
[2023-08-29] Today's NLP: Chunk, Align, Select: A Simple Long-sequence Processing Method for Transformers. Although dominant in natural language processing, transformer-based models remain challenged by the task of long-sequence processing, because the computational cost of self-attention operations in transformers swells quadratically with the input sequence length. To alleviate the complexity of long-sequence processing.. 2023. 8. 29. (See the sketch after this list.)
[2023-08-28] Today's NLP: Probabilistic Method of Measuring Linguistic Productivity. In this paper I propose a new way of measuring linguistic productivity that objectively assesses the ability of an affix to be used to coin new complex words and, unlike other popular measures, is not directly dependent upon token frequency. Specifically, I suggest that linguistic productivity may be viewed as the probability of an affix .. 2023. 8. 28.
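The Chunk, Align, Select entry above turns on the fact that self-attention cost grows quadratically with sequence length. Purely as a back-of-the-envelope illustration (the chunk size, the rough FLOP formula, and the naive within-chunk attention accounting below are my assumptions, not the paper's actual method), splitting an 8k-token input into 512-token chunks cuts the attention cost by roughly a factor of n/c:

```python
# Minimal sketch: why chunking tames the quadratic cost of self-attention.
# Full self-attention over n tokens costs roughly O(n^2 * d); attending only
# within fixed-size chunks of c tokens costs O(n * c * d) in total.
# The chunk size c and the FLOP estimate are illustrative assumptions.

def attention_flops(n: int, d: int) -> int:
    """Rough FLOP count for QK^T and softmax(QK^T)V over n tokens of width d."""
    return 2 * n * n * d

def chunk(tokens: list[int], c: int) -> list[list[int]]:
    """Split a token-id sequence into consecutive chunks of at most c tokens."""
    return [tokens[i:i + c] for i in range(0, len(tokens), c)]

n, d, c = 8192, 64, 512            # sequence length, head dim, chunk size (assumed)
tokens = list(range(n))            # dummy token ids
chunks = chunk(tokens, c)

full_cost = attention_flops(n, d)
chunked_cost = sum(attention_flops(len(ch), d) for ch in chunks)
print(f"full attention          : {full_cost:,} FLOPs")
print(f"chunked ({c}-token chunks): {chunked_cost:,} FLOPs "
      f"({full_cost / chunked_cost:.0f}x cheaper)")
```

With these numbers the chunked variant is about 16x cheaper, matching the n/c ratio; the paper's "align" and "select" steps then decide which chunk representations to keep, which this toy sketch does not model.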