
All posts (599)

[2022-08-20] Today's NLP: Neural Embeddings for Text. We propose a new kind of embedding for natural language text that deeply represents semantic meaning. Standard text embeddings use the vector output of a pretrained language model. In our method, we let a language model learn from the text and then literally pick its brain, taking the actual weights of the model's neurons to generate a vector. We call this representati.. 2022. 8. 20.
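The excerpt above describes embeddings built from a model's learned weights rather than from its output activations. As a rough illustration only (not the paper's actual procedure; the toy model and function name below are hypothetical), one could briefly fit a tiny next-character model on a text and flatten its parameters into a single vector:

```python
# Hypothetical sketch: derive a text "embedding" from model weights after
# briefly fitting a small model on the text, rather than from its outputs.
import torch
import torch.nn as nn

def weights_as_embedding(text: str, steps: int = 10) -> torch.Tensor:
    # Toy character-level next-symbol predictor standing in for a language model.
    vocab = sorted(set(text))
    char_to_id = {c: i for i, c in enumerate(vocab)}
    ids = torch.tensor([char_to_id[c] for c in text])

    emb = nn.Embedding(len(vocab), 16)
    head = nn.Linear(16, len(vocab))
    opt = torch.optim.Adam(list(emb.parameters()) + list(head.parameters()), lr=1e-2)

    # Let the model "learn from the text" via next-character prediction.
    for _ in range(steps):
        logits = head(emb(ids[:-1]))
        loss = nn.functional.cross_entropy(logits, ids[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

    # "Pick its brain": concatenate the learned weights into one vector.
    return torch.cat([p.detach().flatten() for p in emb.parameters()] +
                     [p.detach().flatten() for p in head.parameters()])

vec = weights_as_embedding("natural language text")
print(vec.shape)
```

Two texts could then be compared, for example by cosine similarity between their weight vectors, provided the models share the same architecture and vocabulary so the vectors have matching dimensions.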
[2022-08-19] Today's NLP: QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture called QAN.. 2022. 8. 19.
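For intuition, here is a minimal PyTorch sketch of the pattern the summary describes: an encoder block that applies local (depthwise separable) convolutions followed by global multi-head self-attention, with no recurrence. The layer sizes are illustrative and not taken from the paper:

```python
# Rough sketch of a convolution + self-attention encoder block in the spirit
# of the summary above; dimensions and kernel size are illustrative only.
import torch
import torch.nn as nn

class ConvSelfAttentionBlock(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 8, kernel_size: int = 7):
        super().__init__()
        # Depthwise separable convolution captures local structure cheaply.
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Conv1d(d_model, d_model, kernel_size=1)
        # Multi-head self-attention captures global interactions.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, d_model)
        h = self.norm1(x).transpose(1, 2)                # to (batch, d_model, seq)
        conv = self.pointwise(self.depthwise(h)).transpose(1, 2)
        x = x + conv                                     # residual over local features
        q = self.norm2(x)
        attn_out, _ = self.attn(q, q, q)
        return x + attn_out                              # residual over global features

block = ConvSelfAttentionBlock()
print(block(torch.randn(2, 50, 128)).shape)  # torch.Size([2, 50, 128])
```

Unlike an RNN, both the convolution and the attention here are computed over all positions in parallel, which is the source of the training and inference speedup the abstract alludes to.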
[2022-08-19] Today's NLP: Learning Transductions to Test Systematic Compositionality. Recombining known primitive concepts into larger novel combinations is a quintessentially human cognitive capability. Whether large neural models in NLP acquire this ability while learning from data is an open question. In this paper, we look at this problem from the perspective of formal languages. We use deterministic finite-state tran.. 2022. 8. 19.
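The summary refers to deterministic finite-state transducers (string-to-string mappings) as the formal lens for probing compositional generalization. A toy example of such a transducer in plain Python might look like the following; the specific doubling transduction is an illustrative choice, not one from the paper:

```python
# Toy deterministic finite-state transducer: each (state, input symbol) pair
# maps to exactly one (next state, output string).
def make_dfst(transitions, start):
    """transitions: {(state, in_symbol): (next_state, out_string)}"""
    def run(inp: str) -> str:
        state, out = start, []
        for sym in inp:
            state, emitted = transitions[(state, sym)]
            out.append(emitted)
        return "".join(out)
    return run

# A one-state transducer that doubles every input symbol: "abc" -> "aabbcc".
double = make_dfst({("q0", c): ("q0", c * 2) for c in "abc"}, "q0")
print(double("abc"))  # aabbcc
```

Because such transducers are fully specified, one can generate training and held-out strings from them and check whether a neural model reproduces the transduction on novel combinations of the primitives.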
[2022-08-18] Today's NLP: Temporal Concept Drift and Alignment: An empirical approach to comparing Knowledge Organization Systems over time. This research explores temporal concept drift and temporal alignment in knowledge organization systems (KOS). A comparative analysis is pursued using the 1910 Library of Congress Subject Headings, 2020 FAST Topical, and automatic indexing. The use case involves a sample of 90 ninetee.. 2022. 8. 18.
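Purely as an illustration of what comparing a vocabulary across time can involve (the terms below are invented, not the study's LCSH or FAST data), a crude drift measure is the Jaccard overlap between two snapshots of a heading set:

```python
# Illustrative only: quantify vocabulary drift between two snapshots of a
# subject-heading scheme by set overlap. The headings here are made up.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

headings_old = {"Electricity", "Aeronautics", "Telegraph"}
headings_new = {"Electricity", "Aeronautics", "Internet"}
print(f"overlap = {jaccard(headings_old, headings_new):.2f}")  # overlap = 0.50
```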