
Today's NLP (572 posts)

[2022-08-21] Today's NLP - Learning Transductions to Test Systematic Compositionality. Recombining known primitive concepts into larger novel combinations is a quintessentially human cognitive capability. Whether large neural models in NLP acquire this ability while learning from data is an open question. In this paper, we look at this problem from the perspective of formal languages. We use deterministic finite-state tran..
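As a rough illustration of the formal device this excerpt mentions, here is a toy deterministic finite-state transducer in Python; the states, alphabet, and rewrite rule are made up for the example and are not taken from the paper.

```python
# Minimal sketch of a deterministic finite-state transducer (FST), the kind of
# formal device used to define string-to-string transduction tasks.
# The toy machine and rule below are illustrative only, not from the paper.

class DeterministicFST:
    def __init__(self, start, transitions, accepting):
        self.start = start              # initial state
        self.transitions = transitions  # (state, symbol) -> (next_state, output_string)
        self.accepting = accepting      # set of accepting states

    def transduce(self, symbols):
        state, output = self.start, []
        for s in symbols:
            if (state, s) not in self.transitions:
                raise ValueError(f"no transition for ({state}, {s})")
            state, out = self.transitions[(state, s)]
            output.append(out)
        if state not in self.accepting:
            raise ValueError("input rejected: ended in a non-accepting state")
        return "".join(output)

# Toy transduction: copy 'a' as-is, rewrite 'b' as 'bb' (a simple compositional rule).
fst = DeterministicFST(
    start=0,
    transitions={(0, "a"): (0, "a"), (0, "b"): (0, "bb")},
    accepting={0},
)
print(fst.transduce("abab"))  # -> "abbabb"
```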
[2022-08-20] Today's NLP - Ask Question First for Enhancing Lifelong Language Learning. Lifelong language learning aims to learn from a stream of NLP tasks while retaining knowledge of previous tasks. Previous work based on language models under the data-free constraint has explored formatting all data as "begin token (B) + context (C) + question (Q) + answer (A)" for differ..
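A minimal sketch of the serialization the excerpt describes, plus a question-first variant suggested by the paper's title; the concrete begin token and the exact reordering are my assumptions, not details confirmed by the abstract.

```python
# Sketch of the data formatting from the excerpt: every example is serialized as
# "begin token (B) + context (C) + question (Q) + answer (A)". The token string
# and the question-first variant are assumptions for illustration.

def format_bcqa(context: str, question: str, answer: str, question_first: bool = False) -> str:
    B = "<BOS>"  # hypothetical begin token
    if question_first:
        # "Ask Question First": place Q before C so the model sees the question early.
        parts = [B, question, context, answer]
    else:
        # Baseline ordering used in prior data-free lifelong learning work.
        parts = [B, context, question, answer]
    return " ".join(parts)

sample = {
    "context": "The Nile is the longest river in Africa.",
    "question": "Which river is the longest in Africa?",
    "answer": "The Nile",
}
print(format_bcqa(**sample))                       # B C Q A
print(format_bcqa(**sample, question_first=True))  # B Q C A
```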
[2022-08-20] Today's NLP - Neural Embeddings for Text. We propose a new kind of embedding for natural language text that deeply represents semantic meaning. Standard text embeddings use the vector output of a pretrained language model. In our method, we let a language model learn from the text and then literally pick its brain, taking the actual weights of the model's neurons to generate a vector. We call this representati..
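A very loose sketch of the idea in the excerpt, using a tiny stand-in model: briefly train on the text with a next-token objective, then flatten some of the trained weights into a vector. The stand-in architecture, training objective, and choice of which weights to read out are all assumptions, not the paper's actual recipe.

```python
# Rough sketch: instead of using a model's output vector as the embedding,
# let the model learn from the text and read the embedding off its own weights.
# The tiny "language model", loss, and weight selection below are assumptions.

import torch
import torch.nn as nn

def weight_embedding(token_ids: torch.Tensor, vocab_size: int = 100, dim: int = 16,
                     steps: int = 20) -> torch.Tensor:
    model = nn.Sequential(nn.Embedding(vocab_size, dim), nn.Linear(dim, vocab_size))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    # "Let the model learn from the text": next-token prediction on the document.
    for _ in range(steps):
        opt.zero_grad()
        logits = model(token_ids[:-1])
        loss = loss_fn(logits, token_ids[1:])
        loss.backward()
        opt.step()
    # "Pick its brain": flatten (a subset of) the trained weights into one vector.
    return torch.cat([p.detach().flatten() for p in model[1].parameters()])

doc = torch.randint(0, 100, (50,))  # toy token ids standing in for a real document
print(weight_embedding(doc).shape)
```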
[2022-08-19] Today's NLP - QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. Current end-to-end machine reading and question answering (Q&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q&A architecture called QAN..
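A minimal PyTorch sketch of a QANet-style encoder block, pairing depthwise-separable convolution (local features) with multi-head self-attention (global context) and no recurrence; layer sizes and counts are illustrative, not the paper's exact configuration.

```python
# QANet-style encoder block: depthwise-separable conv for local structure,
# then multi-head self-attention for global context, with residual connections
# and no RNN. Sizes below are illustrative.

import torch
import torch.nn as nn

class QANetEncoderBlock(nn.Module):
    def __init__(self, d_model: int = 128, kernel_size: int = 7, n_heads: int = 8):
        super().__init__()
        # Depthwise separable conv: per-channel conv then 1x1 pointwise mixing.
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Conv1d(d_model, d_model, 1)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                nn.Linear(d_model, d_model))

    def forward(self, x):  # x: (batch, seq_len, d_model)
        # Local convolution with a residual connection.
        c = self.pointwise(self.depthwise(self.norm1(x).transpose(1, 2))).transpose(1, 2)
        x = x + c
        # Global self-attention with a residual connection.
        a, _ = self.attn(self.norm2(x), self.norm2(x), self.norm2(x))
        x = x + a
        return x + self.ff(x)

block = QANetEncoderBlock()
print(block(torch.randn(2, 30, 128)).shape)  # torch.Size([2, 30, 128])
```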