[2022-08-27] Today's NLP (오늘의 자연어처리)
Bitext Mining for Low-Resource Languages via Contrastive Learning
Mining high-quality bitexts for low-resource languages is challenging. This paper shows that sentence representations of language models fine-tuned with multiple negatives ranking loss, a contrastive objective, help retrieve clean bitexts. Experiments show that parallel data mined with our approach substantially outperform the pre..

[2022-08-26] Today's NLP (오늘의 자연어처리)
Bitext Mining for Low-Resource Languages via Contrastive Learning
Mining high-quality bitexts for low-resource languages is challenging. This paper shows that sentence representations of language models fine-tuned with multiple negatives ranking loss, a contrastive objective, help retrieve clean bitexts. Experiments show that parallel data mined with our approach substantially outperform the pre..

[2022-08-25] Today's NLP (오늘의 자연어처리)
GenTUS: Simulating User Behaviour and Language in Task-oriented Dialogues with Generative Transformers
User simulators (USs) are commonly used to train task-oriented dialogue systems (DSs) via reinforcement learning. The interactions often take place at the semantic level for efficiency, but there is still a gap from semantic actions to natural language, which causes a mismatch between training and ..

[2022-08-25] Today's NLP (오늘의 자연어처리)
A Novel Multi-Task Learning Approach for Context-Sensitive Compound Type Identification in Sanskrit
The phenomenon of compounding is ubiquitous in Sanskrit. It serves to achieve brevity in expressing thoughts, while simultaneously enriching the lexical and structural formation of the language. In this work, we focus on the Sanskrit Compound Type Identification (SaCTI) task, where we consider ..
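The bitext-mining entries above describe one concrete recipe: fine-tune a sentence encoder with a multiple negatives ranking (contrastive) loss on known parallel pairs, then score candidate sentence pairs by embedding similarity. Below is a minimal sketch of that recipe using the sentence-transformers library; the LaBSE checkpoint, the toy sentence pairs, and the hyperparameters are illustrative assumptions, not details taken from the paper.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumption: LaBSE as the base multilingual encoder; any sentence encoder works here.
model = SentenceTransformer("sentence-transformers/LaBSE")

# Hypothetical seed parallel pairs (source, target); in practice these come from
# whatever clean bitext is available for the language pair.
train_examples = [
    InputExample(texts=["Hello, how are you?", "Bonjour, comment allez-vous ?"]),
    InputExample(texts=["Good morning.", "Bonjour."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Multiple negatives ranking loss: for each (src, tgt) pair in a batch, the other
# in-batch targets act as negatives -- a contrastive objective over sentence embeddings.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)

# Mining step: encode both sides and keep candidate pairs with high cosine similarity.
src_emb = model.encode(["A low-resource source sentence."], normalize_embeddings=True)
tgt_emb = model.encode(["A candidate target sentence."], normalize_embeddings=True)
score = (src_emb @ tgt_emb.T)[0, 0]
print(f"cosine similarity: {score:.3f}")
```

With this loss, every other target sentence in the batch serves as an in-batch negative, so larger batches give a harder contrastive signal; how the fine-tuned representations are then used to retrieve clean bitexts at scale is the subject of the paper itself.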