[2022-08-28] Today's NLP: DPTDR: Deep Prompt Tuning for Dense Passage Retrieval. Deep prompt tuning (DPT) has achieved great success in most natural language processing (NLP) tasks. However, it has not been well investigated in dense retrieval, where fine-tuning (FT) still dominates. When deploying multiple retrieval tasks with the same backbone model (e.g., RoBERTa), FT-based methods are unfriendly in terms of deployment cost: e.. 2022. 8. 28.
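The deployment-cost argument above can be made concrete with a back-of-the-envelope calculation: with full fine-tuning, every task stores its own copy of the backbone, whereas with prompt tuning each task only stores a small matrix of learnable prompt embeddings. The numbers below are hypothetical placeholders for illustration, not figures from the DPTDR paper.

```python
# Toy illustration of the storage-cost gap between fine-tuning (FT)
# and deep prompt tuning (DPT). Sizes are hypothetical, not RoBERTa's.
BACKBONE_PARAMS = 125_000_000   # shared backbone, frozen under DPT
HIDDEN = 768                    # hidden size of the backbone
PROMPT_LEN = 40                 # learnable prompt tokens per task

def deployment_cost(num_tasks: int, fine_tuned: bool) -> int:
    """Parameters that must be stored to serve `num_tasks` retrieval tasks."""
    if fine_tuned:
        # FT: one full backbone copy per task.
        return num_tasks * BACKBONE_PARAMS
    # DPT: one shared backbone plus a small per-task prompt matrix.
    per_task_prompt = PROMPT_LEN * HIDDEN
    return BACKBONE_PARAMS + num_tasks * per_task_prompt
```

Even at ten tasks, the prompt-tuned setup stays close to a single backbone's size, while the fine-tuned setup grows linearly with the task count.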
[2022-08-27] Today's NLP: Universality and diversity in word patterns. Words are fundamental linguistic units that connect thoughts and things through meaning. However, words do not appear independently in a text sequence. The existence of syntactic rules induces correlations among neighboring words. Further, words are not evenly distributed but approximately follow a power law, since terms with a pure semantic content appe.. 2022. 8. 27.
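The power-law distribution mentioned in this entry is the familiar Zipf-style rank–frequency relation, where the frequency of the word at rank r falls off roughly as r to a negative exponent. A minimal sketch of such a distribution (generic, not tied to this paper's corpus or fitted exponent):

```python
import numpy as np

def zipf_frequencies(num_words: int, exponent: float = 1.0) -> np.ndarray:
    """Normalized rank-frequency distribution f(r) ∝ r**(-exponent)."""
    ranks = np.arange(1, num_words + 1)
    weights = ranks.astype(float) ** (-exponent)
    return weights / weights.sum()  # frequencies sum to 1
```

With exponent 1, the most frequent word is twice as frequent as the second-ranked one, which is the classic Zipf signature.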
[2022-08-27] Today's NLP: Bitext Mining for Low-Resource Languages via Contrastive Learning. Mining high-quality bitexts for low-resource languages is challenging. This paper shows that sentence representations of language models fine-tuned with the multiple negatives ranking loss, a contrastive objective, help retrieve clean bitexts. Experiments show that parallel data mined with our approach substantially outperforms the pre.. 2022. 8. 27.
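The multiple negatives ranking loss named in this entry is the standard in-batch contrastive objective: each anchor sentence is pulled toward its paired translation, and every other pair's translation in the batch serves as a negative. A minimal NumPy sketch of that loss (a generic formulation, not the paper's exact implementation; the scale factor is an assumed hyperparameter):

```python
import numpy as np

def multiple_negatives_ranking_loss(anchors: np.ndarray,
                                    positives: np.ndarray,
                                    scale: float = 20.0) -> float:
    """In-batch contrastive loss over (batch, dim) embedding matrices.

    Row i of `positives` is the true pair for row i of `anchors`;
    all other rows act as negatives.
    """
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * a @ p.T                     # (batch, batch) scaled cosines
    # Row-wise cross-entropy with the matching pair on the diagonal.
    logits = scores - scores.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

Minimizing this loss ranks each sentence's true translation above every in-batch negative, which is exactly the property that makes the resulting embeddings useful for retrieving clean bitexts.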