
Today's NLP (572)

[2023-02-11] Today's NLP: Massively Multilingual Language Models for Cross Lingual Fact Extraction from Low Resource Indian Languages. Massive knowledge graphs like Wikidata attempt to capture world knowledge about multiple entities. Recent approaches concentrate on automatically enriching these KGs from text. However, a lot of information present in the form of natural text in low-resource languages is often missed out. C.. 2023. 2. 11.
[2023-02-10] Today's NLP: Diagnosing and Rectifying Vision Models using Language. Recent multi-modal contrastive learning models have demonstrated the ability to learn an embedding space suitable for building strong vision classifiers by leveraging the rich information in large-scale image-caption datasets. Our work highlights a distinct advantage of this multi-modal embedding space: the ability to diagnose vision classi.. 2023. 2. 10.
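The excerpt above refers to contrastive image-text embeddings of the CLIP family. As a rough illustration of how such an embedding space yields a vision classifier without task-specific training, the minimal sketch below scores an image against free-text class prompts; the checkpoint name and prompts are assumptions for the example, not the setup used in the cited paper.

```python
# Minimal sketch: zero-shot image classification from a contrastive
# image-text embedding space (CLIP-style). Checkpoint and prompts are
# illustrative assumptions, not the cited paper's configuration.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))  # stand-in for a real photo
prompts = ["a photo of a cat", "a photo of a dog", "a photo of a truck"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-to-text similarity
for prompt, p in zip(prompts, probs[0].tolist()):
    print(f"{prompt}: {p:.3f}")
```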
[2023-02-09] Today's NLP: Learning Translation Quality Evaluation on Low Resource Languages from Large Language Models. Learned metrics such as BLEURT have in recent years become widely employed to evaluate the quality of machine translation systems. Training such metrics requires data which can be expensive and difficult to acquire, particularly for lower-resource languages. We show how knowledge can be distilled from La.. 2023. 2. 9.
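For readers unfamiliar with "learned metrics", the sketch below shows the basic shape of such a system: a cross-encoder with a regression head reads a (candidate, reference) pair and outputs a scalar quality score. The backbone name is a placeholder and the regression head is untrained here, so the value is meaningless; this only illustrates the interface a BLEURT-style metric exposes, not the distillation method of the paper.

```python
# Sketch of the interface of a learned MT quality metric: a cross-encoder
# regression head maps a (candidate, reference) pair to a scalar score.
# The head below is randomly initialized, so the number is illustrative only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-multilingual-cased"  # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=1)

candidate = "The cat sits on the mat."
reference = "The cat is sitting on the mat."

inputs = tokenizer(candidate, reference, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # scalar quality estimate
print(f"quality score: {score:.3f}")
```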
[2023-02-08] Today's NLP: Controllable Lexical Simplification for English. Fine-tuning Transformer-based approaches have recently shown exciting results on the sentence simplification task. However, so far, no research has applied similar approaches to the Lexical Simplification (LS) task. In this paper, we present ConLS, a Controllable Lexical Simplification system fine-tuned with T5 (a Transformer-based model pre-trained wi.. 2023. 2. 8.
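The ConLS system described above fine-tunes T5 with control over the simplification output. As a hedged sketch of what a control-token interface to a T5 model can look like (the prefix and token names here are invented for illustration and are not ConLS's actual format), one might prepend constraints to the source sentence before generation:

```python
# Sketch of a control-token style prompt to a T5 model for lexical
# simplification. The prefix and control tokens are hypothetical, and the
# base t5-small checkpoint is not fine-tuned for this task, so the output
# only demonstrates the mechanics of the interface.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical controls: target length ratio and lexical complexity.
source = "<LEN_0.8> <LEX_0.5> simplify: The committee convened to deliberate on the proposal."
input_ids = tokenizer(source, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```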