
All posts (599)

[2023-03-30] Today's NLP — Joint embedding in Hierarchical distance and semantic representation learning for link prediction. The link prediction task aims to predict missing entities or relations in a knowledge graph and is essential for downstream applications. Existing well-known models deal with this task mainly by representing knowledge graph triplets in a distance space or a semantic space. However, ..
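The "distance space" family mentioned in this abstract scores a triplet (head, relation, tail) by how close the translated head embedding lands to the tail. A minimal TransE-style sketch, with toy random embeddings and illustrative entity/relation names that are not from the paper:

```python
import numpy as np

# Hypothetical toy embeddings; names are illustrative only.
rng = np.random.default_rng(0)
dim = 8
entities = {name: rng.normal(size=dim) for name in ["paris", "france", "berlin"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """Distance-space plausibility: a smaller ||h + r - t|| (i.e. a score
    closer to zero) means the triplet (h, r, t) is more plausible."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Link prediction = rank candidate tails for (paris, capital_of, ?).
ranked = sorted(entities, key=lambda t: -transe_score("paris", "capital_of", t))
```

With trained embeddings, the correct tail would rank first; semantic-space models replace the norm-based score with a similarity over learned semantic representations.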
[2023-03-29] Today's NLP — TextMI: Textualize Multimodal Information for Integrating Non-verbal Cues in Pre-trained Language Models. Pre-trained large language models have recently achieved ground-breaking performance in a wide variety of language understanding tasks. However, the same models cannot be applied to multimodal behavior understanding tasks (e.g., video sentiment/humor detection) unless non-verbal features (e.g..
[2023-03-28] Today's NLP — Enhancing Unsupervised Speech Recognition with Diffusion GANs. We enhance the vanilla adversarial training method for unsupervised Automatic Speech Recognition (ASR) with a diffusion-GAN. Our model (1) injects instance noises of various intensities into the generator's output and into unlabeled reference text sampled from pretrained phoneme language models with a length constraint, (2) asks diff..
[2023-03-27] Today's NLP — Retrieval-Augmented Classification with Decoupled Representation. Pretrained language models (PLMs) have shown marvelous improvements across various NLP tasks. Most Chinese PLMs simply treat an input text as a sequence of characters and completely ignore word information. Although Whole Word Masking can alleviate this, word-level semantics are still not well represented. In this paper, we revis..
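The Whole Word Masking idea referenced above masks all subword pieces of a word together, instead of masking pieces independently. A minimal sketch under the usual WordPiece convention that a `##` prefix marks a continuation piece (the tokens and probabilities here are illustrative, not from the paper):

```python
import random

def whole_word_mask(tokens, mask_prob=0.3, seed=0):
    """Group WordPiece tokens into whole words ('##' marks a continuation
    of the previous piece), then mask every piece of a selected word."""
    rng = random.Random(seed)
    # Build word groups as lists of token indices.
    groups = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and groups:
            groups[-1].append(i)  # continuation piece joins the current word
        else:
            groups.append([i])    # start of a new word
    out = list(tokens)
    for group in groups:
        if rng.random() < mask_prob:
            for i in group:       # mask all pieces of this word together
                out[i] = "[MASK]"
    return out

masked = whole_word_mask(["un", "##believ", "##able", "story"])
```

Either every piece of a word is replaced by `[MASK]` or none is, so the model can never recover a masked word from its surviving fragments.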