
오늘의 자연어 처리 (572)

[2023-03-09] 오늘의 자연어처리 Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction Large language models (LLMs) show great potential for synthetic data generation. This work shows that useful data can be synthetically generated even for tasks that cannot be solved directly by the LLM: we show that, for problems with structured outputs, it is possible to prompt an LLM to .. 2023. 3. 9.
[2023-03-08] 오늘의 자연어처리 Mining both Commonality and Specificity from Multiple Documents for Multi-Document Summarization The multi-document summarization task requires a summarizer to generate a short text that covers the important information of the original documents while maintaining content diversity. This paper proposes a multi-document summarization approach based on hierarchical clustering of documents. It ut.. 2023. 3. 8.
[2023-03-07] 오늘의 자연어처리 NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning Backdoor attacks are currently attracting attention because of the harm they do to deep learning models. The adversary poisons the training data so that a backdoor is injected into the model when victims unknowingly train on the poisoned dataset. In the field of text, however, existing works do not provide sufficien.. 2023. 3. 7.
[2023-03-06] 오늘의 자연어처리 Language-Universal Adapter Learning with Knowledge Distillation for End-to-End Multilingual Speech Recognition In this paper, we propose a language-universal adapter learning framework based on a pre-trained model for end-to-end multilingual automatic speech recognition (ASR). For acoustic modeling, the wav2vec 2.0 pre-trained model is fine-tuned by inserting language-specific and language-unive.. 2023. 3. 6.