
Today's Natural Language Processing (572)

[2022-11-04] Today's NLP M-SpeechCLIP: Leveraging Large-Scale, Pre-Trained Models for Multilingual Speech to Image Retrieval. This work investigates the use of large-scale, pre-trained models (CLIP and HuBERT) for multilingual speech-image retrieval. For non-English speech-image retrieval, we outperform the current state of the art by a wide margin when training separate models for each language, and show tha.. 2022. 11. 4.
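The retrieval step described above can be sketched in a few lines: once speech is embedded into the same space as CLIP image embeddings (the projection itself is not shown, and this toy setup is an assumption, not the paper's exact pipeline), retrieval reduces to ranking images by cosine similarity.

```python
import numpy as np

def retrieve_images(speech_emb, image_embs, k=2):
    """Rank images by cosine similarity to a speech embedding.

    speech_emb: (d,) vector, e.g. a pooled HuBERT output projected
    into the CLIP image-embedding space (projection omitted here).
    image_embs: (n, d) matrix of CLIP image embeddings.
    Returns indices of the top-k images, best match first.
    """
    s = speech_emb / np.linalg.norm(speech_emb)
    im = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = im @ s                    # cosine similarities, shape (n,)
    return np.argsort(-sims)[:k]     # highest similarity first

# Toy example: three "images" in a 4-dim embedding space.
images = np.array([[1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.9, 0.1, 0.0, 0.0]])
speech = np.array([1.0, 0.05, 0.0, 0.0])
print(retrieve_images(speech, images))  # image 0, then its near-duplicate 2
```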
[2022-11-03] Today's NLP TOE: A Grid-Tagging Discontinuous NER Model Enhanced by Embedding Tag/Word Relations and More Fine-Grained Tags. So far, discontinuous named entity recognition (NER) has received increasing research attention, and many related methods have emerged, such as hypergraph-based methods, span-based methods, and sequence-to-sequence (Seq2Seq) methods. However, these methods more or less suffer from so.. 2022. 11. 3.
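The grid-tagging idea behind such models can be illustrated with a deliberately simplified decoder: a word-pair grid marks which word continues which entity, so entities may skip over intervening words. The tag scheme below (a head set plus a successor grid) is a hypothetical simplification for illustration, not the exact TOE tag set.

```python
def decode_entities(n, heads, succ):
    """Decode (possibly discontinuous) entities from a word-relation grid.

    `heads` is the set of entity-starting word indices, and
    succ[i][j] = True means word j is the next word of the same entity
    after word i (j need not be i + 1, so entities may be
    discontinuous). Returns each entity as a list of word indices.
    """
    entities = []
    for h in sorted(heads):
        ent, cur = [h], h
        while True:
            nxt = next((j for j in range(n) if succ[cur][j]), None)
            if nxt is None:
                break
            ent.append(nxt)
            cur = nxt
        entities.append(ent)
    return entities

# "severe pain in left and right shoulders": the entity
# "pain ... shoulders" skips the words in between.
n = 7
heads = {1}                     # entity starts at "pain" (index 1)
succ = [[False] * n for _ in range(n)]
succ[1][6] = True               # "pain" -> "shoulders" (discontinuous link)
print(decode_entities(n, heads, succ))  # [[1, 6]]
```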
[2022-11-03] Today's NLP Joint Audio/Text Training for Transformer Rescorer of Streaming Speech Recognition. Recently, there has been an increasing interest in two-pass streaming end-to-end speech recognition (ASR) that incorporates a 2nd-pass rescoring model on top of the conventional 1st-pass streaming ASR model to improve recognition accuracy while keeping latency low. One of the latest 2nd-pass rescoring models, Trans.. 2022. 11. 3.
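The two-pass scheme described in this teaser can be sketched as follows: the streaming first pass emits an n-best list with scores, a rescorer assigns a second score to each hypothesis, and the two are interpolated. The interpolation weight `lam` and the toy scores are assumptions for illustration, not values from the paper.

```python
def rescore(nbest, lam=0.5):
    """Pick the best hypothesis after 2nd-pass rescoring.

    Each hypothesis is (text, first_pass_logprob, rescorer_logprob);
    the final score is a weighted sum of the two log-probabilities and
    the highest-scoring hypothesis text is returned.
    """
    return max(nbest, key=lambda h: (1 - lam) * h[1] + lam * h[2])[0]

nbest = [
    ("recognize speech",   -1.2, -0.4),  # rescorer strongly prefers this
    ("wreck a nice beach", -1.0, -3.0),  # first pass slightly prefers this
]
print(rescore(nbest))  # recognize speech
```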
[2022-11-02] Today's NLP Questioning the Validity of Summarization Datasets and Improving Their Factual Consistency. The topic of summarization evaluation has recently attracted a surge of attention due to the rapid development of abstractive summarization systems. However, the formulation of the task is rather ambiguous; neither the linguistics nor the natural language processing community has succeeded in giving a mutua.. 2022. 11. 2.