[2023-03-06] Today's NLP
Language-Universal Adapter Learning with Knowledge Distillation for End-to-End Multilingual Speech Recognition
In this paper, we propose a language-universal adapter learning framework based on a pre-trained model for end-to-end multilingual automatic speech recognition (ASR). For acoustic modeling, the wav2vec 2.0 pre-trained model is fine-tuned by inserting language-specific and language-unive..
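The abstract describes fine-tuning a frozen wav2vec 2.0 encoder by inserting both language-specific and language-universal adapters. Below is a minimal PyTorch sketch of that adapter pattern, not the paper's implementation: the names (Adapter, AdaptedLayer), the bottleneck width, and the ReLU activation are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

class AdaptedLayer(nn.Module):
    """A frozen pre-trained layer followed by a shared (language-universal)
    adapter and one adapter per language; only the adapters are trained."""
    def __init__(self, frozen_layer: nn.Module, dim: int, languages: list[str]):
        super().__init__()
        self.layer = frozen_layer
        for p in self.layer.parameters():
            p.requires_grad = False          # keep the pre-trained weights fixed
        self.universal = Adapter(dim)
        self.per_language = nn.ModuleDict({lang: Adapter(dim) for lang in languages})

    def forward(self, x: torch.Tensor, lang: str) -> torch.Tensor:
        h = self.layer(x)
        return self.per_language[lang](self.universal(h))

# toy usage: a linear layer stands in for one wav2vec 2.0 transformer block
layer = AdaptedLayer(nn.Linear(768, 768), dim=768, languages=["en", "ko"])
out = layer(torch.randn(2, 10, 768), lang="en")   # (batch, time, dim)
```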
[2023-03-05] Today's NLP
Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages
We introduce the Universal Speech Model (USM), a single large model that performs automatic speech recognition (ASR) across 100+ languages. This is achieved by pre-training the encoder of the model on a large unlabeled multilingual dataset of 12 million (M) hours spanning over 300 languages, and fine-tuning on a smaller labele..
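The recipe stated in the abstract is two-stage: self-supervised encoder pre-training on unlabeled audio, then supervised fine-tuning on a smaller labeled set. A hedged sketch of the fine-tuning stage with a CTC head follows; the toy encoder, the feature dimensions, and the CTCFineTuner name are assumptions for illustration, not USM's actual architecture.

```python
import torch
import torch.nn as nn

class CTCFineTuner(nn.Module):
    """A pre-trained speech encoder topped with a linear CTC head."""
    def __init__(self, encoder: nn.Module, dim: int, vocab: int):
        super().__init__()
        self.encoder = encoder             # assumed pre-trained on unlabeled audio
        self.head = nn.Linear(dim, vocab)  # index 0 reserved for the CTC blank

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(feats)).log_softmax(-1)

# toy stand-in for a self-supervised encoder; USM uses a far larger model
dim, vocab = 256, 32
encoder = nn.Sequential(nn.Linear(80, dim), nn.ReLU())
model = CTCFineTuner(encoder, dim, vocab)

feats = torch.randn(50, 2, 80)              # (time, batch, mel bins)
targets = torch.randint(1, vocab, (2, 12))  # labels 1..vocab-1 (0 = blank)
log_probs = model(feats)                    # (time, batch, vocab)
loss = nn.CTCLoss()(log_probs, targets,
                    input_lengths=torch.full((2,), 50),
                    target_lengths=torch.full((2,), 12))
```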
[2023-03-05] Today's NLP
Adopting the Multi-answer Questioning Task with an Auxiliary Metric for Extreme Multi-label Text Classification Utilizing the Label Hierarchy
Extreme multi-label text classification utilizes the label hierarchy to partition extreme labels into multiple label groups, turning the task into simple multi-group multi-label classification tasks. Current research encodes labels as a vector with fixed l..
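The key move in this abstract is partitioning an extreme label set into groups via the label hierarchy, then scoring each group with its own multi-label head. A small sketch under that reading, with an invented toy hierarchy; parent and GroupedClassifier are hypothetical names, not the paper's code.

```python
import torch
import torch.nn as nn
from collections import defaultdict

# hypothetical hierarchy: leaf label -> parent node
parent = {"nlp": "cs", "cv": "cs", "optics": "physics", "astro": "physics"}

# partition the extreme label set into groups via the hierarchy
groups = defaultdict(list)
for leaf, p in parent.items():
    groups[p].append(leaf)

class GroupedClassifier(nn.Module):
    """One independent sigmoid multi-label head per label group."""
    def __init__(self, dim: int, groups: dict[str, list[str]]):
        super().__init__()
        self.heads = nn.ModuleDict(
            {g: nn.Linear(dim, len(labels)) for g, labels in groups.items()})

    def forward(self, x: torch.Tensor) -> dict[str, torch.Tensor]:
        # each group is scored separately: multi-group multi-label classification
        return {g: torch.sigmoid(head(x)) for g, head in self.heads.items()}

model = GroupedClassifier(dim=16, groups=groups)
scores = model(torch.randn(4, 16))   # {"cs": (4, 2), "physics": (4, 2)}
```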
[2023-03-04] Today's NLP
Semiparametric Language Models Are Scalable Continual Learners
Semiparametric language models (LMs) have shown promise in continuously learning from new text data by combining a parameterized neural LM with a growable non-parametric memory for memorizing new content. However, conventional semiparametric LMs will eventually become prohibitively expensive to compute and store if they are applied to continua..
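A semiparametric LM in the sense described here pairs a parametric next-token distribution with a growable non-parametric memory, typically by interpolating the two at inference time (as in kNN-LM). A minimal sketch of that interpolation, assuming a flat tensor datastore of (hidden state, next token) pairs; the function name and the fixed mixing weight lam are illustrative assumptions.

```python
import torch

def semiparametric_probs(lm_logits, query, keys, values, k=4, lam=0.25):
    """Mix a parametric LM distribution with a kNN distribution over a
    growable datastore of (hidden state, next token) entries."""
    p_lm = lm_logits.softmax(-1)                   # (vocab,)
    dist = ((keys - query) ** 2).sum(-1)           # squared L2 to every entry
    knn = dist.topk(k, largest=False)              # k nearest memory entries
    w = (-knn.values).softmax(-1)                  # closer entries weigh more
    p_knn = torch.zeros_like(p_lm)
    p_knn.scatter_add_(0, values[knn.indices], w)  # mass on retrieved tokens
    return lam * p_knn + (1 - lam) * p_lm

vocab, dim, n = 10, 8, 100
keys = torch.randn(n, dim)                 # memory grows by appending rows
values = torch.randint(0, vocab, (n,))     # next token seen at each entry
probs = semiparametric_probs(torch.randn(vocab), torch.randn(dim), keys, values)
```

The abstract's scaling concern falls out of this structure: the datastore only ever grows, so lookup and storage costs grow with it unless the memory is pruned or compressed.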