
All posts (599)

[2023-01-11] Today's NLP ERNIE 3.0 Tiny: Frustratingly Simple Method to Improve Task-Agnostic Distillation Generalization Task-agnostic knowledge distillation attempts to address the problem of deploying large pretrained language models in resource-constrained scenarios by compressing a large pretrained model, called the teacher, into a smaller one, called the student, such that the student can be directly finetuned on downstream tasks.. 2023. 1. 11.
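The mechanics this excerpt describes, a frozen teacher whose softened output distribution the student is trained to match, fit in a few lines. Below is a minimal sketch of that soft-label objective, assuming a generic PyTorch setup; the temperature, vocabulary size, and random logits are illustrative stand-ins, not the ERNIE 3.0 Tiny implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL between the student's and the frozen teacher's softened distributions."""
    s = F.log_softmax(student_logits / temperature, dim=-1)
    t = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(s, t, reduction="batchmean") * temperature ** 2

# Hypothetical shapes: batch of 8 positions over a 30522-token vocabulary.
teacher_logits = torch.randn(8, 30522)                      # teacher stays frozen
student_logits = torch.randn(8, 30522, requires_grad=True)  # only this is trained
distillation_loss(student_logits, teacher_logits).backward()
```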
[2023-01-10] Today's NLP Causal Categorization of Mental Health Posts using Transformers With recent developments in the digitization of clinical psychology, the NLP research community has revolutionized the field of mental health detection on social media. Existing research in mental health analysis revolves around cross-sectional studies that classify users' intent on social media. For in-depth analysis, we investigate existing.. 2023. 1. 10.
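For the transformer-based post classification this entry mentions, the Hugging Face pipeline API is the usual entry point. The sketch below is hedged: the checkpoint is a generic public sentiment model standing in for a mental-health categorizer, not the model from the paper.

```python
from transformers import pipeline

# Stand-in checkpoint; the paper's actual classifier is not referenced here.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

posts = ["I haven't slept properly in weeks and everything feels pointless."]
for result in classifier(posts):
    print(result["label"], round(result["score"], 3))
```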
[2023-01-09] Today's NLP HIT-SCIR at MMNLU-22: Consistency Regularization for Multilingual Spoken Language Understanding Multilingual spoken language understanding (SLU) consists of two sub-tasks, namely intent detection and slot filling. To improve the performance of these two sub-tasks, we propose to use consistency regularization based on a hybrid data augmentation strategy. The consistency regularization enforces the.. 2023. 1. 9.
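In its generic form, the consistency term this entry refers to is a divergence between the model's predictions on an example and on its augmented copy. Here is a minimal sketch of that idea, assuming intent logits from the same model on two views of an utterance; it illustrates the general objective, not HIT-SCIR's exact formulation.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a, logits_b):
    """Symmetric KL pushing the two predictive distributions toward each other."""
    p = F.log_softmax(logits_a, dim=-1)
    q = F.log_softmax(logits_b, dim=-1)
    return 0.5 * (
        F.kl_div(p, q, log_target=True, reduction="batchmean")
        + F.kl_div(q, p, log_target=True, reduction="batchmean")
    )

# Hypothetical shapes: 4 utterances, 12 intent classes.
logits_orig = torch.randn(4, 12)  # original utterance
logits_aug = torch.randn(4, 12)   # augmented (e.g. code-switched) copy
print(consistency_loss(logits_orig, logits_aug))
```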
[2023-01-08] Today's NLP Reprogramming Pretrained Language Models for Protein Sequence Representation Learning Machine learning-guided solutions for protein learning tasks have made significant headway in recent years. However, success in scientific discovery tasks is limited by the accessibility of well-defined and labeled in-domain data. To tackle the low-data constraint, recent adaptations of deep learning models pretrained.. 2023. 1. 8.
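Model reprogramming, the technique named in this entry, learns an input mapping from the new domain's vocabulary into a frozen pretrained model's embedding space and trains only that mapping. The sketch below assumes illustrative sizes (25 residue types, a 30522 x 768 LM embedding table) and is one plausible reading of the idea, not the paper's code.

```python
import torch
import torch.nn as nn

class ReprogrammedEmbedding(nn.Module):
    """Maps protein residue ids into a frozen LM's token embedding space."""

    def __init__(self, lm_embedding: nn.Embedding, num_residues=25):
        super().__init__()
        self.lm_weight = lm_embedding.weight.detach()  # frozen LM table
        # Learnable mixing weights: each residue becomes a convex
        # combination of the pretrained token embeddings.
        self.theta = nn.Parameter(
            0.01 * torch.randn(num_residues, self.lm_weight.size(0))
        )

    def forward(self, residue_ids):
        residue_table = torch.softmax(self.theta, dim=-1) @ self.lm_weight
        return residue_table[residue_ids]

pretrained = nn.Embedding(30522, 768)  # stands in for a real LM's embeddings
embed = ReprogrammedEmbedding(pretrained)
seq = torch.randint(0, 25, (2, 128))   # batch of tokenized protein sequences
print(embed(seq).shape)                # torch.Size([2, 128, 768])
```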