[2023-02-16] Today's NLP — SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains. Prompting pre-trained language models leads to promising results across natural language processing tasks, but is less effective in low-resource domains because of the domain gap between the pre-training data and the downstream task. In this work, we bridge this gap with a novel and ligh.. 2023. 2. 16.
[2023-02-15] Today's NLP — Towards Agile Text Classifiers for Everyone. Text-based safety classifiers are widely used for content moderation and, increasingly, to tune generative language model behavior, a topic of growing concern for the safety of digital assistants and chatbots. However, different policies require different classifiers, and safety policies themselves improve through iteration and adaptation. This paper intro.. 2023. 2. 15.
[2023-02-14] Today's NLP — Translating Natural Language to Planning Goals with Large-Language Models. Recent large language models (LLMs) have demonstrated remarkable performance on a variety of natural language processing (NLP) tasks, leading to intense excitement about their applicability across various domains. Unfortunately, recent work has also shown that LLMs are unable to perform accurate reasoning or solve plannin.. 2023. 2. 14.
[2023-02-13] Today's NLP — Robust Question Answering against Distribution Shifts with Test-Time Adaptation: An Empirical Study. A deployed question answering (QA) model can easily fail when the test data has a distribution shift relative to the training data. Robustness tuning (RT) methods have been widely studied to improve model robustness against distribution shifts before deployment. However, can we improve a mod.. 2023. 2. 13.