
[2022-11-14] Today's NLP - BERT on a Data Diet: Finding Important Examples by Gradient-Based Pruning. Current pre-trained language models rely on large datasets for achieving state-of-the-art performance. However, past research has shown that not all examples in a dataset are equally important during training. In fact, it is sometimes possible to prune a considerable fraction of the training set while maintaining the test ... 2022. 11. 14.
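The excerpt cuts off before the method details, but as a rough illustration of gradient-based example pruning in general (not necessarily the scoring rule this particular paper uses), one can rank training examples by the norm of the loss gradient and keep only the highest-scoring fraction. The model interface, example format, and `keep_fraction` below are illustrative assumptions.

```python
# Hedged sketch: rank training examples by a per-example gradient-norm score
# and keep only the top fraction. A generic illustration of gradient-based
# data pruning, not the paper's exact recipe.
import torch
from torch.nn.functional import cross_entropy

def grad_norm_score(model, input_ids, attention_mask, label):
    """Score one example by the gradient norm of its loss w.r.t. the output
    logits (a cheap proxy for the full parameter-gradient norm)."""
    logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    loss = cross_entropy(logits, label)
    (grad,) = torch.autograd.grad(loss, logits)
    return grad.norm().item()

def prune_dataset(model, examples, keep_fraction=0.5):
    """Keep the `keep_fraction` of examples with the largest scores."""
    scores = [grad_norm_score(model, ex["input_ids"], ex["attention_mask"], ex["label"])
              for ex in examples]
    ranked = sorted(range(len(examples)), key=lambda i: scores[i], reverse=True)
    kept = ranked[: int(len(examples) * keep_fraction)]
    return [examples[i] for i in kept]
```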
[2022-11-13] Today's NLP - An Inclusive Notion of Text. Natural language processing researchers develop models of grammar, meaning and human communication based on written text. Due to task and data differences, what is considered text can vary substantially across studies. A conceptual framework for systematically capturing these differences is lacking. We argue that clarity on the notion of text is crucial for reproducib... 2022. 11. 13.
[2022-11-12] Today's NLP - Exploring Robustness of Prefix Tuning in Noisy Data: A Case Study in Financial Sentiment Analysis. The invention of transformer-based models such as BERT, GPT, and RoBERTa has enabled researchers and financial companies to fine-tune these powerful models and use them in different downstream tasks to achieve state-of-the-art performance. Recently, a lightweight alternative (approximately 0.1% - 3% ... 2022. 11. 12.
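The excerpt describes prefix tuning as a lightweight alternative that trains roughly 0.1% - 3% of the parameters. As a back-of-the-envelope sketch (the layer count, hidden size, prefix length, and base parameter count below are assumptions, not figures from the paper), freezing the base model and learning only per-layer prefix key/value vectors lands in that range.

```python
# Hedged sketch: estimate the trainable-parameter fraction of prefix tuning
# for a BERT-base-sized model. All numbers are illustrative assumptions.
def prefix_tuning_fraction(n_layers=12, d_model=768, prefix_len=20,
                           base_params=110_000_000):
    # Each layer learns `prefix_len` prefix vectors for both keys and values.
    prefix_params = n_layers * 2 * prefix_len * d_model
    return prefix_params / base_params

if __name__ == "__main__":
    frac = prefix_tuning_fraction()
    print(f"trainable fraction ~ {frac:.2%}")  # about 0.3% for these settings
```

For these illustrative settings the trainable fraction comes out around 0.3%, consistent with the 0.1% - 3% range quoted in the excerpt.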
[2022-11-11] Today's NLP - Evaluating and Improving Context Attention Distribution on Multi-Turn Response Generation using Self-Contained Distractions. Despite the rapid progress of open-domain generation-based conversational agents, most deployed systems treat dialogue contexts as single turns, while systems dealing with multi-turn contexts are less studied. There is a lack of a reliable metric for evaluating multi-turn m... 2022. 11. 11.