
Papers (572)

[2023-05-20] Today's NLP NollySenti: Leveraging Transfer Learning and Machine Translation for Nigerian Movie Sentiment Classification Africa has over 2000 indigenous languages, but they are under-represented in NLP research due to a lack of datasets. In recent years, there has been progress in developing labeled corpora for African languages. However, they are often available in a single domain and may not generalize to o.. 2023. 5. 20.
[2023-05-19] Today's NLP M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models Large language models have recently made tremendous progress in a variety of aspects, e.g., cross-task generalization and instruction following. Comprehensively evaluating the capability of large language models across multiple tasks is of great importance. In this paper, we propose M3KE, a Massiv.. 2023. 5. 19.
[2023-05-18] Today's NLP Life of PII -- A PII Obfuscation Transformer Protecting sensitive information is crucial in today's world of Large Language Models (LLMs) and data-driven services. One common method used to preserve privacy is applying data perturbation techniques to reduce the overreaching utility of (sensitive) Personally Identifiable Information (PII) data while maintaining its statistical and semantic properties. .. 2023. 5. 18.
[2023-05-17] Today's NLP Recyclable Tuning for Continual Pre-training Continual pre-training is the paradigm in which pre-trained language models (PLMs) continually acquire fresh knowledge from growing data and are gradually upgraded. Before an upgraded PLM is released, we may have tuned the original PLM for various tasks and stored the adapted weights. However, when tuning the upgraded PLM, these outdated adapted weights .. 2023. 5. 17.