
Papers (572)

[2023-10-09] Today's NLP: Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! Abstract: Optimizing large language models (LLMs) for downstream use cases often involves the customization of pre-trained LLMs through further fine-tuning. Meta's open release of Llama models and OpenAI's APIs for fine-tuning GPT-3.5 Turbo on custom datasets also encourage this practice. But what are the s..
[2023-10-08] Today's NLP: Evaluating Self-Supervised Speech Representations for Indigenous American Languages Abstract: The application of self-supervision to speech representation learning has garnered significant interest in recent years, due to its scalability to large amounts of unlabeled data. However, much progress, both in terms of pre-training and downstream evaluation, has remained concentrated in monolingual mod..
[2023-10-07] Today's NLP: The North System for Formosa Speech Recognition Challenge 2023 Abstract: This report provides a concise overview of the proposed North system, which aims to achieve automatic word/syllable recognition for Taiwanese Hakka (Sixian). The report outlines three key components of the system: the acquisition, composition, and utilization of the training data; the architecture of the model; and the hardw..
[2023-10-05] Today's NLP: Unveiling the Pitfalls of Knowledge Editing for Large Language Models Abstract: As the cost associated with fine-tuning Large Language Models (LLMs) continues to rise, recent research efforts have pivoted towards developing methodologies to edit implicit knowledge embedded within LLMs. Yet, there is still a dark cloud lingering overhead: will knowledge editing trigger a butterfly effect? Since it ..