
페이퍼572

[2023-06-19] Today's Natural Language Processing
Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models
The wide applicability and adaptability of generative large language models (LLMs) have enabled their rapid adoption. While the pre-trained models can perform many tasks, such models are often fine-tuned to improve their performance on various downstream applications. However, this leads to issues over violat… (2023. 6. 19.)

[2023-06-18] Today's Natural Language Processing
Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models
The wide applicability and adaptability of generative large language models (LLMs) have enabled their rapid adoption. While the pre-trained models can perform many tasks, such models are often fine-tuned to improve their performance on various downstream applications. However, this leads to issues over violat… (2023. 6. 18.)

[2023-06-18] Today's Natural Language Processing
CMMLU: Measuring massive multitask language understanding in Chinese
As the capabilities of large language models (LLMs) continue to advance, evaluating their performance becomes increasingly crucial and challenging. This paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese benchmark that covers various subjects, including natural science, social sciences, engineering, and… (2023. 6. 18.)

[2023-06-17] Today's Natural Language Processing
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Theory of Mind
Large Language Models (LLMs) perform complex reasoning by generating explanations for their predictions. However, a complementary goal of explanations is to also communicate useful knowledge that improves weaker agents. Hence, we investigate whether LLMs also make good teachers for weaker agents. In… (2023. 6. 17.)