Papers (572 posts)

[2023-07-15] Today's NLP — Parmesan: mathematical concept extraction for education. Mathematics is a highly specialized domain with its own unique set of challenges, and it has seen limited study in natural language processing. However, mathematics is used in a wide variety of fields, and multidisciplinary research in many different domains often relies on an understanding of mathematical concepts. To aid researchers coming fr..
[2023-07-15] Today's NLP — mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs. Modular vision-language models (Vision-LLMs) align pretrained image encoders with (pretrained) large language models (LLMs), a computationally far more efficient alternative to end-to-end training of large vision-language models from scratch, which is prohibitively expensive for most. Vision-LLMs instead post-hoc condition ..
[2023-07-14] Today's NLP — Enhancing Portuguese Sign Language Animation with Dynamic Timing and Mouthing. Current signing avatars are often described as unnatural because they cannot accurately reproduce all the subtleties of the synchronized body behaviors of a human signer. In this paper, we propose a new dynamic approach for transitions between signs, focusing on mouthing animations for Portuguese Sign Language. Although native ..
[2023-07-13] Today's NLP — SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization. Finetuning large language models helps improve results for domain-specific use cases. End-to-end finetuning of large language models is time- and resource-intensive and has high storage requirements for storing the finetuned version of the model. Parameter Efficient Fine Tuning (PEFT) methods addres..
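The core idea behind LoRA, the PEFT method this abstract highlights, can be illustrated without any deep-learning framework: the pretrained weight matrix W stays frozen, and only a low-rank update B·A is trained, cutting the number of trainable parameters dramatically. The sketch below is a minimal, illustrative NumPy version (dimensions and rank are assumed for the example, not taken from the paper):

```python
import numpy as np

# Minimal sketch of LoRA (Low-Rank Adaptation): instead of updating the
# full weight W (d_out x d_in), train two small factors B (d_out x r)
# and A (r x d_in); the effective weight is W + B @ A.
d_in, d_out, r = 768, 768, 8          # assumed hidden size and rank

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01 # trainable low-rank factor
B = np.zeros((d_out, r))              # zero init: update starts at zero

def lora_forward(x):
    # Frozen path plus the trainable low-rank update path.
    return x @ W.T + x @ (B @ A).T

full_params = d_out * d_in            # what full finetuning would train
lora_params = r * (d_in + d_out)      # what LoRA trains instead
print(full_params, lora_params)       # 589824 12288 (~2%)
```

Because B is initialized to zero, the model's behavior is unchanged at the start of finetuning, and only the small A/B matrices need to be stored per task — which is the storage advantage the abstract alludes to.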