

[2023-10-16] Today's NLP LLM-augmented Preference Learning from Natural Language Abstract: Finding preferences expressed in natural language is an important but challenging task. State-of-the-art (SotA) methods leverage transformer-based models such as BERT, RoBERTa, etc., and graph neural architectures such as graph attention networks. Since Large Language Models (LLMs) are equipped to deal with larger context lengths and.. 2023. 10. 16.
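As a minimal sketch of the kind of preference extraction the abstract contrasts encoder models with, the snippet below classifies a natural-language utterance into candidate preference labels with an off-the-shelf transformer. The model name, utterance, and label set are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: extract an expressed preference from free-form text with a
# pretrained transformer (zero-shot NLI classification). Illustrative only.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # assumed off-the-shelf NLI model
)

utterance = "I'd rather take the scenic route than the highway, unless it's raining."
candidate_labels = [
    "prefers the scenic route",
    "prefers the highway",
    "no clear preference",
]

result = classifier(utterance, candidate_labels)
# Treat the highest-scoring label as the expressed preference.
print(result["labels"][0], round(result["scores"][0], 3))
```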
[2023-10-15] Today's NLP Tree-Planner: Efficient Close-loop Task Planning with Large Language Models Abstract: This paper studies closed-loop task planning, which refers to the process of generating a sequence of skills (a plan) to accomplish a specific goal while adapting the plan based on real-time observations. Recently, prompting Large Language Models (LLMs) to generate actions iteratively has become a prevalent parad.. 2023. 10. 15.
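The abstract's "prompting LLMs to generate actions iteratively" paradigm can be sketched as a simple loop: the model proposes one skill at a time, the environment returns an observation, and the growing history is fed back into the next prompt. In the sketch below, `llm` and `env` are hypothetical stand-ins; this is the generic closed-loop baseline, not the paper's Tree-Planner method.

```python
# Sketch of closed-loop task planning via iterative LLM prompting.
# `llm` is any callable prompt -> text; `env.step(action)` returns an observation.
from typing import Callable

def closed_loop_plan(goal: str,
                     llm: Callable[[str], str],
                     env,
                     max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            f"History so far: {history}\n"
            "Next skill to execute (or DONE):"
        )
        action = llm(prompt).strip()
        if action == "DONE":
            break
        observation = env.step(action)           # execute and observe
        history.append(f"{action} -> {observation}")  # feed back into next prompt
    return history
```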
[2023-10-14] Today's NLP DistillSpec: Improving Speculative Decoding via Knowledge Distillation Abstract: Speculative decoding (SD) accelerates large language model inference by employing a faster draft model for generating multiple tokens, which are then verified in parallel by the larger target model, resulting in text generated according to the target model distribution. However, identifying a compact draft model .. 2023. 10. 14.
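A toy sketch of the speculative decoding step the abstract summarizes: the small draft model proposes a block of tokens, and the larger target model checks them in one pass. This uses a simplified greedy-match acceptance rule rather than the exact rejection-sampling scheme, and `draft_model` / `target_model` are hypothetical callables, not DistillSpec itself.

```python
# Toy speculative decoding step (simplified greedy-match acceptance).
# draft_model(prefix) -> next token; target_model(prefix, drafts) -> the
# target's greedy token at each of the k drafted positions.
def speculative_step(prefix, draft_model, target_model, k=4):
    # 1) Draft model autoregressively proposes k candidate tokens.
    draft_tokens = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_model(ctx)
        draft_tokens.append(t)
        ctx.append(t)

    # 2) Target model scores all k positions in a single (parallel) pass.
    target_tokens = target_model(list(prefix), draft_tokens)

    # 3) Accept the longest agreeing prefix; at the first disagreement,
    #    keep the target's token instead and stop.
    accepted = []
    for d, t in zip(draft_tokens, target_tokens):
        if d == t:
            accepted.append(d)
        else:
            accepted.append(t)
            break
    return list(prefix) + accepted
```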
[2023-10-13] Today's NLP Can We Edit Multimodal Large Language Models? Abstract: In this paper, we focus on editing Multimodal Large Language Models (MLLMs). Compared to editing single-modal LLMs, multimodal model editing is more challenging, demanding a higher level of scrutiny and careful consideration in the editing process. To facilitate research in this area, we construct a new benchmark, dubbed MMEdit, for edit.. 2023. 10. 13.