Natural Language Processing (572)

[2023-09-25] Today's NLP: Accelerating Thematic Investment with Prompt Tuned Pretrained Language Models. Abstract: Prompt Tuning is emerging as a scalable and cost-effective method to fine-tune Pretrained Language Models (PLMs). This study benchmarks the performance and computational efficiency of Prompt Tuning and baseline methods on a multi-label text classification task. This is applied to the use case of classifying co..
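The excerpt describes prompt tuning: the PLM's weights stay frozen and only a small set of virtual prompt-token embeddings is trained, which is what makes it cheap at scale. Below is a minimal sketch of that general recipe using Hugging Face's peft library; the backbone, label count, and hyperparameters are placeholders and not the paper's actual setup.

```python
# Minimal prompt-tuning sketch for multi-label classification.
# Backbone, num_labels, and num_virtual_tokens are illustrative assumptions.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PromptTuningConfig, TaskType, get_peft_model

base = "roberta-base"  # placeholder backbone, not from the paper
model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=8,  # hypothetical number of thematic labels
    problem_type="multi_label_classification",
)
tokenizer = AutoTokenizer.from_pretrained(base)

# Freeze the PLM; only the virtual prompt-token embeddings are trained.
config = PromptTuningConfig(task_type=TaskType.SEQ_CLS, num_virtual_tokens=20)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # prints the tiny trainable fraction
```

The printout makes the cost argument concrete: the trainable parameters are a fraction of a percent of the full model, which is why prompt tuning is pitched as scalable relative to full fine-tuning.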
[2023-09-24] Today's NLP: InstructERC: Reforming Emotion Recognition in Conversation with a Retrieval Multi-task LLMs Framework. Abstract: The development of emotion recognition in dialogue (ERC) has been consistently hindered by the complexity of pipeline designs, leading to ERC models that often overfit to specific datasets and dialogue patterns. In this study, we propose a novel approach, namely InstructERC, to reformul..
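InstructERC recasts ERC as instruction following by an LLM, with retrieved demonstrations added to the prompt. The sketch below only illustrates that general pattern of assembling a retrieval-augmented instruction prompt; the template, demonstration format, and label set are hypothetical and not taken from the paper.

```python
# Hypothetical retrieval-augmented instruction prompt for ERC.
# Template and labels are assumptions, not InstructERC's implementation.
from typing import List, Tuple

def build_erc_prompt(dialogue: List[str],
                     target_utterance: str,
                     demos: List[Tuple[str, str]],
                     labels: List[str]) -> str:
    """Compose an instruction prompt from retrieved (utterance, emotion)
    demonstrations, the dialogue context, and the query utterance."""
    demo_block = "\n".join(f"Utterance: {u}\nEmotion: {e}" for u, e in demos)
    context = "\n".join(dialogue)
    return (
        f"Classify the speaker's emotion as one of: {', '.join(labels)}.\n\n"
        f"Examples:\n{demo_block}\n\n"
        f"Dialogue:\n{context}\n\n"
        f"Utterance: {target_utterance}\nEmotion:"
    )

prompt = build_erc_prompt(
    dialogue=["A: I lost my keys again.", "B: Oh no, not again!"],
    target_utterance="A: I lost my keys again.",
    demos=[("I can't believe this happened!", "anger")],
    labels=["anger", "joy", "sadness", "neutral"],
)
print(prompt)
```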
[2023-09-23] Today's NLP: SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References. Abstract: Evaluation of QA systems is very challenging and expensive, with the most reliable approach being human annotations of correctness of answers for questions. Recent works (AVA, BEM) have shown that transformer LM encoder based similarity metrics transfer well for QA evaluation, but they are li..
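The excerpt points to encoder-based similarity metrics for judging answer correctness, here against multiple positive and negative references. A rough sketch of that idea with sentence-transformers follows; the decision rule (nearest positive beats nearest negative) and the encoder checkpoint are assumptions, not SQUARE's actual formulation.

```python
# Illustrative encoder-similarity QA evaluation with multiple positive
# and negative references; scoring rule is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder

def is_correct(candidate: str, positives: list, negatives: list) -> bool:
    cand = model.encode(candidate, convert_to_tensor=True)
    pos = model.encode(positives, convert_to_tensor=True)
    neg = model.encode(negatives, convert_to_tensor=True)
    # Accept the answer if it is closer to some correct reference
    # than to any incorrect one.
    return util.cos_sim(cand, pos).max().item() > util.cos_sim(cand, neg).max().item()

print(is_correct("Paris", ["Paris, France", "the city of Paris"], ["London"]))
```

Using negative references in addition to positives is what lets a similarity metric penalize plausible-looking but wrong answers, which a positives-only threshold cannot.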
[2023-09-22] Today's NLP: Grounded Complex Task Segmentation for Conversational Assistants. Abstract: Following complex instructions in conversational assistants can be quite daunting due to the shorter attention and memory spans when compared to reading the same instructions. Hence, when conversational assistants walk users through the steps of complex tasks, there is a need to structure the task into manageable pieces of..
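The paper is about splitting complex written instructions into steps sized for a voice assistant to read out. As a point of reference only, here is a naive heuristic baseline (sentence splitting plus greedy merging up to a length budget); it does not reproduce the paper's grounded segmentation method.

```python
# Naive baseline: segment instruction text into steps no longer than
# max_chars. The length budget is an arbitrary illustrative choice.
import re

def segment_task(instructions: str, max_chars: int = 120) -> list:
    """Split into sentences, then greedily merge short sentences into
    steps that stay under max_chars characters."""
    sentences = re.split(r"(?<=[.!?])\s+", instructions.strip())
    steps, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_chars:
            steps.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        steps.append(current)
    return steps

recipe = ("Preheat the oven to 200C. Chop the onions. Fry them until golden. "
          "Add the sauce and simmer for ten minutes.")
for i, step in enumerate(segment_task(recipe), 1):
    print(f"Step {i}: {step}")
```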