
Category archive: all posts (599)

[2023-05-09] Today's NLP SI-LSTM: Speaker Hybrid Long-short Term Memory and Cross Modal Attention for Emotion Recognition in Conversation Emotion Recognition in Conversation (ERC) across modalities is of vital importance for a variety of applications, including intelligent healthcare, artificial intelligence for conversation, and opinion mining over chat history. The crux of ERC is to model both cross-modality and cross.. 2023. 5. 9.
[2023-05-08] Today's NLP In-context Learning as Maintaining Coherency: A Study of On-the-fly Machine Translation Using Large Language Models The phenomenon of in-context learning has typically been thought of as "learning from examples". In this work, which focuses on Machine Translation, we present a perspective of in-context learning as the desired generation task maintaining coherency with its context, i.e., the prompt.. 2023. 5. 8.
[2023-05-08] Today's NLP SemEval-2023 Task 7: Multi-Evidence Natural Language Inference for Clinical Trial Data This paper describes the results of SemEval-2023 Task 7 -- Multi-Evidence Natural Language Inference for Clinical Trial Data (NLI4CT) -- consisting of 2 tasks, a Natural Language Inference (NLI) task, and an evidence selection task on clinical trial data. The proposed challenges require multi-hop biomedical an.. 2023. 5. 8.
[2023-05-07] Today's NLP What changes when you randomly choose BPE merge operations? Not much We introduce three simple randomized variants of byte pair encoding (BPE) and explore whether randomizing the selection of merge operations substantially affects a downstream machine translation task. We focus on translation into morphologically rich languages, hypothesizing that this task may show sensitivity to the method of .. 2023. 5. 7.
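For context on the BPE entry above: standard BPE greedily merges the most frequent adjacent symbol pair at each step, and "randomizing the selection of merge operations" replaces that greedy choice with a random one. The sketch below is an illustrative toy implementation, not the paper's actual code; the uniform-random variant shown is an assumption about one plausible way to randomize the merge choice.

```python
import random
from collections import Counter

def get_pair_counts(words):
    # words: dict mapping a tuple of symbols to its corpus frequency
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(words, pair):
    # Replace every occurrence of the adjacent pair with one merged symbol.
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        key = tuple(out)
        merged[key] = merged.get(key, 0) + freq
    return merged

def learn_bpe(words, num_merges, randomize=False, seed=0):
    # randomize=False: standard greedy BPE (most frequent pair wins).
    # randomize=True: pick a merge uniformly at random among candidate
    # pairs (an illustrative randomized variant, not the paper's exact one).
    rng = random.Random(seed)
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(words)
        if not pairs:
            break
        candidates = sorted(pairs)  # sort for deterministic tie-breaking
        if randomize:
            pair = rng.choice(candidates)
        else:
            pair = max(candidates, key=pairs.get)
        merges.append(pair)
        words = merge_pair(words, pair)
    return merges
```

For example, on a tiny corpus `{("l","o","w"): 5, ("l","o","w","e","s","t"): 2}` the greedy learner first merges `("l","o")`, then `("lo","w")`; the randomized variant can pick any adjacent pair at each step, which is the choice whose downstream effect the paper measures.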