
[2023-01-27] Today's NLP: One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER. Cross-domain NER is a challenging task to address the low-resource problem in practical scenarios. Previous typical solutions mainly obtain an NER model by pre-trained language models (PLMs) with data from a rich-resource domain and adapt it to the target domain. Owing to the mismatch issue among entity types in di..
[2023-01-26] Today's NLP: Can Very Large Pretrained Language Models Learn Storytelling With A Few Examples? While pre-trained language models can generate individually fluent sentences for automatic story generation, they struggle to generate stories that are coherent, sensible and interesting. Current state-of-the-art (SOTA) story generation models explore using higher-level features such as plots or commonsense knowled..
[2023-01-25] Today's NLP: Blacks is to Anger as Whites is to Joy? Understanding Latent Affective Bias in Large Pre-trained Neural Language Models. Groundbreaking inventions and highly significant performance improvements in deep learning based Natural Language Processing are witnessed through the development of transformer based large Pre-trained Language Models (PLMs). The wide availability of unlabeled data within human..
[2023-01-24] Today's NLP: Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences. Current work on image-based story generation suffers from the fact that the existing image sequence collections do not have coherent plots behind them. We improve visual story generation by producing a new image-grounded dataset, Visual Writing Prompts (VWP). VWP contains almost 2K selected sequences of movi..