
Papers (572)

[2023-01-26] Today's NLP: Can Very Large Pretrained Language Models Learn Storytelling With A Few Examples? While pre-trained language models can generate individually fluent sentences for automatic story generation, they struggle to generate stories that are coherent, sensible and interesting. Current state-of-the-art (SOTA) story generation models explore using higher-level features such as plots or commonsense knowled.. 2023. 1. 26.
[2023-01-25] Today's NLP: Blacks is to Anger as Whites is to Joy? Understanding Latent Affective Bias in Large Pre-trained Neural Language Models Groundbreaking inventions and highly significant performance improvements in deep learning based Natural Language Processing are witnessed through the development of transformer based large Pre-trained Language Models (PLMs). The wide availability of unlabeled data within human.. 2023. 1. 25.
[2023-01-24] Today's NLP: Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences Current work on image-based story generation suffers from the fact that the existing image sequence collections do not have coherent plots behind them. We improve visual story generation by producing a new image-grounded dataset, Visual Writing Prompts (VWP). VWP contains almost 2K selected sequences of movi.. 2023. 1. 24.
[2023-01-23] Today's NLP: Adapting Multilingual Speech Representation Model for a New, Underresourced Language through Multilingual Fine-tuning and Continued Pretraining In recent years, neural models learned through self-supervised pretraining on large scale multilingual text or speech data have exhibited promising results for underresourced languages, especially when a relatively large amount of data from related langu.. 2023. 1. 23.