
Today's Natural Language Processing (572 posts)

[2023-04-25] Today's NLP — Hi Sheldon! Creating Deep Personalized Characters from TV Shows Imagine a multimodal interactive scenario in which you can see, hear, and chat with an AI-generated digital character capable of behaving like Sheldon from The Big Bang Theory, as a DEEP copy from appearance to personality. Toward this multimodal chatting scenario, we propose a novel task, named Deep Pers.. 2023. 4. 25.
[2023-04-24] Today's NLP — GPT-NER: Named Entity Recognition via Large Language Models Although large-scale Language Models (LLMs) have achieved SOTA performance on a variety of NLP tasks, their performance on NER is still significantly below supervised baselines. This is due to the gap between the two tasks: NER is a sequence labeling task in nature, while the latter is a text-generation.. 2023. 4. 24.
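The core idea sketched in this entry — recasting sequence labeling as text generation — can be illustrated with a minimal prompt-and-parse round trip. The marker convention (`@@ ... ##`), the prompt wording, and the sample model output below are illustrative assumptions, not the paper's exact setup:

```python
import re

def ner_as_generation_prompt(sentence: str, entity_type: str) -> str:
    # Recast NER as generation: ask the model to rewrite the sentence,
    # wrapping each entity of the requested type in @@ ... ## markers.
    return (
        f"Mark every {entity_type} entity in the sentence by "
        f"surrounding it with @@ and ##.\n"
        f"Sentence: {sentence}\n"
        f"Output:"
    )

def extract_entities(generated: str) -> list[str]:
    # Recover entity spans from the marked-up generation,
    # turning the free-form text back into labeled spans.
    return re.findall(r"@@(.+?)##", generated)

prompt = ner_as_generation_prompt("He flew from Seoul to Tokyo.", "location")

# Hypothetical model completion for the prompt above:
generated = "He flew from @@Seoul## to @@Tokyo##."
print(extract_entities(generated))  # ['Seoul', 'Tokyo']
```

The parsing step is what bridges the gap the excerpt describes: the model only ever produces text, and the markers let a deterministic post-processor map that text back onto token-level entity labels.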
[2023-04-23] Today's NLP — A Latent Space Theory for Emergent Abilities in Large Language Models Languages are not created randomly but rather to communicate information. There is a strong association between languages and their underlying meanings, resulting in a sparse joint distribution that is heavily peaked according to their correlations. Moreover, these peak values happen to match the marginal distribution of .. 2023. 4. 23.
[2023-04-22] Today's NLP — Analyzing FOMC Minutes: Accuracy and Constraints of Language Models This research article analyzes the language used in the official statements released by the Federal Open Market Committee (FOMC) after its scheduled meetings, to gain insight into the impact of FOMC statements on financial markets and economic forecasting. The study reveals that the FOMC is careful to avoid expressing e.. 2023. 4. 22.