페이퍼572

[2023-06-13] Today's NLP: Trapping LLM Hallucinations Using Tagged Context Prompts. Recent advances in large language models (LLMs), such as ChatGPT, have led to highly sophisticated conversation agents. However, these models suffer from "hallucinations," where the model generates false or fabricated information. Addressing this challenge is crucial, particularly with AI-driven platforms being adopted across various secto..
[2023-06-12] Today's NLP: Extensive Evaluation of Transformer-based Architectures for Adverse Drug Events Extraction. Adverse Event (ADE) extraction is one of the core tasks in digital pharmacovigilance, especially when applied to informal texts. This task has been addressed by the Natural Language Processing community using large pre-trained language models, such as BERT. Despite the great number of Transformer-based arc..
[2023-06-11] Today's NLP: Mapping Brains with Language Models: A Survey. Over the years, many researchers have seemingly made the same observation: brain and language model activations exhibit some structural similarities, enabling linear partial mappings between features extracted from neural recordings and computational language models. In an attempt to evaluate how much evidence has been accumulated for this observatio..
[2023-06-10] Today's NLP: ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases. Enabling large language models to effectively utilize real-world tools is crucial for achieving embodied intelligence. Existing approaches to tool learning have primarily relied on either extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or have utiliz..