
Today's Natural Language Processing (572 posts)

[2023-12-04] Today's NLP - LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models. Abstract: Large language models (LLMs) provide excellent text-generation capabilities, but standard prompting and generation methods generally do not lead to intentional or goal-directed agents and might necessitate considerable prompt tuning. This becomes particularly apparent in multi-turn conversations: even the be.. 2023-12-04
[2023-12-03] Today's NLP - Hubness Reduction Improves Sentence-BERT Semantic Spaces. Abstract: Semantic representations of text, i.e. representations of natural language which capture meaning by geometry, are essential for areas such as information retrieval and document grouping. High-dimensional trained dense vectors have received much attention in recent years as such representations. We investigate the structure of sema.. 2023-12-03
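The hubness phenomenon this paper addresses can be illustrated concretely: in high-dimensional embedding spaces, a few "hub" points show up in the k-nearest-neighbor lists of many other points. A minimal sketch (not the paper's code; the random vectors stand in for hypothetical sentence embeddings) measures hubness as the skewness of the k-occurrence distribution:

```python
import numpy as np

# Minimal sketch, assuming cosine similarity over unit-normalized vectors.
# Random Gaussian vectors stand in for hypothetical sentence embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
X /= np.linalg.norm(X, axis=1, keepdims=True)

k = 10
sims = X @ X.T
np.fill_diagonal(sims, -np.inf)          # exclude self-matches
# Each point's k nearest neighbors (highest cosine similarity).
nn = np.argsort(-sims, axis=1)[:, :k]

# k-occurrence: how often each point appears in others' neighbor lists.
k_occurrence = np.bincount(nn.ravel(), minlength=len(X))

# Skewness of the k-occurrence distribution; large positive skew
# means a few points are hubs that dominate neighbor lists.
z = (k_occurrence - k_occurrence.mean()) / k_occurrence.std()
print(round(float(np.mean(z ** 3)), 3))
```

Reduction techniques (e.g. centering or local scaling of similarities) aim to flatten this distribution so that neighbor lists are more evenly spread across points.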
[2023-12-02] Today's NLP - CoRec: An Easy Approach for Coordination Recognition. Abstract: In this paper, we observe and address the challenges of the coordination recognition task. Most existing methods rely on syntactic parsers to identify the coordinators in a sentence and detect the coordination boundaries. However, state-of-the-art syntactic parsers are slow and suffer from errors, especially for long and complicated s.. 2023-12-02
[2023-12-01] Today's NLP - Reinforcement Replaces Supervision: Query focused Summarization using Deep Reinforcement Learning. Abstract: Query-focused Summarization (QfS) deals with systems that generate summaries from document(s) based on a query. Motivated by the insight that Reinforcement Learning (RL) provides a generalization to Supervised Learning (SL) for Natural Language Generation, and thereby performs better (empir.. 2023-12-01