[2022-08-25] Today's NLP GenTUS: Simulating User Behaviour and Language in Task-oriented Dialogues with Generative Transformers User simulators (USs) are commonly used to train task-oriented dialogue systems (DSs) via reinforcement learning. The interactions often take place at the semantic level for efficiency, but there is still a gap from semantic actions to natural language, which causes a mismatch between training and ..
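The gap the abstract points at is that a simulator trained on dialogue acts still has to produce text the system can read. One way to picture a generative approach is a single seq2seq model that decodes the semantic action and the utterance in one sequence, so the two cannot drift apart. The sketch below illustrates that idea only; the T5 backbone, prompt format, and toy training pair are assumptions for illustration, not GenTUS's actual interface.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# One illustrative training pair (invented, not from the paper): the
# simulator reads the system's semantic act plus the user goal, and
# decodes action AND utterance in one sequence, keeping them aligned
# by construction.
src = "system: request(area) ; goal: inform(food=thai, area=south)"
tgt = "action: inform(area=south) ; utterance: I'd like a place in the south."

batch = tok(src, return_tensors="pt")
labels = tok(tgt, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss  # standard seq2seq fine-tuning loss
loss.backward()
print(float(loss))
```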
[2022-08-25] Today's NLP A Novel Multi-Task Learning Approach for Context-Sensitive Compound Type Identification in Sanskrit The phenomenon of compounding is ubiquitous in Sanskrit. It serves to achieve brevity in expressing thoughts, while simultaneously enriching the lexical and structural formation of the language. In this work, we focus on the Sanskrit Compound Type Identification (SaCTI) task, where we consider ..
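As a rough illustration of the multi-task setup such work relies on, the sketch below shares one encoder between the main compound-type classifier and an auxiliary head, and trains both with a weighted joint loss. The LSTM encoder, layer sizes, auxiliary task, and 0.5 loss weight are all placeholders, not details from the paper.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Shared encoder with two heads: a main compound-type classifier
    and an auxiliary classifier (e.g. a syntactic label)."""
    def __init__(self, vocab=1000, dim=64, n_types=4, n_aux=10):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.type_head = nn.Linear(dim, n_types)   # main task
        self.aux_head = nn.Linear(dim, n_aux)      # auxiliary task

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        pooled = h.mean(dim=1)                     # sentence representation
        return self.type_head(pooled), self.aux_head(pooled)

model = MultiTaskTagger()
x = torch.randint(0, 1000, (2, 7))                # batch of token ids
main_logits, aux_logits = model(x)
# Joint objective: weighted sum of the two cross-entropies, so the
# auxiliary signal regularizes the shared encoder.
loss = nn.functional.cross_entropy(main_logits, torch.tensor([0, 2])) \
     + 0.5 * nn.functional.cross_entropy(aux_logits, torch.tensor([1, 3]))
loss.backward()
print(main_logits.shape, aux_logits.shape)
```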
[2022-08-24] Today's NLP Few-Shot Table-to-Text Generation with Prefix-Controlled Generator Neural table-to-text generation approaches are data-hungry, limiting their adaptation for low-resource real-world applications. Previous works mostly resort to Pre-trained Language Models (PLMs) to generate fluent summaries of a table. However, they often contain hallucinated content due to the uncontrolled nature of PLMs. Moreo..
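The control knob the abstract hints at can be pictured as a prefix prepended to the linearized table, telling the generator which cells to verbalize. The minimal sketch below uses an off-the-shelf t5-small as a stand-in; the prefix format and linearization scheme are invented for illustration, and an untuned checkpoint will not actually obey the prefix without fine-tuning.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Hypothetical control prefix: which attributes the summary should cover,
# prepended to a linearized table. Illustrates prefix-style control in
# general, not the paper's actual architecture.
table = {"name": "Ada Lovelace", "born": "1815", "field": "mathematics"}
linearized = " ; ".join(f"{k} : {v}" for k, v in table.items())
prefix = "cover: name, field"

tok = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tok(f"{prefix} | table: {linearized}", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(ids[0], skip_special_tokens=True))
```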
[2022-08-24] Today's NLP Generalized Attention Mechanism and Relative Position for Transformer In this paper, we propose a generalized attention mechanism (GAM) by first suggesting a new interpretation of the self-attention mechanism of Vaswani et al. Following this interpretation, we describe the different variants of attention mechanism which together form GAM. Further, we propose a new relative position repres..
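For the relative-position part, a common reference point is the Shaw et al. (2018) formulation, where clipped pairwise distances index learned embeddings that are added into the attention scores. The sketch below implements that baseline formulation as one possible concrete instance; it is not the GAM variant the paper proposes.

```python
import torch
import torch.nn.functional as F

def relative_self_attention(x, w_q, w_k, w_v, rel_emb, max_dist):
    """Single-head self-attention with learned relative position
    embeddings added on the key side (Shaw et al., 2018 style).
    x: (seq_len, d_model); rel_emb: (2*max_dist+1, d_model)."""
    seq_len, d = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Clip pairwise distances to [-max_dist, max_dist], shift to
    # non-negative indices, and look up the embeddings.
    pos = torch.arange(seq_len)
    dist = (pos[None, :] - pos[:, None]).clamp(-max_dist, max_dist) + max_dist
    r = rel_emb[dist]                               # (seq_len, seq_len, d)
    # Content-content plus content-position attention scores.
    scores = (q @ k.T + torch.einsum('id,ijd->ij', q, r)) / d ** 0.5
    return F.softmax(scores, dim=-1) @ v

# Toy usage with random weights.
d, L, max_dist = 16, 5, 4
x = torch.randn(L, d)
params = [torch.randn(d, d) * d ** -0.5 for _ in range(3)]
rel = torch.randn(2 * max_dist + 1, d) * d ** -0.5
out = relative_self_attention(x, *params, rel, max_dist)
print(out.shape)  # torch.Size([5, 16])
```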