
[2022-11-25] Today's NLP: Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks. Recently, there has been significant progress in teaching language models to perform step-by-step reasoning to solve complex numerical reasoning tasks. Chain-of-thought prompting (CoT) is by far the state-of-the-art method for these tasks. CoT uses language models to perform both reasoning and comp.. 2022. 11. 25.
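The idea the excerpt gestures at can be illustrated briefly: whereas CoT has the model both reason and carry out the arithmetic in natural language, a Program-of-Thoughts-style setup has the model emit a short program and delegates the computation to a runtime. A minimal sketch, where `fake_lm` is a hypothetical stand-in for a real language-model call:

```python
# Program-of-Thoughts-style prompting, minimal sketch: the "model" writes
# a program, and the Python runtime (not the model) does the arithmetic.
# `fake_lm` is a hypothetical placeholder, not part of the paper's code.

def fake_lm(prompt: str) -> str:
    # A real system would query a language model here; we hard-code the
    # kind of program such a model might emit for the question below.
    return (
        "interest = 10000 * 0.05 * 3\n"
        "ans = 10000 + interest\n"
    )

def pot_answer(question: str) -> float:
    prompt = f"# Q: {question}\n# Write Python that sets `ans` to the answer.\n"
    program = fake_lm(prompt)
    scope: dict = {}
    exec(program, scope)  # the runtime executes the generated program
    return scope["ans"]

print(pot_answer(
    "A $10,000 deposit earns 5% simple interest per year. "
    "What is the balance after 3 years?"
))  # prints 11500.0
```

The separation shown here is the point of the disentangling: the model only has to produce correct reasoning steps as code, while exact numerical computation is handled by the interpreter.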
[2022-11-24] Today's NLP: HaRiM$^+$: Evaluating Summary Quality with Hallucination Risk. One of the challenges of developing a summarization model arises from the difficulty in measuring the factual inconsistency of the generated text. In this study, we reinterpret the decoder overconfidence-regularizing objective suggested in (Miao et al., 2021) as a hallucination risk measurement to better estimate the quality of genera.. 2022. 11. 24.
[2022-11-23] Today's NLP: Re-contextualizing Fairness in NLP: The Case of India. Recent research has revealed undesirable biases in NLP data and models. However, these efforts focus on social disparities in the West, and are not directly portable to other geo-cultural contexts. In this paper, we focus on NLP fairness in the context of India. We start with a brief account of the prominent axes of social disparities in India. .. 2022. 11. 23.
[2022-11-22] Today's NLP: Detecting Propaganda Techniques in Memes. Propaganda can be defined as a form of communication that aims to influence the opinions or the actions of people towards a specific goal; this is achieved by means of well-defined rhetorical and psychological devices. Propaganda, in the form we know it today, can be dated back to the beginning of the 17th century. However, it is with the advent of the In.. 2022. 11. 22.