
오늘의 자연어처리 (572)

[2023-03-02] 오늘의 자연어처리: Weighted Sampling for Masked Language Modeling. Masked Language Modeling (MLM) is widely used to pretrain language models. The standard random masking strategy in MLM causes the pre-trained language models (PLMs) to be biased toward high-frequency tokens. Representation learning of rare tokens is poor and PLMs have limited performance on downstream tasks. To alleviate this frequency bias issue, w..
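The fix the abstract points toward, masking positions with a probability tied to token frequency rather than uniformly at random, can be sketched roughly as follows. This is a minimal illustration of frequency-weighted masking in general; the weighting formula, the alpha exponent, and the toy counts are assumptions, not the paper's actual sampling scheme.

```python
import random
from collections import Counter

def weighted_masking(tokens, corpus_counts, mask_rate=0.15, alpha=0.5):
    """Choose positions to mask with probability inversely related to corpus frequency.

    Rare tokens are masked more often than under uniform random masking.
    `alpha` (an assumed knob) controls how strongly frequency shapes the weights.
    """
    # Inverse-frequency weight per position; +1 avoids division by zero for unseen tokens.
    weights = [1.0 / (corpus_counts.get(t, 0) + 1) ** alpha for t in tokens]
    n_mask = max(1, int(len(tokens) * mask_rate))

    # Sample positions without replacement, proportionally to the weights.
    positions = set()
    candidates = list(range(len(tokens)))
    while len(positions) < n_mask and candidates:
        pos = random.choices(candidates,
                             weights=[weights[i] for i in candidates], k=1)[0]
        positions.add(pos)
        candidates.remove(pos)

    return ["[MASK]" if i in positions else t for i, t in enumerate(tokens)]

# Toy corpus statistics: "the" is frequent, "serendipity" is rare and thus
# far more likely to end up masked.
corpus_counts = Counter({"the": 10000, "model": 800, "serendipity": 3})
print(weighted_masking(["the", "model", "shows", "serendipity"], corpus_counts))
```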
[2023-03-01] 오늘의 자연어처리: Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts health answer correctness. Generative pre-trained language models (GPLMs) like ChatGPT encode in the model's parameters knowledge the models observe during the pre-training phase. This knowledge is then used at inference to address the task specified by the user in their prompt. For example, for the question-answering task, the..
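As a rough illustration of the comparison the abstract sets up, here are two prompts for the same health question: one relying only on what the model memorized during pre-training, and one with a supporting passage injected into the prompt. The question, the passage, and the prompt wording are invented for illustration and are not taken from the paper.

```python
question = "Does taking vitamin C prevent the common cold?"

# Prompt that relies only on the model's parametric (pre-trained) knowledge.
bare_prompt = f"Answer yes or no, then explain briefly: {question}"

# Prompt that injects supporting knowledge; whether this helps or hurts the
# answer depends on whether the injected passage is itself correct.
evidence = ("Randomized trials indicate vitamin C does not prevent colds in the "
            "general population, though it may slightly shorten their duration.")
grounded_prompt = f"Context: {evidence}\nAnswer yes or no, then explain briefly: {question}"

for p in (bare_prompt, grounded_prompt):
    print(p, end="\n\n")  # send each prompt to the language model of your choice
```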
[2023-02-28] 오늘의 자연어처리: ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics. We introduce ProofNet, a benchmark for autoformalization and formal proving of undergraduate-level mathematics. The ProofNet benchmark consists of 371 examples, each consisting of a formal theorem statement in Lean 3, a natural language theorem statement, and a natural language proof. The problems are primarily drawn..
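To make the three-part entry format concrete, a hypothetical entry of the same shape might look like the sketch below: a natural-language statement and proof paired with a formal theorem statement in Lean 3. This example is illustrative only and is not drawn from the ProofNet benchmark.

```lean
import data.int.parity

-- Natural-language statement: the sum of two even integers is even.
-- Natural-language proof: if m = a + a and n = b + b, then m + n = (a + b) + (a + b).
-- Formal theorem statement in Lean 3; the formal proof is not part of an entry,
-- producing it is the "formal proving" task, so it is left as sorry here.
theorem even_add_of_even (m n : ℤ) (hm : even m) (hn : even n) : even (m + n) :=
sorry
```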
[2023-02-27] 오늘의 자연어처리: What makes a language easy to deep-learn? Neural networks drive the success of natural language processing. A fundamental property of natural languages is their compositional structure, allowing us to describe new meanings systematically. However, neural networks notoriously struggle with systematic generalization and do not necessarily benefit from a compositional structure in emergent communic..
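A toy contrast may clarify what "compositional structure" means here: in a compositional form-meaning mapping, an unseen combination can be named from its parts, while a holistic mapping must be memorized pair by pair. The miniature vocabulary below is invented for illustration and has nothing to do with the paper's actual experiments.

```python
# Compositional lexicon: each meaning component has its own consistent form,
# so a never-seen combination like "red square" is still predictable.
compositional = {"red": "ka", "blue": "mu", "circle": "pi", "square": "to"}

def name(color, shape):
    return compositional[color] + compositional[shape]

print(name("red", "circle"))   # kapi
print(name("red", "square"))   # kato, a novel combination, still derivable

# Holistic lexicon: every (color, shape) pair gets an arbitrary, unrelated form,
# so novel combinations cannot be generalized and must each be memorized.
holistic = {("red", "circle"): "bluk",
            ("blue", "circle"): "snar",
            ("blue", "square"): "gep"}   # ("red", "square") is simply unknown
```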