
All Posts (599)

[2022-11-10] Today's NLP - SocioProbe: What, When, and Where Language Models Learn about Sociodemographics. Pre-trained language models (PLMs) have outperformed other NLP models on a wide range of tasks. Opting for a more thorough understanding of their capabilities and inner workings, researchers have established the extent to which they capture lower-level knowledge like grammaticality, and mid-level semantic knowledge l.. 2022. 11. 10.
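The probing setup this excerpt alludes to is usually a lightweight classifier trained on frozen PLM representations, with its accuracy read as a measure of how much of the target signal a layer encodes. The sketch below illustrates that idea with placeholder arrays standing in for per-sentence PLM embeddings and sociodemographic labels; it is not the paper's data or code.

```python
# Minimal probing sketch (illustrative only): fit a lightweight classifier
# on frozen sentence representations and use its held-out accuracy as a
# proxy for how much sociodemographic signal the representations carry.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: in practice these would be frozen PLM embeddings
# (one vector per sentence from a chosen layer) and speaker attributes.
embeddings = rng.normal(size=(1000, 768))   # 1000 sentences, 768-dim vectors
labels = rng.integers(0, 2, size=1000)      # binary sociodemographic attribute

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```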
[2022-11-09] Today's NLP - How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers. The attention mechanism is considered the backbone of the widely used Transformer architecture. It contextualizes the input by computing input-specific attention matrices. We find that this mechanism, while powerful and elegant, is not as important as typically thought for pretrained langu.. 2022. 11. 9.
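For reference, the "input-specific attention matrices" mentioned in the excerpt come from scaled dot-product attention. The following is a minimal, generic NumPy sketch of that computation, not the paper's ablation code; the toy sequence shape is an assumption for illustration.

```python
# Scaled dot-product self-attention: every token mixes information from
# the whole sequence through an attention matrix computed from the input.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V, weights                      # contextualized outputs + matrix

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))                         # toy sequence: 5 tokens, 16 dims
out, attn_matrix = attention(x, x, x)                # self-attention over the input
print(attn_matrix.shape)                             # (5, 5): one weight per token pair
```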
[2022-11-08] Today's NLP - Once-for-All Sequence Compression for Self-Supervised Speech Models. The sequence length along the time axis is often the dominant factor in the computational cost of self-supervised speech models. Works have been proposed to reduce the sequence length to lower the computational cost. However, different downstream tasks have different tolerance for sequence compression, so a model that produce.. 2022. 11. 8.
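A minimal sketch of the basic operation involved: shortening a speech feature sequence along the time axis by average pooling at a fixed rate. The once-for-all scheme in the title supports variable compression rates within one model, which this toy function does not attempt, and the array shapes are illustrative assumptions.

```python
# Average-pool frames along the time axis so downstream layers process a
# shorter sequence; self-attention cost drops roughly quadratically with
# the compression rate.
import numpy as np

def compress(frames, rate):
    """frames: (time, dim) array; rate: number of frames merged into one."""
    time, dim = frames.shape
    usable = (time // rate) * rate                   # drop the ragged tail for simplicity
    return frames[:usable].reshape(-1, rate, dim).mean(axis=1)

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 256))              # e.g. 10 s of 100 Hz speech features
print(compress(features, 2).shape)                   # (500, 256)
print(compress(features, 4).shape)                   # (250, 256)
```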
[2022-11-07] Today's NLP - A speech corpus for chronic kidney disease. In this study, we present a speech corpus of patients with chronic kidney disease (CKD) that will be used for research on pathological voice analysis, automatic illness identification, and severity prediction. This paper introduces the steps involved in creating this corpus, including the choice of speech-related parameters and speech lists, as well as t.. 2022. 11. 7.