[2022-08-10] Today's NLP
The Analysis of Synonymy and Antonymy in Discourse Relations: An Interpretable Modeling Approach
The idea that discourse relations are construed through explicit content and shared, or implicit, knowledge between producer and interpreter is ubiquitous in discourse research and linguistics. However, the actual contribution of the lexical semantics of arguments is unclear. We propose a computation..

[2022-08-07] Today's NLP
Recognizing and Extracting Cybersecurity-relevant Entities from Text
Cyber Threat Intelligence (CTI) is information describing threat vectors, vulnerabilities, and attacks, and is often used as training data for AI-based cyber defense systems such as Cybersecurity Knowledge Graphs (CKG). There is a strong need to develop community-accessible datasets to train existing AI-based cybersecurity pipe..

[2022-08-07] Today's NLP
Prompt Tuning for Generative Multimodal Pretrained Models
Prompt tuning has become a new paradigm for model tuning, and it has demonstrated success in natural language pretraining and even vision pretraining. In this work, we explore the transfer of prompt tuning to multimodal pretraining, with a focus on generative multimodal pretrained models rather than contrastive ones. Specifically, we imple..

[2022-08-07] Today's NLP
A Representation Modeling Based Language GAN with Completely Random Initialization
Text generative models trained via Maximum Likelihood Estimation (MLE) suffer from the notorious exposure bias problem, and Generative Adversarial Networks (GANs) have been shown to have the potential to tackle it. Existing language GANs adopt estimators like REINFORCE or continuous relaxations to model word distributions. ..