A Representation Modeling Based Language GAN with Completely Random Initialization
Text generative models trained via Maximum Likelihood Estimation (MLE) suffer from the notorious exposure bias problem, and Generative Adversarial Networks (GANs) have been shown to have the potential to tackle it. Existing language GANs adopt estimators like REINFORCE or continuous relaxations to model word distributions. The inherent limitations of such estimators lead current models to rely on pre-training techniques (MLE pre-training or pre-trained embeddings). Representation modeling methods, which are free from those limitations, are seldom explored, however, because of their poor performance in previous attempts. Our analyses reveal that an invalid sampling method and unhealthy gradients are the main contributors to this unsatisfactory performance. In this work, we present two techniques to tackle these problems: dropout sampling and fully normalized LSTM. Based on these two techniques, we propose InitialGAN, all of whose parameters are randomly initialized. In addition, we introduce a new evaluation metric, Least Coverage Rate, to better evaluate the quality of generated samples. The experimental results demonstrate that InitialGAN outperforms both MLE and the other compared models. To the best of our knowledge, this is the first time a language GAN has outperformed MLE without any pre-training techniques.
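The abstract does not spell out the formulation of the fully normalized LSTM. As a rough sketch, assuming it resembles a layer-normalized LSTM cell in which the gate pre-activations and the cell state are all normalized (the module name `NormalizedLSTMCell` and the hyperparameters below are illustrative, not taken from the paper):

```python
# Hypothetical sketch of a "fully normalized" LSTM cell: layer normalization is
# applied to the input-to-hidden and hidden-to-hidden pre-activations and to the
# cell state before the output gate. The exact formulation in InitialGAN may differ.
import torch
import torch.nn as nn


class NormalizedLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.ih = nn.Linear(input_size, 4 * hidden_size, bias=False)   # input projection
        self.hh = nn.Linear(hidden_size, 4 * hidden_size, bias=False)  # recurrent projection
        self.ln_ih = nn.LayerNorm(4 * hidden_size)   # normalize input pre-activations
        self.ln_hh = nn.LayerNorm(4 * hidden_size)   # normalize recurrent pre-activations
        self.ln_cell = nn.LayerNorm(hidden_size)     # normalize the cell state

    def forward(self, x, state):
        h, c = state
        gates = self.ln_ih(self.ih(x)) + self.ln_hh(self.hh(h))
        i, f, g, o = gates.chunk(4, dim=-1)
        c_next = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_next = torch.sigmoid(o) * torch.tanh(self.ln_cell(c_next))
        return h_next, c_next


# One step over a batch of 8 token embeddings (dim 32) with hidden size 64.
cell = NormalizedLSTMCell(32, 64)
x = torch.randn(8, 32)
h, c = torch.zeros(8, 64), torch.zeros(8, 64)
h, c = cell(x, (h, c))
```

Normalizing every pre-activation keeps gradient magnitudes stable across time steps, which is the kind of "unhealthy gradients" issue the abstract attributes to training a generator from a completely random initialization.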
Recognizing and Extracting Cybersecurity-relevant Entities from Text
Cyber Threat Intelligence (CTI) is information describing threat vectors, vulnerabilities, and attacks, and it is often used as training data for AI-based cyber defense systems such as Cybersecurity Knowledge Graphs (CKG). There is a strong need to develop community-accessible datasets to train existing AI-based cybersecurity pipelines to efficiently and accurately extract meaningful insights from CTI. We have created an initial unstructured CTI corpus from a variety of open sources, which we are using to train and test cybersecurity entity models with the spaCy framework, and we are exploring self-learning methods to automatically recognize cybersecurity entities. We also describe methods for applying cybersecurity domain entity linking with existing world knowledge from Wikidata. Our future work will survey and test spaCy NLP tools and create methods for continuous integration of new information extracted from text.
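As an illustration of the kind of pipeline the abstract describes, here is a minimal spaCy 3 NER training sketch; the entity labels (`THREAT_ACTOR`, `VULNERABILITY`, `MALWARE`) and the single annotated sentence are made up for illustration and are not drawn from the authors' CTI corpus:

```python
# Minimal sketch of training a blank spaCy NER model on CTI-style text.
# Labels, the annotated sentence, and the epoch count are illustrative only.
import random
import spacy
from spacy.training import Example

TRAIN_DATA = [
    (
        "APT29 exploited CVE-2021-44228 to drop Cobalt Strike.",
        {"entities": [(0, 5, "THREAT_ACTOR"),
                      (16, 30, "VULNERABILITY"),
                      (39, 52, "MALWARE")]},
    ),
]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for _, ann in TRAIN_DATA:
    for _, _, label in ann["entities"]:
        ner.add_label(label)

# Build Example objects from character-offset annotations and initialize the model.
examples = [Example.from_dict(nlp.make_doc(text), ann) for text, ann in TRAIN_DATA]
optimizer = nlp.initialize(lambda: examples)

for epoch in range(20):
    random.shuffle(examples)
    losses = {}
    nlp.update(examples, sgd=optimizer, losses=losses)

doc = nlp("The actor weaponized CVE-2021-44228 against exposed servers.")
print([(ent.text, ent.label_) for ent in doc.ents])
```

Entity linking against Wikidata could then be layered on top of the recognized spans, but the abstract describes that step only at a high level, so it is not sketched here.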
"Yeah, it does have a...Windows `98 Vibe'': Usability Study of Security Features in Programmable Logic Controllers
Programmable Logic Controllers (PLCs) drive industrial processes critical to society, e.g., water treatment and distribution, electricity and fuel networks. Search engines (e.g., Shodan) have highlighted that PLCs are often left exposed to the Internet, one of the main reasons being misconfiguration of security settings. This raises the question of why these misconfigurations occur and, specifically, whether the usability of security controls plays a part. To date, the usability of configuring PLC security mechanisms has not been studied. We present the first such investigation, through a task-based study and subsequent semi-structured interviews (N=19). We explore the usability of PLC connection configurations and two key security mechanisms (i.e., access levels and user administration). We find that the use of unfamiliar labels, layouts and misleading terminology exacerbates an already complex process of configuring security mechanisms. Our results uncover various (mis)perceptions about the security controls and show how design constraints, e.g., safety and the lack of regular updates (due to the long-term nature of such systems), pose significant challenges to the realization of modern HCI and usability principles. Based on these findings, we provide design recommendations to bring usable security in industrial settings on par with its IT counterpart.