
Papers (572)

[2023-07-19] Today's NLP Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models Researchers have invested considerable effort into ensuring that large language models (LLMs) align with human values, using various training techniques, such as instruction tuning and Reinforcement Learning from Human or AI Feedback (RLHF/RLAIF), to guard against unsafe text. However, these.. 2023. 7. 19.
[2023-07-18] Today's NLP Phoneme-retrieval; voice recognition; vowels recognition A phoneme-retrieval technique is proposed, which stems from the particular way the network is constructed. An initial set of neurons is given; the number of these neurons is approximately equal to the number of typical structures in the data. For example, if the network is built for voice retrieval, then the number of neurons must be .. 2023. 7. 18.
[2023-07-17] Today's NLP To share or not to share: What risks would laypeople accept to give sensitive data to differentially-private NLP systems? Although the NLP community has adopted central differential privacy as a go-to framework for privacy-preserving model training or data sharing, the choice and interpretation of the key parameter, the privacy budget $\varepsilon$ that governs the strength of privacy protection, re.. 2023. 7. 17.
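As a quick refresher (not part of the excerpt above), the privacy budget $\varepsilon$ referenced here comes from the standard definition of differential privacy: a randomized mechanism $M$ is $\varepsilon$-differentially private if, for all datasets $D$ and $D'$ differing in a single record and all output sets $S$,

$$\Pr[M(D) \in S] \le e^{\varepsilon}\,\Pr[M(D') \in S].$$

A smaller $\varepsilon$ makes the two output distributions harder to distinguish, i.e. gives stronger privacy protection.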
[2023-07-16] Today's NLP Personalization for BERT-based Discriminative Speech Recognition Rescoring Recognition of personalized content remains a challenge in end-to-end speech recognition. We explore three novel approaches that use personalized content in a neural rescoring step to improve recognition: gazetteers, prompting, and a cross-attention based encoder-decoder model. We use internal de-identified en-US data fro.. 2023. 7. 16.
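To make the "neural rescoring step" concrete, below is a minimal sketch of generic BERT-based N-best rescoring using masked-language-model pseudo-log-likelihoods. It is not the paper's method (which personalizes rescoring via gazetteers, prompting, or a cross-attention encoder-decoder); the model name, interpolation weight, and example hypotheses are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): rescore first-pass ASR
# hypotheses by combining their first-pass scores with a BERT
# pseudo-log-likelihood. Model name, alpha, and the example N-best list
# are assumptions for illustration only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(text: str) -> float:
    """Sum of log-probs of each token when it is masked one at a time."""
    ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    total = 0.0
    with torch.no_grad():
        for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            logits = model(masked.unsqueeze(0)).logits[0, i]
            log_probs = torch.log_softmax(logits, dim=-1)
            total += log_probs[ids[i]].item()
    return total

def rescore(nbest: list[tuple[str, float]], alpha: float = 0.3) -> str:
    """Pick the hypothesis maximizing first-pass score + alpha * BERT score."""
    return max(nbest, key=lambda h: h[1] + alpha * pseudo_log_likelihood(h[0]))[0]

# Hypothetical N-best list: (hypothesis text, first-pass log score).
print(rescore([("call doctor smith", -12.4), ("call doctor smyth", -12.1)]))
```

The interpolation weight alpha would normally be tuned on held-out data; the paper's contribution is injecting personalized content (e.g. contact names) into this rescoring pass, which the sketch above does not do.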