본문 바로가기
반응형

분류 전체보기599

[2022-11-21] 오늘의 자연어처리 Probing for Incremental Parse States in Autoregressive Language Models Next-word predictions from autoregressive neural language models show remarkable sensitivity to syntax. This work evaluates the extent to which this behavior arises as a result of a learned ability to maintain implicit representations of incremental syntactic structures. We extend work in syntactic probing to the incremental .. 2022. 11. 21.
[2022-11-20] 오늘의 자연어처리 Ignore Previous Prompt: Attack Techniques For Language Models Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative .. 2022. 11. 20.
[2022-11-20] 오늘의 자연어처리 Zero-Shot Dynamic Quantization for Transformer Inference We introduce a novel run-time method for significantly reducing the accuracy loss associated with quantizing BERT-like models to 8-bit integers. Existing methods for quantizing models either modify the training procedure,or they require an additional calibration step to adjust parameters that also requires a selected held-out dataset. Our .. 2022. 11. 20.
[2022-11-19] 오늘의 자연어처리 Style Classification of Rabbinic Literature for Detection of Lost Midrash Tanhuma Material Midrash collections are complex rabbinic works that consist of text in multiple languages, which evolved through long processes of unstable oral and written transmission. Determining the origin of a given passage in such a compilation is not always straightforward and is often a matter of dispute among sch.. 2022. 11. 19.
반응형