자연어처리

[2022-09-27] Today's NLP

Approaching English-Polish Machine Translation Quality Assessment with Neural-based Methods

This paper presents our contribution to the PolEval 2021 Task 2: Evaluation of translation quality assessment metrics. We describe experiments with pre-trained language models and state-of-the-art frameworks for translation quality assessment in both non-blind and blind versions of the task. Our solutions ..

[2022-09-26] Today's NLP

Learning to Write with Coherence From Negative Examples

Coherence is one of the critical factors that determine the quality of writing. We propose a writing relevance (WR) training method for neural encoder-decoder natural language generation (NLG) models that improves the coherence of the continuation by leveraging negative examples. The WR loss regresses the vector representation of the context and gen..

[2022-09-25] Today's NLP

Scope of Pre-trained Language Models for Detecting Conflicting Health Information

An increasing number of people now rely on online platforms to meet their health information needs. Identifying inconsistent or conflicting textual health information has therefore become a safety-critical task. Health advice data poses a unique challenge, where information that is accurate in the context of one diagnosi..

[2022-09-25] Today's NLP

Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks

Automatically predicting the outcome of subjective listening tests is a challenging task. Ratings may vary from person to person even if preferences are consistent across listeners. While previous work has focused on predicting listeners' ratings (mean opinion scores) of ..
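The coherence-from-negative-examples idea in the WR abstract resembles a contrastive margin objective: pull the representation of a coherent continuation toward the context and push an incoherent one away. A minimal NumPy sketch under that assumption (the cosine-similarity hinge form, the margin value, and all vectors here are illustrative, not the paper's actual WR loss):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def coherence_margin_loss(context, positive, negative, margin=0.5):
    """Hinge loss that is zero only when the coherent continuation is
    closer to the context than the incoherent one by at least `margin`."""
    return max(0.0, margin - cosine(context, positive) + cosine(context, negative))

# Toy embeddings: the positive continuation points roughly along the
# context direction, the negative one is orthogonal to it.
ctx = np.array([1.0, 0.0, 0.0])
pos = np.array([0.9, 0.1, 0.0])
neg = np.array([0.0, 0.0, 1.0])

print(coherence_margin_loss(ctx, pos, neg))  # satisfied pair -> 0.0
print(coherence_margin_loss(ctx, neg, pos))  # swapped pair -> positive loss
```

Swapping the positive and negative continuations makes the loss strictly positive, which is exactly the gradient signal a negative-example method exploits during training.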
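The "anti-symmetric twin" architecture named in the last abstract can be illustrated with a small sketch: two weight-shared branches score each stimulus, and the pairwise preference is the difference of the two scores, which makes the model anti-symmetric by construction (score(a, b) = -score(b, a)). The linear encoder and feature dimension below are hypothetical stand-ins, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared branch: a single linear projection from an
# audio feature vector to a scalar quality score. Both "twins" use W.
W = rng.normal(size=16)

def branch(x):
    """Shared twin branch: embed one stimulus to a scalar score."""
    return float(W @ x)

def preference(a, b):
    """Pairwise preference score, positive when `a` is preferred.
    Built as branch(a) - branch(b), so it is anti-symmetric."""
    return branch(a) - branch(b)

a = rng.normal(size=16)
b = rng.normal(size=16)
print(preference(a, b), preference(b, a))  # equal magnitude, opposite sign
```

Because anti-symmetry is baked into the architecture rather than learned, the model cannot give contradictory answers for (a, b) and (b, a), which is the consistency property one wants when aggregating noisy listener ratings.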