
Category: All posts (599)

[2022-09-22] Today's NLP. Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering. When answering a question, humans utilize the information available across different modalities to synthesize a consistent and complete chain of thought (CoT). This process is normally a black box in the case of deep learning models like large-scale language models. Recently, science question benchmarks have…

[2022-09-21] Today's NLP. ALEXSIS-PT: A New Resource for Portuguese Lexical Simplification. Lexical simplification (LS) is the task of automatically replacing complex words with easier ones, making texts more accessible to various target populations (e.g. individuals with low literacy, individuals with learning disabilities, second language learners). To train and test models, LS systems usually require corpora that feature…

[2022-09-20] Today's NLP. ConFiguRe: Exploring Discourse-level Chinese Figures of Speech. Figures of speech, such as metaphor and irony, are ubiquitous in literary works and colloquial conversations. This poses a great challenge for natural language understanding, since figures of speech usually deviate from their ostensible meanings to express deeper semantic implications. Previous research lays emphasis on the literary…

[2022-09-19] Today's NLP. Examining Large Pre-Trained Language Models for Machine Translation: What You Don't Know About It. Pre-trained language models (PLMs) often take advantage of the monolingual and multilingual datasets that are freely available online to acquire general or mixed-domain knowledge before deployment into specific tasks. Extra-large PLMs (xLPLMs) have been proposed very recently, claiming supreme performance…