
Natural Language Processing (572)

[2023-09-13] Today's NLP: MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning. Abstract: We introduce MAmmoTH, a series of open-source large language models (LLMs) specifically tailored for general math problem-solving. The MAmmoTH models are trained on MathInstruct, our meticulously curated instruction tuning dataset. MathInstruct is compiled from 13 math datasets with intermediate rationales, six o..
[2023-09-12] Today's NLP: ConDA: Contrastive Domain Adaptation for AI-generated Text Detection. Large language models (LLMs) are increasingly being used for generating text in a variety of use cases, including journalistic news articles. Given the potential for these LLMs to be used maliciously to generate disinformation at scale, it is important to build effective detectors for such AI-generated text. Given th..
[2023-09-11] Today's NLP: ImageBind-LLM: Multi-modality Instruction Tuning. We present ImageBind-LLM, a multi-modality instruction tuning method for large language models (LLMs) via ImageBind. Existing works mainly focus on language and image instruction tuning; in contrast, our ImageBind-LLM can respond to multi-modality conditions, including audio, 3D point clouds, video, and their embedding-space arithmetic by ..
[2023-09-10] Today's NLP: Word segmentation granularity in Korean. Abstract: This paper describes word segmentation granularity in Korean language processing. From a word separated by blank space, which is termed an eojeol, to a sequence of morphemes, there are multiple possible levels of word segmentation granularity in Korean. For specific language processing and corpus annotation tasks, several different gra..