[2022-10-03] Today's NLP
Depth-Wise Attention (DWAtt): A Layer Fusion Method for Data-Efficient Classification
Language models pretrained on large textual data have been shown to encode different types of knowledge simultaneously. Traditionally, only the features from the last layer are used when adapting to new tasks or data. We put forward that, when using or finetuning deep pretrained models, intermediate layer featu.. 2022. 10. 3.

[2022-10-03] Today's NLP
An Equal-Size Hard EM Algorithm for Diverse Dialogue Generation
Open-domain dialogue systems aim to interact with humans through natural language texts in an open-ended fashion. However, the widely successful neural networks may not work well for dialogue systems, as they tend to generate generic responses. In this work, we propose an Equal-size Hard Expectation-Maximization (EqHard-EM) algorit.. 2022. 10. 3.

[2022-10-02] Today's NLP
Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing
A common recent approach to semantic parsing augments sequence-to-sequence models by retrieving and appending a set of training samples, called exemplars. The effectiveness of this recipe is limited by the ability to retrieve informative exemplars that help produce the correct parse, which is especially challen.. 2022. 10. 2.

[2022-10-01] Today's NLP
Who is GPT-3? An Exploration of Personality, Values and Demographics
Language models such as GPT-3 have caused a furore in the research community. Some studies found that GPT-3 has some creative abilities and makes mistakes that are on par with human behaviour. This paper answers a related question: who is GPT-3? We administered two validated measurement tools to GPT-3 to assess its personality,.. 2022. 10. 1.
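The DWAtt preview contrasts using only last-layer features with fusing intermediate-layer features when adapting a pretrained model. A minimal sketch of that idea on toy NumPy tensors — the softmax-weighted attention over depth shown here is an illustrative assumption, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, seq_len, hidden = 12, 8, 16

# Toy per-layer hidden states from a pretrained encoder: (layers, tokens, hidden).
layer_states = rng.normal(size=(num_layers, seq_len, hidden))

# Baseline: adapt to the new task using only the last layer's features.
last_layer = layer_states[-1]

# Depth-wise fusion sketch: a query vector (stand-in for a learned parameter)
# scores every layer at each token, and features are averaged across depth
# with softmax weights, so earlier layers can contribute to the task head.
query = rng.normal(size=(hidden,))
scores = layer_states @ query                              # (layers, tokens)
weights = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
fused = (weights[:, :, None] * layer_states).sum(axis=0)   # (tokens, hidden)

assert fused.shape == last_layer.shape
```

The fused output has the same shape as the last-layer features, so it can drop into the same classification head.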
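The Generate-and-Retrieve preview describes the recipe it builds on: retrieve similar training samples (exemplars) and append them to the seq2seq input. A minimal sketch of that retrieval step, using bag-of-words cosine similarity as an assumed stand-in for a real retriever (the example utterances and parses are hypothetical):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_exemplars(query: str, train_pairs, k: int = 2):
    # Score each (utterance, parse) training pair against the query
    # and return the top-k most similar pairs as exemplars.
    q = Counter(query.lower().split())
    return sorted(
        train_pairs,
        key=lambda p: cosine(q, Counter(p[0].lower().split())),
        reverse=True,
    )[:k]

train = [
    ("list flights from boston to denver", "SELECT * FROM flights WHERE src='BOS' AND dst='DEN'"),
    ("what is the cheapest fare", "SELECT MIN(fare) FROM fares"),
    ("show flights to denver on monday", "SELECT * FROM flights WHERE dst='DEN' AND day='MON'"),
]
query = "flights from boston to denver tomorrow"
exemplars = retrieve_exemplars(query, train, k=2)

# Append the retrieved exemplars to the model input, as the recipe describes.
model_input = query + " | " + " | ".join(f"{u} -> {p}" for u, p in exemplars)
```

The preview's point is that this recipe is only as good as the retriever, which motivates using the model's own predictions to improve retrieval.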