Papers

  • Selective Annotation Makes Language Models Better Few-Shot Learners

    Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu. ICLR, 2023.
    Many recent approaches to natural language tasks are built on the remarkable abilities of large language models. Large language models can perform in-context learning, where they learn a new task from a few task demonstrations, without any parameter updates…
  • AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models

    Alexandra Chronopoulou, Matthew E. Peters, Alexander M. Fraser, Jesse Dodge. Findings of EACL, 2023.
    Pretrained language models (PLMs) are trained on massive corpora, but often need to specialize to specific domains. A parameter-efficient adaptation method suggests training an adapter for each domain on the task of language modeling. This leads to good in…
  • Specializing Smaller Language Models towards Multi-Step Reasoning

    Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, Tushar Khot. ICML, 2023.
    The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompts is believed to emerge only in very large-scale models (100+ billion parameters). We show that such abilities can, in fact…
  • Do Embodied Agents Dream of Pixelated Sheep?: Embodied Decision Making using Language Guided World Modelling

    Kolby Nottingham, Prithviraj Ammanabrolu, Alane Suhr, Yejin Choi, Hanna Hajishirzi, Sameer Singh, Roy Fox. arXiv, 2023.
    Reinforcement learning (RL) agents typically learn tabula rasa, without prior knowledge of the world, which makes learning complex tasks with sparse rewards difficult. If initialized with knowledge of high-level subgoals and transitions between subgoals, RL…
  • Does progress on ImageNet transfer to real-world datasets?

    Alexander W. Fang, Simon Kornblith, Ludwig Schmidt. arXiv, 2023.
    Does progress on ImageNet transfer to real-world datasets? We investigate this question by evaluating ImageNet pre-trained models with varying accuracy (57% - 83%) on six practical image classification datasets. In particular, we study datasets collected with…
  • Reproducible scaling laws for contrastive language-image learning

    Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell Wortsman, Gabriel Ilharco, Cade Gordon, Christoph Schuhmann, Ludwig Schmidt, J. Jitsev. arXiv, 2022.
    Scaling up neural networks has led to remarkable performance across a wide range of tasks. Moreover, performance often follows reliable scaling laws as a function of training set size, model size, and compute, which offers valuable guidance as large-scale…
  • Continued Pretraining for Better Zero- and Few-Shot Promptability

    Zhaofeng Wu, Robert L. Logan IV, Pete Walsh, Akshita Bhagia, Dirk Groeneveld, Sameer Singh, Iz Beltagy. EMNLP, 2022.
    Recently introduced language model prompting methods can achieve high accuracy in zero- and few-shot settings while requiring few to no learned task-specific parameters. Nevertheless, these methods still often trail behind full model finetuning. In this work…
  • Exploring The Landscape of Distributional Robustness for Question Answering Models

    Anas Awadalla, Mitchell Wortsman, Gabriel Ilharco, Sewon Min, Ian H. Magnusson, Hannaneh Hajishirzi, Ludwig Schmidt. Findings of EMNLP, 2022.
    We conduct a large empirical evaluation to investigate the landscape of distributional robustness in question answering. Our investigation spans over 350 models and 16 question answering datasets, including a diverse set of architectures, model sizes, and…
  • Hyperdecoders: Instance-specific decoders for multi-task NLP

    Hamish Ivison, Matthew E. Peters. Findings of EMNLP, 2022.
    We investigate input-conditioned hypernetworks for multi-tasking in NLP, generating parameter-efficient adaptations for a decoder using a hypernetwork conditioned on the output of an encoder. This approach produces a unique decoder for every input instance…
  • CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation

    Abhilasha Ravichander, Matt Gardner, Ana Marasović. EMNLP, 2022.
    The full power of human language-based communication cannot be realized without negation. All human languages have some form of negation. Despite this, negation remains a challenging phenomenon for current natural language understanding systems. To facilitate…