Papers
Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback
Yao Fu, Hao-Chun Peng, Tushar Khot, Mirella Lapata • arXiv • 2023
We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing. We are interested in this question because if LLMs were able to improve each other, it would imply the…
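The play-reflect-criticize loop the abstract describes can be sketched in a few lines. This is a minimal, illustrative version only; `call_llm` is a hypothetical stand-in for any chat-completion client, and the buyer/seller framing is assumed from the negotiation-game setup.

```python
# A minimal sketch of the play-reflect-criticize loop, assuming a generic
# chat-completion client behind `call_llm` (a hypothetical stand-in).

def call_llm(prompt: str) -> str:
    # Replace with a real LLM API call; this stub just echoes.
    return f"[model reply to: {prompt[:40]}...]"

def negotiate(rounds: int = 3) -> str:
    buyer_notes = "Start with a low offer."
    for _ in range(rounds):
        # Play: the two agents hold one negotiation session.
        transcript = call_llm(
            "Role-play one buyer-seller price negotiation.\n"
            f"Buyer strategy notes: {buyer_notes}"
        )
        # Criticize: a third model gives natural-language feedback.
        feedback = call_llm(f"Critique the buyer's tactics:\n{transcript}")
        # Reflect: the buyer folds the feedback into its strategy in-context.
        buyer_notes = call_llm(f"Revise your strategy using:\n{feedback}")
    return buyer_notes
```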
Complexity-Based Prompting for Multi-Step Reasoning
Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, Tushar Khot • ICLR • 2023
We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thoughts (CoT), sequences of short sentences describing intermediate reasoning steps towards a final answer…
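The abstract is truncated before the method, but the title points to a complexity heuristic over chain-of-thought exemplars. A minimal sketch of one such heuristic, counting reasoning steps as non-empty lines (an illustrative assumption, not the paper's exact definition):

```python
# Prefer CoT exemplars with more reasoning steps when building the prompt.

def complexity(chain: str) -> int:
    """Count reasoning steps as non-empty lines of the chain of thought."""
    return sum(1 for line in chain.splitlines() if line.strip())

def build_prompt(pool: list[tuple[str, str]], question: str, k: int = 3) -> str:
    """Pick the k most complex (question, chain) exemplars for the prompt."""
    shots = sorted(pool, key=lambda qc: complexity(qc[1]), reverse=True)[:k]
    demos = "\n\n".join(f"Q: {q}\nA: {c}" for q, c in shots)
    return f"{demos}\n\nQ: {question}\nA:"
```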
Decomposed Prompting: A Modular Approach for Solving Complex Tasks
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, Ashish Sabharwal • ICLR • 2023
Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to solve various tasks. However, this approach struggles as the task complexity increases or when the individual reasoning steps of the task themselves are hard to learn…
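The modular pattern named in the title, a decomposer routing sub-tasks to dedicated handlers, can be sketched as below. The "handler: input" routing format and the handler names are assumptions for illustration, not the paper's exact interface.

```python
# An illustrative decomposer-plus-handlers pattern for complex tasks.

def call_llm(prompt: str) -> str:
    return "split: birds of prey"  # stand-in for a real LLM client

HANDLERS = {
    "split": lambda text: " ".join(text.split()),       # symbolic sub-task
    "answer": lambda text: call_llm(f"Q: {text}\nA:"),  # LLM sub-task
}

def solve(question: str) -> str:
    # The decomposer plans the sub-tasks; each line names its handler.
    plan = call_llm(f"Decompose into 'handler: input' lines:\n{question}")
    result = question
    for step in plan.splitlines():
        name, _, arg = step.partition(":")
        if name.strip() in HANDLERS:
            result = str(HANDLERS[name.strip()](arg.strip()))
    return result
```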
Transformers Can Be Expressed In First-Order Logic with Majority
William Merrill, Ashish Sabharwal • arXiv • 2023
Characterizing the implicit structure of the computation within neural networks is a foundational problem in the area of deep learning interpretability. Can the inner decision process of neural networks be captured symbolically in some familiar logic? We show…
Do language models have coherent mental models of everyday things?
Yuling Gu, Bhavana Dalvi Mishra, Peter Clark • arXiv • 2022
When people think of everyday things like an “egg,” they typically have a mental image associated with it. This commonsense knowledge helps us understand how these everyday things work and how to interact with them. For example, when someone tries to make a…
DISCO: Distilling Phrasal Counterfactuals with Large Language Models
Zeming Chen, Qiyue Gao, Kyle Richardson, Antoine Bosselut, Ashish Sabharwal • arXiv • 2022
Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such counterfactual data often has a limited scale and diversity if…
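The truncated abstract stops before the method, but the title suggests a generate-then-filter pipeline for LLM-distilled counterfactuals. A sketch under that assumption; both stubs, the prompt wording, and the filtering criterion are illustrative, not the paper's exact setup.

```python
# Generate a counterfactual edit with an LLM, then keep it only if a task
# model agrees the edit actually achieves the target label.

def call_llm(prompt: str) -> str:
    return "A woman is cooking dinner."  # stand-in for a real LLM client

def predict_label(sentence: str) -> str:
    return "contradiction"  # stand-in for a trained task model (the filter)

def make_counterfactual(sentence: str, target_label: str) -> str | None:
    candidate = call_llm(
        f"Minimally rewrite one phrase so the label becomes {target_label}:\n"
        f"{sentence}"
    )
    return candidate if predict_label(candidate) == target_label else None
```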
Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal • arXiv • 2022
Recent work has shown that large language models are capable of generating natural language reasoning steps or Chains-of-Thoughts (CoT) to answer a multi-step question when prompted to do so. This is insufficient, however, when the necessary knowledge is not…
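The interleaving idea in the title can be sketched directly: each new reasoning step becomes the next retrieval query, so later steps can draw on knowledge the first retrieval missed. Both stubs below are assumptions standing in for a real retriever and LLM.

```python
# Alternate between retrieving passages and generating reasoning steps.

def retrieve(query: str) -> list[str]:
    return [f"[passage retrieved for {query!r}]"]  # e.g. BM25 or dense

def call_llm(prompt: str) -> str:
    return "So the answer is: unknown."  # stand-in for a real LLM client

def answer(question: str, max_steps: int = 4) -> str:
    docs, chain = retrieve(question), []
    for _ in range(max_steps):
        step = call_llm(
            "Context:\n" + "\n".join(docs)
            + f"\nQ: {question}\nSteps so far: {' '.join(chain)}\n"
            + "Write the next reasoning step, or 'So the answer is: ...'."
        )
        chain.append(step)
        if "answer is" in step.lower():
            break
        docs += retrieve(step)  # interleave: retrieve with the new step
    return " ".join(chain)
```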
Lila: A Unified Benchmark for Mathematical Reasoning
Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, Ashwin Kalyan • EMNLP • 2022
Mathematical reasoning skills are essential for general-purpose intelligent systems to perform tasks from grocery shopping to climate modeling. Towards evaluating and improving AI systems in this domain, we propose LILA, a unified mathematical reasoning…
Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning
Oyvind Tafjord, Bhavana Dalvi Mishra, Peter Clark • EMNLP • 2022
Our goal is a question-answering (QA) system that can show how its answers are implied by its own internal beliefs via a systematic chain of reasoning. Such a capability would allow better understanding of why a model produced the answer it did. Our approach…
Teaching Broad Reasoning Skills via Decomposition-Guided Contexts
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, Ashish Sabharwal • EMNLP • 2022
Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion. Specifically, we use widely available QDMR representations to…