Aristo
Building the next generation of systems that can systematically reason, explain, and continually improve over time
- Systematic reasoning and explanation
- Teachable reasoning systems
- Continual learning with memory-based architectures
- Knowledge and belief
- Universal mathematical reasoning
Recent Updates
Towards Teachable Reasoning Systems
April 27, 2022
This paper describes our work towards Teachable Reasoning Systems. First, EntailmentWriter searches for a chain of reasoning from facts it believes…
Memory-assisted prompt editing to improve GPT-3 after deployment
April 20, 2022
Large LMs such as GPT-3 are powerful, but can commit mistakes that are obvious to humans. Memory-assisted prompt editing allows users to give…
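To make the mechanism concrete, here is a minimal sketch of the idea in Python. The call_gpt3 stub and the exact-match memory lookup are illustrative assumptions, not the paper's actual implementation (the real system retrieves relevant feedback rather than requiring identical questions).

    # Minimal sketch (illustrative only): a feedback memory consulted before querying the LM.
    # call_gpt3() and the exact-match lookup are placeholder assumptions, not the paper's code.

    memory = {}  # maps a question -> user-provided clarification

    def call_gpt3(prompt: str) -> str:
        raise NotImplementedError("stand-in for an actual GPT-3 API call")

    def answer(question: str) -> str:
        # Retrieve any earlier feedback relevant to this question and prepend it,
        # so the deployed model avoids repeating a past mistake without retraining.
        hint = memory.get(question, "")
        prompt = (f"Clarification: {hint}\n" if hint else "") + f"Q: {question}\nA:"
        return call_gpt3(prompt)

    def give_feedback(question: str, clarification: str) -> None:
        # A user corrects a mistake; the correction is stored for future queries.
        memory[question] = clarification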
DREAM: Improving Situational QA by First Elaborating the Situation
March 1, 2022
When people answer questions about a specific situation, e.g., "I cheated on my mid-term exam last week. Was that wrong?", cognitive science suggests…
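A minimal sketch of that elaborate-then-answer idea, with a placeholder query_model stub and prompts that are assumptions for illustration rather than DREAM's actual interface:

    def query_model(prompt: str) -> str:
        raise NotImplementedError("stand-in for a call to a language model")

    def answer_with_elaboration(situation: str, question: str) -> str:
        # Stage 1: elaborate the situation (e.g., likely motivations, consequences, social norms).
        scene = query_model(f"Describe the relevant context of this situation:\n{situation}")
        # Stage 2: answer the question conditioned on both the situation and the elaboration.
        return query_model(f"Situation: {situation}\nContext: {scene}\nQuestion: {question}\nAnswer:")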
Explaining Answers with Entailment Trees
November 1, 2021
EntailmentBank is a unique dataset of multi-step entailment trees. Each tree shows how known facts combine to entail the answer to a question. From…
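As a rough illustration of the structure, an entailment tree can be represented as premises that combine into intermediate conclusions and, ultimately, the answer. The sentences and schema below are invented for illustration and are not drawn from EntailmentBank.

    # Illustrative only: a toy entailment tree as nested dicts (not EntailmentBank's actual schema).
    tree = {
        "conclusion": "An iron nail will be attracted by the magnet.",
        "premises": [
            {"conclusion": "Iron is a magnetic metal.", "premises": []},      # leaf fact
            {"conclusion": "Magnets attract magnetic metals.", "premises": []},  # leaf fact
        ],
    }

    def print_tree(node, depth=0):
        # Walk the tree, showing how leaf facts combine to entail the final answer.
        print("  " * depth + node["conclusion"])
        for premise in node["premises"]:
            print_tree(premise, depth + 1)

    print_tree(tree)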
BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
November 1, 2021
Although pretrained language models (PTLMs) contain significant amounts of world knowledge, they can still produce inconsistent answers to questions…
Research Areas
Teachable Reasoning Systems
By interacting with and giving feedback on a system’s reasoning, a user can teach the system so it continually improves over time – without model retraining.
Neuro-Symbolic Reasoning and Explanation
Solving problems by generating consistent, faithful chains of reasoning using neural components.
Modular Models
By learning to chain together existing models, complex problems can be solved, beyond the capabilities of the individual components.
Universal Mathematical Reasoners
Creating models with built-in mathematical reasoning skills that can be rapidly fine-tuned for a wide variety of mathematical tasks.
Macaw is a high-performance question-answering (QA) model that outperforms other popular language models while being an order of magnitude smaller. This demo lets you explore Macaw's answers and compare them with those of the popular GPT-3 language model on a benchmark set of questions.
Try the demo

Like RuleTaker, ProofWriter determines whether statements are True or False based on rules given in natural language, but it also generates a proof for its answers.
Try the demo
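To give a rough sense of the task, the toy sketch below forward-chains over invented natural-language-style facts and a rule and records a simple proof for the derived fact. This is only an illustration of the input/output format, not ProofWriter's method, which generates proofs with a neural model.

    # Illustrative only: forward chaining over toy natural-language-style rules,
    # recording a simple proof for each derived fact.
    facts = {"Erin is big.", "Erin is round."}
    rules = [({"Erin is big.", "Erin is round."}, "Erin is heavy.")]

    proofs = {f: "given" for f in facts}
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                proofs[head] = "from " + " and ".join(sorted(body))
                changed = True

    print(proofs["Erin is heavy."])   # -> from Erin is big. and Erin is round.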
Recent Papers
Self-Refine: Iterative Refinement with Self-Feedback
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, Peter Clark • NeurIPS • 2023
Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce Self-Refine, an approach for improving initial outputs from LLMs through iterative feedback…
A Logic for Expressing Log-Precision Transformers
William Merrill, Ashish Sabharwal • NeurIPS • 2023
One way to interpret the reasoning power of transformer-based language models is to describe the types of logical rules they can resolve over some input text. Recently, Chiang et al. (2023) showed that finite-precision transformers can be equivalently…
How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources
Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, Hanna Hajishirzi • NeurIPS • 2023
In this work we explore recent advances in instruction-tuning language models on a range of open instruction-following datasets. Despite recent claims that open models can be on par with state-of-the-art proprietary models, these claims are often accompanied…
Editing Common Sense in Transformers
Anshita Gupta*, Debanjan Mondal*, Akshay Krishna Sheshadri*, Wenlong Zhao, Xiang Lorraine Li*, Sarah Wiegreffe*, Niket Tandon* • EMNLP • 2023
Editing model parameters directly in Transformers makes updating open-source transformer-based models possible without re-training. However, these editing methods have only been evaluated on statements about encyclopedic knowledge with a single correct answer…
Increasing Probability Mass on Answer Choices Does Not Always Improve Accuracy
Sarah Wiegreffe, Matthew Finlayson, Oyvind Tafjord, Peter Clark, Ashish Sabharwal • EMNLP • 2023
When pretrained language models (LMs) are applied to discriminative tasks such as multiple-choice questions, they place probability mass on vocabulary tokens that aren't among the given answer choices. Spreading probability mass across multiple surface forms…
Recent Datasets
IfQA Counterfactual Reasoning Benchmark
3,800 open-domain questions designed to assess counterfactual reasoning abilities of NLP models
Counterfactual reasoning benchmark introduced in the EMNLP-2023 paper titled "IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions".
Digital Socrates
DS Critique Bank contains annotated critiques of answers and explanations from "student" models.
DS Critique Bank (DSCB) is a dataset of multiple-choice questions with associated answers and explanations provided by "student models", along with "critiques" of the explanations provided by "critique models". Many of the instances have human annotations.
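For a rough sense of what an instance looks like, here is a hypothetical record; the field names and values are assumptions for illustration and may differ from the released schema.

    # Hypothetical example record; the actual DS Critique Bank fields may differ.
    example = {
        "question": "Which material is the best conductor of electricity?",
        "choices": ["copper", "wood", "glass", "rubber"],
        "student_answer": "copper",
        "student_explanation": "Metals conduct electricity, and copper is a metal.",
        "critique": "The explanation is correct but does not say why copper is better than other metals.",
        "human_annotation": "critique is reasonable",
    }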
ParRoT (Parts and Relations of Things)
11,720 “X relation Y?” True/False questions on parts of everyday things and relational information about these parts
This is the dataset from the ACL 2023 paper "Do language models have coherent mental models of everyday things?".
Belief and Reasoning Dataset
BaRDa: A Belief and Reasoning Dataset that Separates Factual Accuracy and Reasoning Ability
BaRDa is a new belief and reasoning dataset for evaluating the factual correctness ("truth") and reasoning accuracy ("rationality", or "honesty") of new language models. It was created in collaboration with, and with the support of, the Open Philanthropy organization.
Recent Press
Persona-driven ChatGPT yields toxic, racist output
April 19, 2023
Changing ChatGPT's Persona Might Make It Malicious
April 17, 2023
This AI Paper Shows How ChatGPT’s Toxicity Can Increase Up To Six-Fold When Assigned A Persona
April 14, 2023
'They’re All So Dirty and Smelly:' Study Unlocks ChatGPT's Inner Racist
April 13, 2023
New study reveals ChatGPT's inherent toxicity when assigned different personas
April 13, 2023
ChatGPT can turn toxic just by changing its assigned persona, researchers say
April 12, 2023
Researchers discover a way to make ChatGPT consistently toxic
April 12, 2023
Researchers From Allen Institute for AI Introduce TeachMe: A Framework To Understand And Correct AI Models
January 17, 2023
Team
Chris Callison-Burch, Research
Peter Clark, Research
Ben Bogin, Young Investigator
Bhavana Dalvi, Research
Yuling Gu, Predoctoral Young Investigator
Shashank Gupta, Research
Ashwin Kalyan, Research
Tushar Khot, Research
Bodhisattwa Prasad Majumder, Research
Kyle Richardson, Research
Ashish Sabharwal, Research
Oyvind Tafjord, Research
Niket Tandon, Research
Sarah Wiegreffe, Young Investigator