Aristo

Building the next generation of systems that can systematically reason, explain, and continually improve over time


Figure: a diagram of an entailment tree built from a hypothesis and supporting text.
Our research includes pioneering work on:
  • Systematic reasoning and explanation
  • Teachable reasoning systems
  • Continual learning with memory-based architectures
  • Knowledge and belief
  • Universal mathematical reasoning

Research Areas

Teachable Reasoning Systems

By interacting with and giving feedback on a system’s reasoning, a user can teach the system so it continually improves over time – without model retraining.
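
A minimal sketch of this idea, assuming a hypothetical text-in/text-out llm callable (an illustration of memory-based teaching in general, not the actual TeachMe implementation): user corrections are stored in a memory and retrieved as extra context for similar future questions, so answers improve without any weight updates.

    from typing import Callable

    class TeachableQA:
        """Sketch of memory-based teaching: corrections are stored and
        retrieved as context, so behavior improves with no retraining."""

        def __init__(self, llm: Callable[[str], str]):
            self.llm = llm                            # any text-in/text-out model
            self.memory: list[tuple[str, str]] = []   # (question, user feedback)

        def answer(self, question: str) -> str:
            # Retrieve feedback left on similar past questions; naive word
            # overlap here, where a real system would use a trained retriever.
            words = set(question.lower().split())
            relevant = [fb for q, fb in self.memory
                        if len(words & set(q.lower().split())) >= 3]
            prompt = ("Feedback to respect:\n" + "\n".join(relevant) +
                      "\n\nQuestion: " + question + "\nAnswer:")
            return self.llm(prompt)

        def teach(self, question: str, feedback: str) -> None:
            # Storing the correction is the only learning step: no retraining.
            self.memory.append((question, feedback))

The retrieval step carries the weight here: the better new questions are matched to old feedback, the faster the system appears to learn.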

Modular Models

By learning to chain together existing models, a system can solve complex problems that lie beyond the capabilities of the individual components.
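
One way to picture this, with decompose, solve, and combine as hypothetical stand-ins for separately built models (an illustrative sketch, not Aristo's actual architecture): a thin controller routes text between components, and the chain answers questions none of the parts could handle alone.

    from typing import Callable

    Model = Callable[[str], str]   # any text-in/text-out component

    def chain(decompose: Model, solve: Model, combine: Model,
              question: str) -> str:
        # Split the complex question into simpler sub-questions, answer each
        # with a (possibly different) specialized model, then compose.
        subquestions = [s for s in decompose(question).splitlines() if s.strip()]
        partial_answers = [solve(sq) for sq in subquestions]
        return combine(question + "\n" + "\n".join(partial_answers))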

Universal Mathematical Reasoners

Creating models with built-in mathematical reasoning skills that can be rapidly fine-tuned for a wide variety of mathematical tasks.

  • Macaw
    A QA model that outperforms other popular language models while being an order of magnitude smaller

    Macaw is a high-performance question-answering (QA) model capable of outperforming other popular current language models, all while being an order of magnitude smaller. This demo allows you to explore Macaw's answers and compare them to those of the popular GPT-3 language model on a benchmark set of questions.

    Try the demo
  • ProofWriter
    Generating Implications, Proofs, and Abductive Statements over Natural Language

    Like RuleTaker, ProofWriter determines whether statements are True or False based on rules given in natural language, and it also generates the proofs of its answers (a toy illustration follows this list).

    Try the demo
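
    To make the task concrete, here is a toy symbolic analogue of rule-based proof generation (illustrative only: ProofWriter itself is a neural model that reads English facts and rules directly). Facts are closed under forward chaining, and each derived statement records the rule application that produced it.

    def prove_all(facts: set[str],
                  rules: list[tuple[list[str], str]]) -> dict[str, str]:
        # Forward chaining with proof tracking. Rules are (premises, conclusion);
        # the result maps each provable statement to a one-line proof.
        proofs = {f: f + " (given)" for f in facts}
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in proofs and all(p in proofs for p in premises):
                    proofs[conclusion] = (conclusion + " (from " +
                                          " & ".join(premises) + ")")
                    changed = True
        return proofs

    facts = {"the cat is blue"}
    rules = [(["the cat is blue"], "the cat is sad"),
             (["the cat is sad"], "the cat is lonely")]
    proofs = prove_all(facts, rules)
    print(proofs["the cat is lonely"])   # the cat is lonely (from the cat is sad)
    print("the cat is happy" in proofs)  # False, under a closed-world reading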
    • Self-Refine: Iterative Refinement with Self-Feedback

      Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, K. Hermann, S. Welleck, A. Yazdanbakhsh, Peter Clark. NeurIPS 2023.
      Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce Self-Refine, an approach for improving initial outputs from LLMs through iterative feedback… (A simplified sketch of the refinement loop follows this list.)
    • A Logic for Expressing Log-Precision Transformers

      William Merrill, Ashish Sabharwal. NeurIPS 2023.
      One way to interpret the reasoning power of transformer-based language models is to describe the types of logical rules they can resolve over some input text. Recently, Chiang et al. (2023) showed that finite-precision transformers can be equivalently…
    • How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources

      Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, Hanna Hajishirzi. NeurIPS 2023.
      In this work we explore recent advances in instruction-tuning language models on a range of open instruction-following datasets. Despite recent claims that open models can be on par with state-of-the-art proprietary models, these claims are often accompanied…
    • Editing Common Sense in Transformers

      Anshita Gupta*, Debanjan Mondal*, Akshay Krishna Sheshadri*, Wenlong Zhao, Xiang Lorraine Li*, Sarah Wiegreffe*, Niket Tandon*. EMNLP 2023.
      Editing model parameters directly in Transformers makes updating open-source transformer-based models possible without re-training. However, these editing methods have only been evaluated on statements about encyclopedic knowledge with a single correct answer…
    • Increasing Probability Mass on Answer Choices Does Not Always Improve Accuracy

      Sarah Wiegreffe, Matthew Finlayson, Oyvind Tafjord, Peter Clark, Ashish Sabharwal. EMNLP 2023.
      When pretrained language models (LMs) are applied to discriminative tasks such as multiple-choice questions, they place probability mass on vocabulary tokens that aren't among the given answer choices. Spreading probability mass across multiple surface forms…
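
    As noted in the Self-Refine entry above, a minimal sketch of its refinement loop, assuming a hypothetical llm callable and illustrative prompt strings: the same model drafts an answer, critiques it, and rewrites it until the critique finds nothing left to fix or the round budget runs out.

    from typing import Callable

    def self_refine(llm: Callable[[str], str], task: str,
                    max_rounds: int = 4) -> str:
        # Generate, critique, rewrite: the same model plays all three roles.
        output = llm("Task: " + task + "\nDraft an answer:")
        for _ in range(max_rounds):
            feedback = llm("Task: " + task + "\nAnswer: " + output +
                           "\nList concrete problems, or say DONE if none:")
            if "DONE" in feedback:
                break  # stopping condition: the critique reports no issues
            output = llm("Task: " + task + "\nAnswer: " + output +
                         "\nProblems: " + feedback + "\nRewrite, fixing them:")
        return output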

    IfQA Counterfactual Reasoning Benchmark

    3,800 open-domain questions designed to assess counterfactual reasoning abilities of NLP models

    Counterfactual reasoning benchmark introduced in the EMNLP 2023 paper "IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions".

    Digital Socrates

    DS Critique Bank contains annotated critiques of answers and explanations from "student" models.

    DS Critique Bank (DSCB) is a dataset of multiple-choice questions with associated answers and explanations provided by "student models", along with "critiques" of the explanations provided by "critique models". Many of the instances have human annotations.

    ParRoT (Parts and Relations of Things)

    11,720 “X relation Y?” True/False questions on parts of everyday things and relational information about these parts

    This is the dataset from the ACL 2023 paper "Do language models have coherent mental models of everyday things?".

    Belief and Reasoning Dataset

    BaRDa: A Belief and Reasoning Dataset that Separates Factual Accuracy and Reasoning Ability

    BaRDa is a new belief and reasoning dataset for evaluating the factual correctness ("truth") and reasoning accuracy ("rationality", or "honesty") of new language models. It was created in collaboration with, and with the support of, the Open Philanthropy organization.

    “Knowing is not enough, we must apply. Willing is not enough, we must do.”
    Johann Wolfgang von Goethe

    Persona-driven ChatGPT yields toxic, racist output

    TechXplore
    April 19, 2023
    Read the Article

    Changing ChatGPT's Persona Might Make It Malicious

    Digital Information World
    April 17, 2023
    Read the Article

    This AI Paper Shows How ChatGPT’s Toxicity Can Increase Up To Six-Fold When Assigned A Persona

    Marktechpost
    April 14, 2023
    Read the Article

    'They’re All So Dirty and Smelly:' Study Unlocks ChatGPT's Inner Racist

    Gizmodo
    April 13, 2023
    Read the Article

    New study reveals ChatGPT's inherent toxicity when assigned different personas

    Mashable Middle East
    April 13, 2023
    Read the Article

    ChatGPT can turn toxic just by changing its assigned persona, researchers say

    VentureBeat
    April 12, 2023
    Read the Article

    Researchers discover a way to make ChatGPT consistently toxic

    TechCrunch
    April 12, 2023
    Read the Article

    Researchers From Allen Institute for AI Introduce TeachMe: A Framework To Understand And Correct AI Models

    Marktechpost
    January 17, 2023
    Read the Article

    Team

    • Chris Callison-Burch, Research
    • Peter Clark, Research
    • Ben Bogin, Young Investigator
    • Bhavana Dalvi, Research
    • Yuling Gu, Predoctoral Young Investigator
    • Shashank Gupta, Research
    • Ashwin Kalyan, Research
    • Tushar Khot, Research
    • Bodhisattwa Prasad Majumder, Research
    • Kyle Richardson, Research
    • Ashish Sabharwal, Research
    • Oyvind Tafjord, Research
    • Niket Tandon, Research
    • Sarah Wiegreffe, Young Investigator