Aristo

Building the next generation of systems that can systematically reason, explain, and continually improve over time


[Diagram: an entailment tree linking a hypothesis to supporting text]
Our research includes pioneering work on:
  • Systematic reasoning and explanation
  • Teachable reasoning systems
  • Continual learning with memory-based architectures
  • Knowledge and belief
  • Universal mathematical reasoning

Research Areas

Teachable Reasoning Systems

By interacting with a system and giving feedback on its reasoning, a user can teach it so that it continually improves over time, without any model retraining.
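
A minimal sketch of this idea, assuming only a generic generate(prompt) language-model call (the class and helper names are illustrative, not Aristo's implementation):

    # Memory-based "teachable" QA loop: user corrections are stored and
    # retrieved for similar future questions; no model weights are updated.
    class TeachableQA:
        def __init__(self, generate):
            self.generate = generate      # callable: prompt str -> answer str
            self.memory = []              # list of (question, feedback) pairs

        def _relevant_feedback(self, question, k=3):
            # Naive word-overlap relevance; a real system would use dense
            # retrieval over the feedback memory.
            words = set(question.lower().split())
            ranked = sorted(self.memory,
                            key=lambda m: len(words & set(m[0].lower().split())),
                            reverse=True)
            return ranked[:k]

        def answer(self, question):
            # Prepending retrieved corrections lets the system improve on
            # similar questions without any retraining.
            notes = "\n".join(f"Earlier correction for '{q}': {fb}"
                              for q, fb in self._relevant_feedback(question))
            return self.generate(f"{notes}\nQuestion: {question}\nAnswer:")

        def teach(self, question, feedback):
            # The user critiques the system's reasoning; the critique is kept.
            self.memory.append((question, feedback))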

Modular Models

By learning to chain existing models together, a system can solve complex problems that are beyond the capabilities of any individual component.
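
For intuition, a sketch of such chaining, where decompose, answer_one, and recombine stand in for any three existing models (the function names are illustrative):

    # Chain existing models: split a hard question into sub-questions,
    # answer each with a simpler model, then combine the results.
    def solve(question, decompose, answer_one, recombine):
        sub_questions = decompose(question)                   # e.g. an LLM prompted to plan
        sub_answers = [answer_one(q) for q in sub_questions]  # e.g. a single-hop QA model
        return recombine(question, sub_questions, sub_answers)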

Universal Mathematical Reasoners

Creating models with built-in mathematical reasoning skills that can be rapidly fine-tuned for a wide variety of mathematical tasks.
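
As a rough sketch of such rapid adaptation (the base model and toy task below are illustrative, not the Aristo models), a pretrained seq2seq model can be fine-tuned on a handful of task examples:

    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tok = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
    optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

    examples = [("add: 17 + 25", "42"), ("add: 8 + 9", "17")]  # toy math task
    model.train()
    for question, answer in examples * 10:   # a few quick passes over the data
        batch = tok(question, return_tensors="pt")
        labels = tok(answer, return_tensors="pt").input_ids
        loss = model(**batch, labels=labels).loss
        loss.backward()
        optim.step()
        optim.zero_grad()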

Demos

  • Macaw
    A QA model that outperforms other popular language models while being an order of magnitude smaller

    Macaw is a high-performance question-answering (QA) model that outperforms other popular current language models while being an order of magnitude smaller. This demo lets you explore Macaw's answers and compare them to those of the popular GPT-3 language model on a benchmark set of questions. (A sketch of querying Macaw programmatically follows this list.)

    Try the demo
  • ProofWriter
    Generating Implications, Proofs, and Abductive Statements over Natural Language

    Like RuleTaker, ProofWriter determines whether statements are True or False based on rules given in natural language, but it also generates the proofs of its answers. (A toy illustration of this rule-plus-proof behavior follows this list.)

    Try the demo
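
As referenced above, a sketch of querying Macaw programmatically, assuming the publicly released allenai/macaw-large checkpoint and its slot-based input format:

    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tok = AutoTokenizer.from_pretrained("allenai/macaw-large")
    model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

    # Macaw prompts list the desired output slots, then the filled input slots.
    prompt = "$answer$ ; $question$ = What is the color of a cloudy sky?"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))  # e.g. "$answer$ = gray"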
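
And a toy illustration of the rule-plus-proof behavior ProofWriter exhibits; this naive forward-chainer is for intuition only, not the neural ProofWriter model:

    # Decide a statement's truth from natural-language-style rules and
    # record the proof, forward-chaining until no new facts appear.
    facts = {"the cat is nice"}
    rules = [("the cat is nice", "the cat is friendly"),
             ("the cat is friendly", "the cat is liked")]  # (if, then) pairs

    def prove(goal):
        proofs = {f: f"'{f}' is given" for f in facts}
        changed = True
        while changed:
            changed = False
            for cond, concl in rules:
                if cond in proofs and concl not in proofs:
                    proofs[concl] = (f"{proofs[cond]}; "
                                     f"'if {cond} then {concl}' gives '{concl}'")
                    changed = True
        return proofs.get(goal, f"'{goal}' is unprovable (assumed False)")

    print(prove("the cat is liked"))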

    Recent Papers

    • ADaPT: As-Needed Decomposition and Planning with Language Models

      Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, Tushar Khot. NAACL Findings, 2024. Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment. Recent works employ LLMs-as-agents in broadly two ways: iteratively determining the next action (iterative… (A sketch of the paper's as-needed decomposition strategy appears after this list.)
    • Leveraging Code to Improve In-context Learning for Semantic Parsing

      Ben Bogin, Shivanshu Gupta, Peter Clark, Ashish Sabharwal. NAACL, 2024. In-context learning (ICL) is an appealing approach for semantic parsing due to its few-shot nature and improved generalization. However, learning to parse to rare domain-specific languages (DSLs) from just a few demonstrations is challenging, limiting the…
    • QualEval: Qualitative Evaluation for Model Improvement

      Vishvak Murahari, Ameet Deshpande, Peter Clark, Tanmay Rajpurohit, Ashish Sabharwal, Karthik Narasimhan, Ashwin Kalyan. NAACL, 2024. Quantitative evaluation metrics have traditionally been pivotal in gauging the advancements of artificial intelligence systems, including large language models (LLMs). However, these metrics have inherent limitations. Given the intricate nature of real-world…
    • Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs

      Shashank Gupta, Vaishnavi Shrivastava, Ameet Deshpande, Ashwin Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot. ICLR, 2024. Recent works have showcased the ability of LLMs to embody diverse personas in their responses, exemplified by prompts like 'You are Yoda. Explain the Theory of Relativity.' While this ability allows personalization of LLMs and enables human behavior…
    • The Expressive Power of Transformers with Chain of Thought

      William Merrill, Ashish Sabharwal. ICLR, 2024. Recent theoretical work has identified surprisingly simple reasoning problems, such as checking if two nodes in a graph are connected or simulating finite-state machines, that are provably unsolvable by standard transformers that answer immediately after…
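
    As noted above, a sketch of the as-needed decomposition strategy described in the ADaPT abstract; execute and plan stand in for the paper's LLM-as-executor and LLM-as-planner calls, and the names are illustrative:

        # Try to execute a task directly; only on failure, plan sub-tasks
        # and recurse, decomposing exactly as much as needed.
        def adapt(task, execute, plan, depth=3):
            if execute(task):          # the executor attempts the whole task
                return True
            if depth == 0:             # give up beyond a fixed recursion budget
                return False
            return all(adapt(sub, execute, plan, depth - 1)
                       for sub in plan(task))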

    Datasets

    IfQA Counterfactual Reasoning Benchmark

    3,800 open-domain questions designed to assess counterfactual reasoning abilities of NLP models

    Counterfactual reasoning benchmark introduced in the EMNLP-2023 paper titled "IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions".

    Digital Socrates

    DS Critique Bank contains annotated critiques of answers and explanations from "student" models.

    DS Critique Bank (DSCB) is a dataset of multiple-choice questions with associated answers and explanations provided by "student models", along with "critiques" of the explanations provided by "critique models". Many of the instances have human annotations.

    ParRoT (Parts and Relations of Things)

    11,720 “X relation Y?” True/False questions on parts of everyday things and relational information about these parts

    This is the dataset in "Do language models have coherent mental models of everyday things?", ACL 2023.

    Belief and Reasoning Dataset

    BaRDa: A Belief and Reasoning Dataset that Separates Factual Accuracy and Reasoning Ability

    BaRDa is a new belief and reasoning dataset for evaluating the factual correctness ("truth") and reasoning accuracy ("rationality", or "honesty") of new language models. It was created in collaboration with, and with the support of, the Open Philanthropy organization.

    “Knowing is not enough, we must apply. Willing is not enough, we must do.”
    Johann Wolfgang von Goethe

    In the News

    How Chain-of-Thought Reasoning Helps Neural Networks Compute

    Quanta Magazine
    March 21, 2024
    Read the Article

    Persona-driven ChatGPT yields toxic, racist output

    TechXplore
    April 19, 2023
    Read the Article

    Changing ChatGPT's Persona Might Make It Malicious

    Digital Information World
    April 17, 2023
    Read the Article

    This AI Paper Shows How ChatGPT’s Toxicity Can Increase Up To Six-Fold When Assigned A Persona

    Marktechpost
    April 14, 2023
    Read the Article

    'They’re All So Dirty and Smelly:' Study Unlocks ChatGPT's Inner Racist

    Gizmodo
    April 13, 2023
    Read the Article

    New study reveals ChatGPT's inherent toxicity when assigned different personas

    Mashable Middle East
    April 13, 2023
    Read the Article

    Researchers discover a way to make ChatGPT consistently toxic

    TechCrunch
    April 12, 2023
    Read the Article

    ChatGPT can turn toxic just by changing its assigned persona, researchers say

    VentureBeat
    April 12, 2023
    Read the Article

    Team

    • Chris Callison-Burch, Research
    • Peter Clark, Research
    • Ben Bogin, Young Investigator
    • Bhavana Dalvi, Research
    • Yuling Gu, Predoctoral Young Investigator
    • Shashank Gupta, Research
    • Ashwin Kalyan, Research
    • Tushar Khot, Research
    • Bodhisattwa Prasad Majumder, Research
    • Kyle Richardson, Research
    • Ashish Sabharwal, Research
    • Oyvind Tafjord, Research
    • Niket Tandon, Research
    • Sarah Wiegreffe, Young Investigator