Aristo
Building the next generation of systems that can systematically reason, explain, and continually improve over time
- Systematic reasoning and explanation
- Teachable reasoning systems
- Continual learning with memory-based architectures
- Knowledge and belief
- Universal mathematical reasoning
Recent Updates
Towards Teachable Reasoning Systems
April 27, 2022
This paper describes our work towards Teachable Reasoning Systems. First, EntailmentWriter searches for a chain of reasoning from facts it believes…
Memory-assisted prompt editing to improve GPT-3 after deployment
April 20, 2022
Large LMs such as GPT-3 are powerful, but can commit mistakes that are obvious to humans. Memory-assisted prompt editing allows users to give…
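The idea admits a compact sketch: store each piece of user feedback keyed by the question that prompted it, then retrieve and prepend the most similar stored feedback when a related question arrives. A minimal illustration, assuming a naive string-similarity lookup and an invented prompt layout (the released system uses learned retrieval and a richer feedback format):

```python
# Minimal sketch of memory-assisted prompt editing (simplified;
# the real system uses learned retrieval, not string similarity).
from difflib import SequenceMatcher

memory: list[tuple[str, str]] = []  # (question, user feedback) pairs

def remember(question: str, feedback: str) -> None:
    """Store user feedback about a mistake on this question."""
    memory.append((question, feedback))

def edit_prompt(question: str, threshold: float = 0.6) -> str:
    """Prepend the most similar stored feedback, if any, to the prompt."""
    best = max(
        memory,
        key=lambda qf: SequenceMatcher(None, qf[0], question).ratio(),
        default=None,
    )
    if best and SequenceMatcher(None, best[0], question).ratio() >= threshold:
        return f"Hint: {best[1]}\nQuestion: {question}"
    return f"Question: {question}"

remember("What sounds like 'sighted'?",
         "When I ask for a word that sounds like another, I mean a homophone.")
print(edit_prompt("What sounds like 'bored'?"))  # hint gets prepended
```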
DREAM: Improving Situational QA by First Elaborating the Situation
March 1, 2022
When people answer questions about a specific situation, e.g., "I cheated on my mid-term exam last week. Was that wrong?", cognitive science suggests…
Explaining Answers with Entailment Trees
November 1, 2021
EntailmentBank is a unique dataset of multi-step entailment trees. Each tree shows how known facts combine to entail the answer to a question. From…
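Since each tree composes known facts into intermediate conclusions that entail a hypothesis, its shape is easy to mimic in code. A toy rendering, with illustrative field names rather than EntailmentBank's actual JSON schema:

```python
# Toy entailment tree: leaves are known facts; internal nodes are
# intermediate conclusions entailed by their children. Field names
# are illustrative, not EntailmentBank's released schema.
from dataclasses import dataclass, field

@dataclass
class Node:
    statement: str
    children: list["Node"] = field(default_factory=list)

def print_tree(node: Node, depth: int = 0) -> None:
    print("  " * depth + node.statement)
    for child in node.children:
        print_tree(child, depth + 1)

tree = Node(
    "days are longest in summer in the northern hemisphere",
    [
        Node("the northern hemisphere tilts toward the sun in summer"),
        Node(
            "tilting toward the sun increases daylight hours",
            [Node("the sun provides daylight"),
             Node("tilt changes exposure to the sun")],
        ),
    ],
)
print_tree(tree)
```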
BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
November 1, 2021
Although pretrained language models (PTLMs) contain significant amounts of world knowledge, they can still produce inconsistent answers to questions…
Research Areas
Teachable Reasoning Systems
By interacting with and giving feedback on a system’s reasoning, a user can teach the system so it continually improves over time – without model retraining.
Neuro-Symbolic Reasoning and Explanation
Solving problems by generating consistent, faithful chains of reasoning using neural components.
Modular Models
By learning to chain together existing models, complex problems can be solved, beyond the capabilities of the individual components.
Universal Mathematical Reasoners
Creating models with built-in mathematical reasoning skills that can be rapidly fine-tuned for a wide variety of mathematical tasks.
Macaw is a high-performance question-answering (QA) model that outperforms other popular language models while being an order of magnitude smaller. This demo lets you explore Macaw's answers and compare them to those of the popular GPT-3 language model on a benchmark set of questions.
Try the demo
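The released Macaw checkpoints (e.g., allenai/macaw-large on Hugging Face) let you approximate the demo's behavior locally. A minimal sketch, assuming the slot-based input format documented in the public Macaw repository; verify both the model name and the format against that repo:

```python
# Querying Macaw via Hugging Face transformers (sketch; slot format
# and model name follow the public allenai/macaw release).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

# Macaw uses "slots": here we ask for an answer given a question.
prompt = "$answer$ ; $question$ = What gas do plants produce in photosynthesis?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected output shape: "$answer$ = oxygen"
```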
Like RuleTaker, ProofWriter determines whether statements are True or False based on rules given in natural language, but it also generates the proof of its answers.
Try the demo
Recent Papers
Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs
Shashank Gupta, Vaishnavi Shrivastava, A. Deshpande, A. Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot
ICLR • 2024
Recent works have showcased the ability of LLMs to embody diverse personas in their responses, exemplified by prompts like 'You are Yoda. Explain the Theory of Relativity.' While this ability allows personalization of LLMs and enables human behavior…
The Expressive Power of Transformers with Chain of Thought
William Merrill, Ashish Sabharwal
ICLR • 2024
Recent theoretical work has identified surprisingly simple reasoning problems, such as checking if two nodes in a graph are connected or simulating finite-state machines, that are provably unsolvable by standard transformers that answer immediately after…
Closing the Curious Case of Neural Text Degeneration
Matthew Finlayson, John Hewitt, Alexander Koller, Swabha Swayamdipta, Ashish Sabharwal
ICLR • 2024
Despite their ubiquity in language generation, it remains unknown why truncation sampling heuristics like nucleus sampling are so effective. We provide a theoretical explanation for the effectiveness of the truncation sampling by proving that truncation…
Calibrating Large Language Models with Sample Consistency
Qing Lyu, Kumar Shridhar, Chaitanya Malaviya, Li Zhang, Yanai Elazar, Niket Tandon, Marianna Apidianaki, Mrinmaya Sachan, Chris Callison-Burch
arXiv • 2024
Accurately gauging the confidence level of Large Language Models' (LLMs) predictions is pivotal for their reliable application. However, LLMs are often uncalibrated inherently and elude conventional calibration techniques due to their proprietary nature and…
TimeArena: Shaping Efficient Multitasking Language Agents in a Time-Aware Simulation
Yikai Zhang, Siyu Yuan, Caiyu Hu, Kyle Richardson, Yanghua Xiao, Jiangjie Chen
arXiv • 2024
Despite remarkable advancements in emulating human-like behavior through Large Language Models (LLMs), current textual simulations do not adequately address the notion of time. To this end, we introduce TimeArena, a novel textual simulated environment that…
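The premise of "Calibrating Large Language Models with Sample Consistency" above admits a compact illustration: sample several answers to the same question and read confidence off their agreement. A minimal sketch, assuming simple majority-vote agreement (one of several consistency measures one could use, not the paper's exact method):

```python
# Consistency-based confidence: the fraction of sampled answers that
# agree with the majority answer serves as a confidence estimate.
from collections import Counter

def sample_consistency_confidence(sampled_answers: list[str]) -> tuple[str, float]:
    """Return the majority answer and the fraction of samples agreeing with it."""
    counts = Counter(sampled_answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(sampled_answers)

# E.g., ten sampled runs of a model on one question:
answer, confidence = sample_consistency_confidence(
    ["42", "42", "41", "42", "42", "42", "43", "42", "42", "42"]
)
print(answer, confidence)  # -> 42 0.8
```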
Recent Datasets
IfQA Counterfactual Reasoning Benchmark
3,800 open-domain questions designed to assess counterfactual reasoning abilities of NLP models
Counterfactual reasoning benchmark introduced in the EMNLP-2023 paper titled "IfQA: A Dataset for Open-domain Question Answering under Counterfactual Presuppositions".
Digital Socrates
DS Critique Bank contains annotated critiques of answers and explanations from "student" models.
DS Critique Bank (DSCB) is a dataset of multiple-choice questions with associated answers and explanations provided by "student models", along with "critiques" of the explanations provided by "critique models". Many of the instances have human annotations.
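From that description, a single DSCB-style instance can be pictured as a record like the following (field names are hypothetical, chosen to mirror the prose rather than the released schema):

```python
# Hypothetical DS Critique Bank record; field names are illustrative,
# not the dataset's actual schema.
instance = {
    "question": "Which property of a mineral can be determined just by looking at it?",
    "choices": ["luster", "mass", "weight", "hardness"],
    "student_model": "some-student-model",       # placeholder name
    "student_answer": "luster",
    "student_explanation": "Luster is how a mineral reflects light, which is visible.",
    "critique_model": "some-critique-model",     # placeholder name
    "critique": "The explanation is correct and directly supports the answer.",
    "human_annotation": "agree",                 # present on many instances
}
```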
ParRoT (Parts and Relations of Things)
11,720 “X relation Y?” True/False questions on parts of everyday things and relational information about these parts
This is the dataset from the ACL 2023 paper "Do language models have coherent mental models of everyday things?".
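Given the "X relation Y?" format, a single instance reduces to a query about two parts plus a True/False label. An illustrative example (values made up, not copied from the released file):

```python
# Illustrative ParRoT-style instance: a True/False question about the
# relation between two parts of an everyday thing.
example = {
    "object": "flashlight",
    "question": "Is the bulb of a flashlight connected to its battery?",
    "label": True,
}
```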
Belief and Reasoning Dataset
BaRDa: A Belief and Reasoning Dataset that Separates Factual Accuracy and Reasoning Ability
BaRDa is a new belief and reasoning dataset for evaluating the factual correctness ("truth") and reasoning accuracy ("rationality", or "honesty") of new language models. It was created in collaboration with, and with the support of, the Open Philanthropy organization.
Recent Press
Persona-driven ChatGPT yields toxic, racist output
April 19, 2023
Changing ChatGPT's Persona Might Make It Malicious
April 17, 2023
This AI Paper Shows How ChatGPT’s Toxicity Can Increase Up To Six-Fold When Assigned A Persona
April 14, 2023
'They’re All So Dirty and Smelly:' Study Unlocks ChatGPT's Inner Racist
April 13, 2023
New study reveals ChatGPT's inherent toxicity when assigned different personas
April 13, 2023
ChatGPT can turn toxic just by changing its assigned persona, researchers say
April 12, 2023
Researchers discover a way to make ChatGPT consistently toxic
April 12, 2023
Researchers From Allen Institute for AI Introduce TeachMe: A Framework To Understand And Correct AI Models
January 17, 2023
Team
- Chris Callison-Burch (Research)
- Peter Clark (Research)
- Ben Bogin (Young Investigator)
- Bhavana Dalvi (Research)
- Yuling Gu (Predoctoral Young Investigator)
- Shashank Gupta (Research)
- Ashwin Kalyan (Research)
- Tushar Khot (Research)
- Bodhisattwa Prasad Majumder (Research)
- Kyle Richardson (Research)
- Ashish Sabharwal (Research)
- Oyvind Tafjord (Research)
- Niket Tandon (Research)
- Sarah Wiegreffe (Young Investigator)