Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Prompt Waywardness: The Curious Case of Discretized Interpretation of Continuous Prompts

Daniel Khashabi, Shan Lyu, Sewon Min, Yejin Choi
2022
NAACL

Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of… 

DREAM: Improving Situational QA by First Elaborating the Situation

Yuling Gu, Bhavana Dalvi Mishra, Peter Clark
2021
NAACL

When people answer questions about a specific situation, e.g., "I cheated on my mid-term exam last week. Was that wrong?", cognitive science suggests that they form a mental picture of that… 

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief

Nora Kassner, Oyvind Tafjord, H. Schütze, P. Clark
2021
EMNLP

Although pretrained language models (PTLMs) have been shown to contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after using… 

Explaining Answers with Entailment Trees

Bhavana Dalvi, Peter A. Jansen, Oyvind Tafjord, Peter Clark
2021
EMNLP

Our goal, in the context of open-domain textual question-answering (QA), is to explain answers by not just listing supporting textual evidence (“rationales”), but also showing how such evidence… 

GooAQ: Open Question Answering with Diverse Answer Types

Daniel Khashabi, Amos Ng, Tushar Khot, Chris Callison-Burch
2021
Findings of EMNLP

While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. To this end, we… 

How Much Coffee Was Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge for AI

A. Kalyan, Abhinav Kumar, Arjun Chandrasekaran, Peter Clark
2021
EMNLP

Many real-world problems require the combined application of multiple reasoning abilities employing suitable abstractions, commonsense knowledge, and creative synthesis of problem-solving… 

proScript: Partially Ordered Scripts Generation

Keisuke Sakaguchi, Chandra Bhagavatula, Ronan Le Bras, Yejin Choi
2021
Findings of EMNLP

Scripts, standardized event sequences describing typical everyday activities, have been shown to help understand narratives by providing expectations, resolving ambiguity, and filling in unstated… 

Think about it! Improving defeasible reasoning by first modeling the question scenario

Aman Madaan, Niket Tandon, Dheeraj Rajagopal, E. Hovy
2021
EMNLP

Defeasible reasoning is the mode of reasoning where conclusions can be overturned by taking into account new evidence. Existing cognitive science literature on defeasible reasoning suggests that a… 

Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?

Jieyu Zhao, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Kai-Wei Chang
2021
ACL-IJCNLP

Is it possible to use natural language to intervene in a model’s behavior and alter its prediction in a desired way? We investigate the effectiveness of natural language interventions for… 

Investigating Transfer Learning in Multilingual Pre-trained Language Models through Chinese Natural Language Inference

Hai Hu, He Zhou, Zuoyu Tian, Kyle Richardson
2021
Findings of ACL

Multilingual transformers (XLM, mT5) have been shown to have remarkable transfer skills in zero-shot settings. Most transfer studies, however, rely on automatically translated resources (XNLI,…