Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker

Melanie Sclar, Sachin Kumar, Peter West, Yulia Tsvetkov
2023
ACL

Theory of Mind (ToM), the ability to reason about the mental states of other people, is a key element of our social intelligence. Yet, despite their ever more… 

LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization

Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Kyle Lo
2023
EACL

While human evaluation remains best practice for accurately judging the faithfulness of automatically-generated summaries, few solutions exist to address the increased difficulty and workload when… 

CiteSee: Augmenting Citations in Scientific Papers with Persistent and Personalized Historical Context

Joseph Chee Chang, Amy X. Zhang, Jonathan Bragg, Daniel S. Weld
2023
CHI

When reading a scholarly article, inline citations help researchers contextualize the current article and discover relevant prior work. However, it can be challenging to prioritize and make sense of… 

Queer In AI: A Case Study in Community-Led Participatory AI

Organizers of Queer in AI, Anaelia Ovalle, Arjun Subramonian, Luke Stark
2023
FAccT

We present Queer in AI as a case study for community-led participatory design in AI. We examine how participatory design and intersectional tenets started and shaped this community's programs over… 

Abstract Visual Reasoning with Tangram Shapes

Anya Ji, Noriyuki Kojima, N. Rush, Yoav Artzi
2022
EMNLP

We introduce KiloGram, a resource for studying abstract visual reasoning in humans and machines. Drawing on the history of tangram puzzles as stimuli in cognitive science, we build a richly… 

CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation

Abhilasha Ravichander, Matt Gardner, Ana Marasović
2022
EMNLP

The full power of human language-based communication cannot be realized without negation. All human languages have some form of negation. Despite this, negation remains a challenging phenomenon for… 

ProcTHOR: Large-Scale Embodied AI Using Procedural Generation

Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Roozbeh Mottaghi
2022
NeurIPS

Massive datasets and high-capacity models have driven many recent advancements in computer vision and natural language understanding. This work presents a platform to enable similar success stories… 

Robust fine-tuning of zero-shot models

Mitchell Wortsman, Gabriel Ilharco, Mike Li, Ludwig Schmidt
2022
CVPR

Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific dataset).… 

NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics

Ximing Lu, S. Welleck, Peter West, Yejin Choi
2022
NAACL

The dominant paradigm for neural text generation is left-to-right decoding from autoregressive language models. Constrained or controllable generation under complex lexical constraints, however,… 

Understanding Dataset Difficulty with 𝒱-Usable Information

Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta
2022
ICML

Estimating the difficulty of a dataset typically involves comparing state-of-the-art models to humans; the bigger the performance gap, the harder the dataset is said to be. However, this comparison…