Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning

Jack Hessel, Jena D. Hwang, Jae Sung Park, Yejin Choi
2022
ECCV

Humans have remarkable capacity to reason abductively and hypothesize about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we… 

A Dataset of Alt Texts from HCI Publications

Sanjana Chintalapati, Jonathan Bragg, Lucy Lu Wang
2022
ASSETS

Figures in scientific publications contain important information and results, and alt text is needed for blind and low vision readers to engage with their content. We conduct a study to characterize… 

NeuroCounterfactuals: Beyond Minimal-Edit Counterfactuals for Richer Data Augmentation

Phillip Howard, Gadi Singer, Vasudev Lal, Swabha Swayamdipta
2022
EMNLP

While counterfactual data augmentation offers a promising step towards robust generalization in natural language processing, producing a set of counterfactuals that offer valuable inductive bias for… 

Rainier: Reinforced Knowledge Introspector for Commonsense Question Answering

Jiacheng Liu, Skyler Hallinan, Ximing Lu, Yejin Choi
2022
EMNLP

Knowledge underpins reasoning. Recent research demonstrates that when relevant knowledge is provided as additional context to commonsense question answering (QA), it can substantially enhance the… 

Towards Disturbance-Free Visual Mobile Manipulation

Tianwei Ni, Kiana Ehsani, Luca Weihs, Jordi Salvador
2022
arXiv

Deep reinforcement learning has shown promising results on an abundance of robotic tasks in simulation, including visual navigation and manipulation. Prior work generally aims to build embodied… 

Benchmarking Progress to Infant-Level Physical Reasoning in AI

Luca Weihs, Amanda Rose Yuile, Renée Baillargeon, Aniruddha Kembhavi
2022
TMLR

To what extent do modern AI systems comprehend the physical world? We introduce the open-access Infant-Level Physical Reasoning Benchmark (InfLevel) to gain insight into this question. We evaluate… 

Transparency Helps Reveal When Language Models Learn Meaning

Zhaofeng Wu, Will Merrill, Hao Peng, Noah A. Smith
2022
arXiv

Many current NLP systems are built from language models trained to optimize unsupervised objectives on large amounts of raw text. Under what conditions might such a procedure acquire meaning? Our… 

REV: Information-Theoretic Evaluation of Free-Text Rationales

Hanjie Chen, Faeze Brahman, Xiang Ren, Swabha Swayamdipta
2022
arXiv

Future work might explore evaluation that penalizes rationales which support incorrect predictions, thus bridging together predictive performance with interpretability metrics.

Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization

R. Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Yejin Choi
2022
arXiv

We tackle the problem of aligning pre-trained large language models (LMs) with human preferences. If we view text generation as a sequential decision-making problem, reinforcement learning (RL)… 

Machine-learned climate model corrections from a global storm-resolving model: Performance across the annual cycle

Anna Kwa, Spencer K. Clark, Brian Henn, Christopher S. Bretherton
2022
ESSOAr

One approach to improving the accuracy of a coarse-grid global climate model is to add machine-learned state-dependent corrections to the prognosed model tendencies, such that the climate model…