
Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Time Waits for No One! Analysis and Challenges of Temporal Misalignment

Kelvin Luu, Daniel Khashabi, Suchin Gururangan, Noah A. Smith
2022
NAACL

When an NLP model is trained on text data from one time period and tested or deployed on data from another, the resulting temporal misalignment can degrade end-task performance. In this work, we… 

NeuroLogic A*esque Decoding: Constrained Text Generation with Lookahead Heuristics

Ximing Lu, S. Welleck, Peter West, Yejin Choi
2022
NAACL

The dominant paradigm for neural text generation is left-to-right decoding from autoregressive language models. Constrained or controllable generation under complex lexical constraints, however,… 

Few-Shot Self-Rationalization with Natural Language Prompts

Ana Marasović, Iz Beltagy, Doug Downey, Matthew E. Peters
2022
Findings of NAACL

Self-rationalization models that predict task labels and generate free-text elaborations for their predictions could enable more intuitive interaction with NLP systems. These models are, however,… 

Transparent Human Evaluation for Image Captioning

Jungo Kasai, Keisuke Sakaguchi, Lavinia Dunagan, Noah A. Smith
2022
NAACL

We establish a rubric-based human evaluation protocol for image captioning models. Our scoring rubrics and their definitions are carefully developed based on machine- and human-generated captions on… 

Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand

Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Noah A. Smith
2022
NAACL

Natural language processing researchers have identified limitations of evaluation methodology for generation tasks, with new questions raised about the validity of automatic metrics and of… 

Symbolic Knowledge Distillation: from General Language Models to Commonsense Models

Peter West, Chandrasekhar Bhagavatula, Jack Hessel, Yejin Choi
2022
NAACL

The common practice for training commonsense models has gone from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we… 

A Dataset for N-ary Relation Extraction of Drug Combinations

Aryeh Tiktinsky, Vijay Viswanathan, Danna Niezni, Yoav Goldberg
2022
NAACL

Combination therapies have become the standard of care for diseases such as cancer, tuberculosis, malaria and HIV. However, the combinatorial set of available multi-drug treatments creates a… 

Connecting the Dots between Audio and Text without Parallel Data through Visual Knowledge Transfer

Yanpeng Zhao, Jack Hessel, Youngjae Yu, Yejin Choi
2022
NAACL

Machines that can represent and describe environmental soundscapes have practical potential, e.g., for audio tagging and captioning. Prevailing learning paradigms of audio-text connections have… 

Reframing Human-AI Collaboration for Generating Free-Text Explanations

Sarah Wiegreffe, Jack Hessel, Swabha Swayamdipta, Yejin Choi
2022
NAACL

Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions?… 

Weakly Supervised Text-to-SQL Parsing through Question Decomposition

Tomer Wolfson, Daniel Deutch, Jonathan Berant
2022
Findings of NAACL

Text-to-SQL parsers are crucial in enabling non-experts to effortlessly query relational data. Training such parsers, however, generally requires expertise in annotating natural language (NL)…