Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Modeling Context With Linear Attention for Scalable Document-Level Translation

Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith
2022
Findings of EMNLP

Document-level machine translation leverages inter-sentence dependencies to produce more coherent and consistent translations. However, these models, predominantly based on transformers, are… 
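
As background for the "linear attention" in this paper's title, here is a generic sketch of kernelized linear attention in the style of Katharopoulos et al. (2020), which swaps the softmax for a feature map so cost grows linearly in sequence length. This is illustrative background, not the paper's exact model; the function name and feature map are assumptions.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized (non-causal) linear attention sketch.

    Q, K, V: (n, d) arrays. The elu(x) + 1 feature map lets attention
    factorize as phi(Q) @ (phi(K).T @ V), costing O(n * d^2) rather
    than the O(n^2 * d) of softmax attention.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                  # (d, d) key-value summary
    z = Qf @ Kf.sum(axis=0) + eps  # (n,) per-query normalizer
    return (Qf @ kv) / z[:, None]

# Example: 6 token vectors of dimension 4.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(6, 4)) for _ in range(3))
print(linear_attention(Q, K, V).shape)  # (6, 4)
```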

Lexical Generalization Improves with Larger Models and Longer Training

Elron Bandel, Yoav Goldberg, Yanai Elazar
2022
Findings of EMNLP

While fine-tuned language models perform well on many tasks, they were also shown to rely on superficial surface features such as lexical overlap. Excessive utilization of such heuristics can lead to… 

Towards Teachable Reasoning Systems: Using a Dynamic Memory of User Feedback for Continual System Improvement

Bhavana Dalvi Mishra, Oyvind Tafjord, Peter Clark
2022
EMNLP

Our goal is a teachable reasoning system for question-answering (QA), where a user can interact with faithful answer explanations, and correct its errors so that the system improves over time. Our… 

Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning

Oyvind Tafjord, Bhavana Dalvi Mishra, Peter Clark
2022
EMNLP

Our goal is a question-answering (QA) system that can show how its answers are implied by its own internal beliefs via a systematic chain of reasoning. Such a capability would allow better…

Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection

Suchin Gururangan, Dallas Card, Sarah K. Dreier, Noah A. Smith
2022
EMNLP

Language models increasingly rely on massive web dumps for diverse text data. However, these sources are rife with undesirable content. As such, resources like Wikipedia, books, and news often… 

UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models

Tianbao Xie, Chen Henry Wu, Peng Shi, Tao Yu
2022
EMNLP

Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases. Since the inputs… 
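
The unifying idea is that structured inputs can be linearized into plain text so a single text-to-text model can handle every SKG task. Below is a minimal, hypothetical sketch of such linearization; the serialization format (header and row markers) is illustrative, not UnifiedSKG's exact scheme.

```python
def linearize_table(question: str, table: dict) -> str:
    """Flatten a question plus a table into one text-to-text input.

    `table` maps column names to lists of cell values. The layout
    below is a hypothetical serialization for illustration only.
    """
    header = " | ".join(table.keys())
    rows = " ; ".join(
        " | ".join(str(v) for v in row) for row in zip(*table.values())
    )
    return f"question: {question} table: {header} rows: {rows}"

# Example usage.
src = linearize_table(
    "Which city has the larger population?",
    {"city": ["Seattle", "Boston"], "population": [733919, 675647]},
)
# -> "question: Which city has the larger population?
#     table: city | population rows: Seattle | 733919 ; Boston | 675647"
```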

Twist Decoding: Diverse Generators Guide Each Other

Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Noah A. Smith
2022
EMNLP

Natural language generation technology has recently seen remarkable progress with large-scale training, and many natural language applications are now built upon a wide range of generation models…

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks

Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Daniel Khashabi
2022
EMNLP

How well can NLP models generalize to a variety of unseen tasks when provided with task instructions? To address this question, we first introduce Super-NaturalInstructions, a benchmark of 1,616…

GENIE: Toward Reproducible and Standardized Human Evaluation for Text Generation

Daniel Khashabi, Gabriel Stanovsky, Jonathan Bragg, Daniel S. Weld
2022
EMNLP

While often assumed a gold standard, effective human evaluation of text generation remains an important, open area for research. We revisit this problem with a focus on producing consistent…

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Michael Hassid, Hao Peng, Daniel Rotem, Roy Schwartz
2022
Findings of EMNLP

The attention mechanism is considered the backbone of the widely-used Transformer architecture. It contextualizes the input by computing input-specific attention matrices. We find that this…
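
For readers unfamiliar with the "input-specific attention matrices" this abstract refers to, here is a minimal NumPy sketch of standard scaled dot-product attention; the shapes and names are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.

    Q, K, V: (seq_len, d) arrays. The (seq_len, seq_len) matrix A
    is the input-specific attention matrix the abstract mentions:
    it is recomputed for every input sequence.
    """
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))  # attention weights
    return A @ V

# Example: 4 tokens with 8-dimensional representations.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```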