Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Competency Problems: On Finding and Removing Artifacts in Language Data

Matt Gardner, William Cooper Merrill, Jesse Dodge, Noah A. Smith
2021
EMNLP

Much recent work in NLP has documented dataset artifacts, bias, and spurious correlations between input features and output labels. However, how to tell which features have “spurious” instead of… 

Expected Validation Performance and Estimation of a Random Variable's Maximum

Jesse Dodge, Suchin Gururangan, D. Card, Noah A. Smith
2021
Findings of EMNLP

Research in NLP is often supported by experimental results, and improved reporting of such results can lead to better understanding and more reproducible science. In this paper we analyze three… 

COVR: A test-bed for Visually Grounded Compositional Generalization with real images

Ben Bogin, Shivanshu Gupta, Matt Gardner, Jonathan Berant
2021
EMNLP

While interest in models that generalize at test time to new compositions has risen in recent years, benchmarks in the visually-grounded domain have thus far been restricted to synthetic images. In… 

Conversational Multi-Hop Reasoning with Neural Commonsense Knowledge and Symbolic Logic Rules

Forough Arabshahi, Jennifer Lee, A. Bosselut, Tom Mitchell
2021
EMNLP

One of the challenges faced by conversational agents is their inability to identify unstated presumptions of their users’ commands, a task trivial for humans due to their common sense. In this… 

CLIPScore: A Reference-free Evaluation Metric for Image Captioning

Jack Hessel, Ari Holtzman, Maxwell Forbes, Yejin Choi
2021
EMNLP

Image captioning has conventionally relied on reference-based automatic evaluations, where machine captions are compared against captions written by humans. This is in contrast to the reference-free… 

It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT

Hila Gonen, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg
2020
EMNLP • BlackboxNLP Workshop

Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations that allow for transfer across languages. We study the word-level translation information… 

Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation

Atticus Geiger, Kyle Richardson, Christopher Potts
2020
EMNLP • BlackboxNLP Workshop

We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation… 

Unsupervised Distillation of Syntactic Information from Contextualized Word Representations

Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg
2020
EMNLP • BlackboxNLP Workshop

Contextualized word representations, such as ELMo and BERT, have been shown to perform well on various semantic and syntactic tasks. In this work, we tackle the task of unsupervised disentanglement… 

Document-Level Definition Detection in Scholarly Documents: Existing Models, Error Analyses, and Future Directions

Dongyeop Kang, Andrew Head, Risham Sidhu, Marti A. Hearst
2020
EMNLP • SDP workshop

The task of definition detection is important for scholarly papers, because papers often make use of technical terminology that may be unfamiliar to readers. Despite prior work on definition… 

PySBD: Pragmatic Sentence Boundary Disambiguation

Nipun Sadvilkar, M. Neumann
2020
EMNLP • NLP-OSS Workshop

In this paper, we present a rule-based sentence boundary disambiguation Python package that works out-of-the-box for 22 languages. We aim to provide a realistic segmenter which can provide logical…
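The core idea of rule-based sentence boundary disambiguation can be illustrated with a toy sketch (this is an illustration of the general approach, not PySBD's actual rules or API): candidate boundaries at sentence-final punctuation are rejected when they follow a known abbreviation.

```python
import re

# Toy rule-based sentence boundary disambiguation (illustration only;
# not PySBD's implementation). A period is treated as a boundary unless
# the preceding token is a known abbreviation.
ABBREVIATIONS = {"dr", "mr", "mrs", "ms", "prof", "etc", "vs", "no", "p"}

def segment(text):
    sentences, start = [], 0
    for match in re.finditer(r"[.!?]", text):
        prefix = text[start:match.start()].strip()
        last_word = prefix.rsplit(None, 1)[-1].lower() if prefix else ""
        # Suppress the boundary after abbreviations like "Dr." or "p."
        if match.group() == "." and last_word in ABBREVIATIONS:
            continue
        sentences.append(text[start:match.end()].strip())
        start = match.end()
    tail = text[start:].strip()
    if tail:
        sentences.append(tail)
    return sentences
```

For example, `segment("Please turn to p. 55. Dr. Smith agreed.")` keeps "p." and "Dr." inside their sentences and yields two segments. A production segmenter like PySBD layers many more language-specific rules (ellipses, numbered lists, quotations) on top of this basic pattern.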