Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Green AI

Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni
2020
CACM

The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018 [2]. These computations have a surprisingly… 

From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the Aristo Project

Peter Clark, Oren Etzioni, Daniel Khashabi, Michael Schmitz
2020
AI Magazine

AI has achieved remarkable mastery over games such as Chess, Go, and Poker, and even Jeopardy!, but the rich variety of standardized exams has remained a landmark challenge. Even in 2016, the best… 

Do Neural Language Models Overcome Reporting Bias?

Vered Shwartz, Yejin Choi
2020
Proceedings of the 28th International Conference on Computational Linguistics

Mining commonsense knowledge from corpora suffers from reporting bias, over-representing the rare at the expense of the trivial (Gordon and Van Durme, 2013). We study to what extent pre-trained… 

Mitigating Biases in CORD-19 for Analyzing COVID-19 Literature

Anshul Kanakia, Kuansan Wang, Yuxiao Dong, Chieh-Han Wu
2020
Frontiers in Research Metrics and Analytics

At the behest of the Office of Science and Technology Policy in the White House, six institutions, including ours, have created an open research dataset called COVID-19 Research Dataset (CORD-19) to… 

Neural Natural Language Inference Models Partially Embed Theories of Lexical Entailment and Negation

Atticus Geiger, Kyle Richardson, Christopher Potts
2020
EMNLP • BlackboxNLP Workshop

We address whether neural models for Natural Language Inference (NLI) can learn the compositional interactions between lexical entailment and negation, using four methods: the behavioral evaluation… 

It's not Greek to mBERT: Inducing Word-Level Translations from Multilingual BERT

Hila Gonen, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg
2020
EMNLP • BlackboxNLP Workshop

Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations that allow for transfer across languages. We study the word-level translation information… 

Unsupervised Distillation of Syntactic Information from Contextualized Word Representations

Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg
2020
EMNLP • BlackboxNLP Workshop

Contextualized word representations, such as ELMo and BERT, were shown to perform well on various semantic and syntactic tasks. In this work, we tackle the task of unsupervised disentanglement… 

PySBD: Pragmatic Sentence Boundary Disambiguation

Nipun Sadvilkar, M. Neumann
2020
EMNLP • NLP-OSS Workshop

In this paper, we present a rule-based sentence boundary disambiguation Python package that works out-of-the-box for 22 languages. We aim to provide a realistic segmenter which can provide logical… 

Document-Level Definition Detection in Scholarly Documents: Existing Models, Error Analyses, and Future Directions

Dongyeop Kang, Andrew Head, Risham Sidhu, Marti A. Hearst
2020
EMNLP • SDP Workshop

The task of definition detection is important for scholarly papers, because papers often make use of technical terminology that may be unfamiliar to readers. Despite prior work on definition… 

The Extraordinary Failure of Complement Coercion Crowdsourcing

Yanai Elazar, Victoria Basmov, Shauli Ravfogel, Reut Tsarfaty
2020
EMNLP • Insights from Negative Results in NLP Workshop

Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years. In this work, we follow known methodologies of collecting labeled data for the complement coercion…