Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Document-Level Definition Detection in Scholarly Documents: Existing Models, Error Analyses, and Future Directions

Dongyeop Kang, Andrew Head, Risham Sidhu, Marti A. Hearst
2020
EMNLP • SDP Workshop

The task of definition detection is important for scholarly papers, because papers often make use of technical terminology that may be unfamiliar to readers. Despite prior work on definition… 

PySBD: Pragmatic Sentence Boundary Disambiguation

Nipun Sadvilkar, M. Neumann
2020
EMNLP • NLP-OSS Workshop

In this paper, we present a rule-based sentence boundary disambiguation Python package that works out-of-the-box for 22 languages. We aim to provide a realistic segmenter which can provide logical… 

The Extraordinary Failure of Complement Coercion Crowdsourcing

Yanai Elazar, Victoria Basmov, Shauli Ravfogel, Reut Tsarfaty
2020
EMNLP • Insights from Negative Results in NLP Workshop

Crowdsourcing has eased and scaled up the collection of linguistic annotation in recent years. In this work, we follow known methodologies of collecting labeled data for the complement coercion… 

A Dataset for Tracking Entities in Open Domain Procedural Text

Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi Mishra, Eduard Hovy
2020
EMNLP

We present the first dataset for tracking state changes in procedural text from arbitrary domains by using an unrestricted (open) vocabulary. For example, in a text describing fog removal using… 

A Novel Challenge Set for Hebrew Morphological Disambiguation and Diacritics Restoration

Avi Shmidman, Joshua Guedalia, Shaltiel Shmidman, Reut Tsarfaty
2020
Findings of EMNLP

One of the primary tasks of morphological parsers is the disambiguation of homographs. Particularly difficult are cases of unbalanced ambiguity, where one of the possible analyses is far more… 

A Simple and Effective Model for Answering Multi-span Questions

Elad Segal, Avia Efrat, Mor Shoham, Jonathan Berant
2020
EMNLP

Models for reading comprehension (RC) commonly restrict their output space to the set of all single contiguous spans from the input, in order to alleviate the learning problem and avoid the need for… 

A Simple Yet Strong Pipeline for HotpotQA

Dirk Groeneveld, Tushar Khot, Mausam, Ashish Sabharwal
2020
EMNLP

State-of-the-art models for multi-hop question answering typically augment large-scale language models like BERT with additional, intuitively useful capabilities such as named entity recognition,… 

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

Lianhui Qin, Vered Shwartz, P. West, Yejin Choi
2020
EMNLP

Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the… 

Beyond Instructional Videos: Probing for More Diverse Visual-Textual Grounding on YouTube

Jack Hessel, Z. Zhu, Bo Pang, Radu Soricut
2020
EMNLP

Pretraining from unlabelled web videos has quickly become the de facto means of achieving high performance on many video understanding tasks. Features are learned via prediction of grounded… 

CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning

Bill Yuchen Lin, M. Shen, Wangchunshu Zhou, X. Ren
2020
EMNLP

Recently, large-scale pre-trained language models have demonstrated impressive performance on several commonsense benchmark datasets. However, building machines with common-sense to compose…