Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Entity, Relation, and Event Extraction with Contextualized Span Representations

David Wadden, Ulme Wennberg, Yi Luan, Hannaneh Hajishirzi
2019
EMNLP

We examine the capabilities of a unified, multi-task framework for three information extraction tasks: named entity recognition, relation extraction, and event extraction. Our framework (called… 

Everything Happens for a Reason: Discovering the Purpose of Actions in Procedural Text

Bhavana Dalvi Mishra, Niket Tandon, Antoine Bosselut, Peter Clark
2019
EMNLP

Our goal is to better comprehend procedural text, e.g., a paragraph about photosynthesis, by not only predicting what happens, but why some actions need to happen before others. Our approach builds… 

Global Reasoning over Database Structures for Text-to-SQL Parsing

Ben Bogin, Matt Gardner, Jonathan Berant
2019
EMNLP

State-of-the-art semantic parsers rely on auto-regressive decoding, emitting one symbol at a time. When tested against complex databases that are unobserved at training time (zero-shot), the parser… 

“Going on a vacation” takes longer than “Going for a walk”: A Study of Temporal Commonsense Understanding

Ben Zhou, Daniel Khashabi, Qiang Ning, Dan Roth
2019
EMNLP

Understanding time is crucial for understanding events expressed in natural language. Because people rarely say the obvious, it is often necessary to have commonsense knowledge about various… 

Knowledge Enhanced Contextual Word Representations

Matthew E. Peters, Mark Neumann, Robert L. Logan, and Noah A. Smith
2019
EMNLP

Contextual word representations, typically trained on unstructured, unlabeled text, do not contain any explicit grounding to real world entities and are often unable to remember facts about those… 

Language Modeling for Code-Switching: Evaluation, Integration of Monolingual Data, and Discriminative Training

Hila Gonen, Yoav Goldberg
2019
EMNLP

We focus on the problem of language modeling for code-switched language, in the context of automatic speech recognition (ASR). Language modeling for code-switched language is challenging for (at… 

Low-Resource Parsing with Crosslingual Contextualized Representations

Phoebe Mulcaire, Jungo Kasai, Noah A. Smith
2019
CoNLL

Despite advances in dependency parsing, languages with small treebanks still present challenges. We assess recent approaches to multilingual contextual word representations (CWRs), and compare them… 

Mixture Content Selection for Diverse Sequence Generation

Jaemin Cho, Minjoon Seo, Hannaneh Hajishirzi
2019
EMNLP

Generating diverse sequences is important in many NLP applications such as question generation or summarization that exhibit semantically one-to-many relationships between the source and the target… 

On the Limits of Learning to Actively Learn Semantic Representations

Omri Koshorek, Gabriel Stanovsky, Yichu Zhou, Vivek Srikumar, and Jonathan Berant
2019
CoNLL

One of the goals of natural language understanding is to develop models that map sentences into meaning representations. However, training such models requires expensive annotation of complex… 

PaLM: A Hybrid Parser and Language Model

Hao Peng, Roy Schwartz, Noah A. Smith
2019
EMNLP

We present PaLM, a hybrid parser and neural language model. Building on an RNN language model, PaLM adds an attention layer over text spans in the left context. An unsupervised constituency parser…