Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Y'all should read this! Identifying Plurality in Second-Person Personal Pronouns in English Texts

Gabriel Stanovsky, Ronen Tamari
2019
EMNLP • W-NUT

Distinguishing between singular and plural "you" in English is a challenging task which has potential for downstream applications, such as machine translation or coreference resolution. While formal… 

Robust Navigation with Language Pretraining and Stochastic Sampling

Xiujun Li, Chunyuan Li, Qiaolin Xia, Yejin Choi
2019
EMNLP

Core to the vision-and-language navigation (VLN) challenge is building robust instruction representations and action decoding schemes, which can generalize well to previously unseen instructions and… 

Adversarial Removal of Demographic Attributes from Text Data

Yanai Elazar, Yoav Goldberg
2018
EMNLP

Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is… 

Bridging Knowledge Gaps in Neural Entailment via Symbolic Models

Dongyeop Kang, Tushar Khot, Ashish Sabharwal, and Peter Clark
2018
EMNLP

Most textual entailment models focus on lexical gaps between the premise text and the hypothesis, but rarely on knowledge gaps. We focus on filling these knowledge gaps in the Science Entailment… 

Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering

Todor Mihaylov, Peter Clark, Tushar Khot, Ashish Sabharwal
2018
EMNLP

We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of… 

Can LSTM Learn to Capture Agreement? The Case of Basque

Shauli Ravfogel, Francis M. Tyers, Yoav Goldberg
2018
EMNLP • Workshop: Analyzing and Interpreting Neural Networks for NLP

Sequential neural network models are powerful tools in a variety of Natural Language Processing (NLP) tasks. The sequential nature of these models raises the question: to what extent can these… 

Decoupling Structure and Lexicon for Zero-Shot Semantic Parsing

Jonathan Herzig, Jonathan Berant
2018
EMNLP

Building a semantic parser quickly in a new domain is a fundamental challenge for conversational interfaces, as current semantic parsers require expensive supervision and lack the ability to… 

Dissecting Contextual Word Embeddings: Architecture and Representation

Matthew Peters, Mark Neumann, Wen-tau Yih, and Luke Zettlemoyer
2018
EMNLP

Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range… 

Neural Cross-Lingual Named Entity Recognition with Minimal Resources

Jiateng Xie, Zhilin Yang, Graham Neubig, Jaime Carbonell
2018
EMNLP

For languages with no annotated resources, unsupervised transfer of natural language processing models such as named-entity recognition (NER) from resource-rich languages would be an appealing… 

Neural Metaphor Detection in Context

Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer
2018
EMNLP

We present end-to-end neural models for detecting metaphorical word use in context. We show that relatively standard BiLSTM models which operate on complete sentences work well in this setting, in…