Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Early Fusion for Goal Directed Robotic Vision

Aaron Walsman, Yonatan Bisk, Saadia Gabriel, and D. Fox
2018
IROS

Building perceptual systems for robotics which perform well under tight computational budgets requires novel architectures which rethink the traditional computer vision pipeline. Modern vision… 

Adversarial Removal of Demographic Attributes from Text Data

Yanai Elazar and Yoav Goldberg
2018
EMNLP

Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is… 

Bridging Knowledge Gaps in Neural Entailment via Symbolic Models

Dongyeop Kang, Tushar Khot, Ashish Sabharwal, and Peter Clark
2018
EMNLP

Most textual entailment models focus on lexical gaps between the premise text and the hypothesis, but rarely on knowledge gaps. We focus on filling these knowledge gaps in the Science Entailment… 

Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering

Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal
2018
EMNLP

We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of… 

Can LSTM Learn to Capture Agreement? The Case of Basque

Shauli Ravfogel, Francis M. Tyers, and Yoav Goldberg
2018
EMNLP • Workshop: Analyzing and interpreting neural networks for NLP

Sequential neural network models are powerful tools in a variety of Natural Language Processing (NLP) tasks. The sequential nature of these models raises the questions: to what extent can these…

Decoupling Structure and Lexicon for Zero-Shot Semantic Parsing

Jonathan Herzig and Jonathan Berant
2018
EMNLP

Building a semantic parser quickly in a new domain is a fundamental challenge for conversational interfaces, as current semantic parsers require expensive supervision and lack the ability to… 

Dissecting Contextual Word Embeddings: Architecture and Representation

Matthew Peters, Mark Neumann, Wen-tau Yih, and Luke Zettlemoyer
2018
EMNLP

Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range… 

Neural Cross-Lingual Named Entity Recognition with Minimal Resources

Jiateng Xie, Zhilin Yang, Graham Neubig, and Jaime Carbonell
2018
EMNLP

For languages with no annotated resources, unsupervised transfer of natural language processing models such as named-entity recognition (NER) from resource-rich languages would be an appealing… 

Neural Metaphor Detection in Context

Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer
2018
EMNLP

We present end-to-end neural models for detecting metaphorical word use in context. We show that relatively standard BiLSTM models which operate on complete sentences work well in this setting, in… 

Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations

Dipendra Misra, Ming-Wei Chang, Xiaodong He, and Wen-tau Yih
2018
EMNLP

Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model…