Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Can LSTM Learn to Capture Agreement? The Case of Basque

Shauli Ravfogel, Francis M. Tyers, Yoav Goldberg
2018
EMNLP • Workshop: Analyzing and interpreting neural networks for NLP

Sequential neural network models are powerful tools for a variety of Natural Language Processing (NLP) tasks. The sequential nature of these models raises the question: to what extent can these… 

Decoupling Structure and Lexicon for Zero-Shot Semantic Parsing

Jonathan Herzig, Jonathan Berant
2018
EMNLP

Building a semantic parser quickly in a new domain is a fundamental challenge for conversational interfaces, as current semantic parsers require expensive supervision and lack the ability to… 

Dissecting Contextual Word Embeddings: Architecture and Representation

Matthew Peters, Mark Neumann, Wen-tau Yih, and Luke Zettlemoyer
2018
EMNLP

Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range… 

Neural Cross-Lingual Named Entity Recognition with Minimal Resources

Jiateng Xie, Zhilin Yang, Graham Neubig, Jaime Carbonell
2018
EMNLP

For languages with no annotated resources, unsupervised transfer of natural language processing models such as named-entity recognition (NER) from resource-rich languages would be an appealing… 

Neural Metaphor Detection in Context

Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer
2018
EMNLP

We present end-to-end neural models for detecting metaphorical word use in context. We show that relatively standard BiLSTM models which operate on complete sentences work well in this setting, in… 
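As a concrete illustration of the kind of sentence-level BiLSTM tagger the abstract refers to, here is a minimal sketch: the model reads a complete sentence and labels each token as metaphorical or literal. The dimensions, vocabulary size, and binary label set are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch (not the paper's exact model): a BiLSTM that reads a
# complete sentence and labels each token as metaphorical or literal.
import torch
import torch.nn as nn

class BiLSTMMetaphorTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Forward and backward states are concatenated, hence 2 * hidden_dim.
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        embeddings = self.embed(token_ids)       # (batch, seq_len, embed_dim)
        contextual, _ = self.bilstm(embeddings)  # (batch, seq_len, 2 * hidden_dim)
        return self.classifier(contextual)       # (batch, seq_len, num_labels)

# Toy usage: score a batch of two 6-token sentences.
model = BiLSTMMetaphorTagger(vocab_size=10000)
dummy_batch = torch.randint(0, 10000, (2, 6))
print(model(dummy_batch).shape)  # torch.Size([2, 6, 2])
```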

Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations

Dipendra Misra, Ming-Wei Chang, Xiaodong He, Wen-tau Yih
2018
EMNLP

Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model… 

Rational Recurrences

Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith
2018
EMNLP

Despite the tremendous empirical success of neural models in natural language processing, many of them lack the strong intuitions that accompany classical machine learning approaches. Recently,… 

Reasoning about Actions and State Changes by Injecting Commonsense Knowledge

Niket Tandon, Bhavana Dalvi Mishra, Joel Grus, Peter Clark
2018
EMNLP

Comprehending procedural text, e.g., a paragraph describing photosynthesis, requires modeling actions and the state changes they produce, so that questions about entities at different timepoints can… 

SimpleQuestions Nearly Solved: A New Upperbound and Baseline Approach

Michael Petrochuk, Luke Zettlemoyer
2018
EMNLP

The SimpleQuestions dataset is one of the most commonly used benchmarks for studying single-relation factoid questions. In this paper, we present new evidence that this benchmark can be nearly… 

Spot the Odd Man Out: Exploring the Associative Power of Lexical Resources

Gabriel Stanovsky, Mark Hopkins
2018
EMNLP

We propose Odd-Man-Out, a novel task which aims to test different properties of word representations. An Odd-Man-Out puzzle is composed of 5 (or more) words, and requires the system to choose the…
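To make the task setup concrete, the sketch below applies a simple similarity-based baseline (not the systems studied in the paper): given five words, it picks the one whose embedding is least similar to the rest on average. The toy 3-d vectors are hypothetical stand-ins for real pretrained word embeddings such as GloVe or word2vec.

```python
# Illustrative Odd-Man-Out baseline: choose the word with the lowest average
# cosine similarity to the other words in the puzzle.
import numpy as np

toy_vectors = {  # hypothetical stand-ins for pretrained embeddings
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.1]),
    "horse": np.array([0.7, 0.3, 0.0]),
    "mouse": np.array([0.9, 0.2, 0.1]),
    "chair": np.array([0.1, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def odd_man_out(words, vectors):
    # Average each candidate's similarity to the other words; the candidate
    # with the lowest average similarity is the odd one out.
    def avg_sim(w):
        others = [vectors[o] for o in words if o != w]
        return sum(cosine(vectors[w], v) for v in others) / len(others)
    return min(words, key=avg_sim)

print(odd_man_out(list(toy_vectors), toy_vectors))  # -> "chair"
```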