Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


Deep Contextualized Word Representations

Matthew E. Peters, Mark Neumann, Mohit Iyyer, and Luke Zettlemoyer
2018
NAACL

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across… 
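The key mechanism behind deep contextualized representations of this kind is combining the activations of several biLM layers with softmax-normalized, task-learned weights. The sketch below is a hypothetical NumPy illustration of that layer-mixing step under assumed shapes, not the paper's implementation; `combine_bilm_layers`, `layer_logits`, and `gamma` are names invented here.

```python
import numpy as np

def combine_bilm_layers(layer_activations, layer_logits, gamma=1.0):
    """Collapse per-layer biLM activations into one contextual embedding.

    layer_activations: list of (seq_len, dim) arrays, one per biLM layer.
    layer_logits: unnormalized scalar weights, one per layer.
    gamma: task-specific scale factor.
    """
    w = np.exp(layer_logits - np.max(layer_logits))
    w = w / w.sum()                        # softmax over layers
    stacked = np.stack(layer_activations)  # (num_layers, seq_len, dim)
    return gamma * np.tensordot(w, stacked, axes=1)

# Toy check: with equal logits the mix is the plain mean of the layers.
layers = [np.ones((4, 8)) * k for k in range(3)]  # layers of 0s, 1s, 2s
emb = combine_bilm_layers(layers, np.zeros(3))    # mean(0, 1, 2) = 1.0
```

With equal logits the softmax is uniform, so the result reduces to an average; training would learn non-uniform weights per downstream task.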

SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines

Roy Schwartz, Sam Thomson, and Noah A. Smith
2018
ACL

Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances. In this paper we present SoPa, a new… 

Dynamic Entity Representations in Neural Language Models

Yangfeng Ji, Chenhao Tan, Sebastian Martschat, and Noah A. Smith
2017
EMNLP

Understanding a long document requires tracking how entities are introduced and evolve over time. We present a new type of language model, EntityNLM, that can explicitly model entities, dynamically… 

Learning a Neural Semantic Parser from User Feedback

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer
2017
ACL

We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal… 

Semi-supervised sequence tagging with bidirectional language models

Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power
2017
ACL

Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates… 
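The augmentation this abstract points at — feeding the tagger pre-trained language-model embeddings alongside standard word embeddings — amounts to a per-token concatenation before the sequence layer. A minimal NumPy sketch, assuming frozen LM vectors and the shapes noted in the comments (`augment_tokens` is a name invented here, not the paper's API):

```python
import numpy as np

def augment_tokens(token_embs, lm_embs):
    """Concatenate per-token word embeddings with pre-trained biLM embeddings.

    token_embs: (seq_len, d_word) array from a standard embedding lookup.
    lm_embs:    (seq_len, d_lm) array from a frozen, pre-trained language model.
    Returns a (seq_len, d_word + d_lm) array fed to the sequence tagger.
    """
    assert token_embs.shape[0] == lm_embs.shape[0], "one vector per token"
    return np.concatenate([token_embs, lm_embs], axis=-1)

# Toy usage: 5 tokens, 100-d word vectors, 512-d LM vectors -> 612-d inputs.
augmented = augment_tokens(np.zeros((5, 100)), np.ones((5, 512)))
```

The design choice is that the LM stays frozen, so the tagger gets contextual signal without retraining the language model on labeled data.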

Deep Semantic Role Labeling: What Works and What's Next

Luheng He, Kenton Lee, Mike Lewis, and Luke S. Zettlemoyer
2017
ACL

We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use… 

End-to-end Neural Coreference Resolution

Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer
2017
EMNLP

We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The… 

Neural Semantic Parsing with Type Constraints for Semi-Structured Tables

Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner
2017
EMNLP

We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. Our parser is an encoder-decoder neural network with two key technical innovations:… 

Parsing Algebraic Word Problems into Equations

Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, and Siena Dumas Ang
2015
TACL

This paper formalizes the problem of solving multi-sentence algebraic word problems as that of generating and scoring equation trees. We use integer linear programming to generate equation trees and… 
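The "generating and scoring equation trees" framing can be illustrated without the paper's integer linear program: a brute-force sketch enumerates binary operator trees over a problem's numbers and checks which evaluate to the answer. This is a hypothetical simplification (exhaustive search over a small operator set instead of ILP; `tree_values` and `solves` are names invented here):

```python
from itertools import permutations

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
}  # small operator set; division omitted to avoid zero-division cases

def tree_values(nums):
    """Yield every value reachable by some binary equation tree over nums."""
    if len(nums) == 1:
        yield nums[0]
        return
    for i in range(1, len(nums)):          # split point defines the tree shape
        for left in tree_values(nums[:i]):
            for right in tree_values(nums[i:]):
                for op in OPS.values():
                    yield op(left, right)

def solves(nums, target):
    """True if some equation tree over some ordering of nums equals target."""
    return any(
        abs(v - target) < 1e-9
        for perm in permutations(nums)
        for v in tree_values(list(perm))
    )

# e.g. 3 workers each carry 4 boxes, plus 2 left over -> 3 * 4 + 2 = 14
found = solves([3, 4, 2], 14)
```

The scoring side of the paper's approach would rank these candidate trees rather than merely enumerate them; the sketch shows only the search space being scored.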

Learning to Solve Arithmetic Word Problems with Verb Categorization

Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman
2014
EMNLP

This paper presents a novel approach to learning to solve simple arithmetic word problems. Our system, ARIS, analyzes each of the sentences in the problem statement to identify the relevant… 
