Ai2

Research - Papers

Explore a selection of our published work on a variety of key research challenges in AI.


A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications

Dongyeop Kang, Waleed Ammar, Bhavana Dalvi Mishra, Roy Schwartz
2018
NAACL-HLT

Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), providing… 

Annotation Artifacts in Natural Language Inference Data

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Sam Bowman, and Noah A. Smith
2018
NAACL

Large-scale datasets for natural language inference are created by presenting crowd workers with a sentence (premise), and asking them to generate three new sentences (hypotheses) that it entails,… 
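The finding this snippet points at is that such elicitation leaves artifacts in the hypotheses, so a classifier that never sees the premise can do far better than chance. Below is a minimal sketch of a hypothesis-only baseline; the toy sentences, labels, and scikit-learn classifier are illustrative stand-ins, not the paper's exact setup or data.

```python
# Hypothesis-only NLI baseline: the premise is deliberately ignored.
# Toy data for illustration; real experiments use SNLI/MultiNLI.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

hypotheses = [
    "A man is sleeping.",            # negation-like cues often signal contradiction
    "A person is outdoors.",         # generic words often signal entailment
    "The woman is eating pizza.",    # added specifics often signal neutral
    "Nobody is playing music.",
]
labels = ["contradiction", "entailment", "neutral", "contradiction"]

# Bag-of-words features over the hypothesis alone.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(hypotheses, labels)
print(model.predict(["No one is sleeping."]))
```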

Deep Contextualized Word Representations

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Luke Zettlemoyer
2018
NAACL

We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across… 
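The representation described here (ELMo) is a task-specific combination of all layers of a pretrained bidirectional language model rather than just the top layer. A small numpy sketch of that layer mixing; the arrays and weights below are random placeholders, whereas in the paper the mixing weights and the scale are learned per task.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hidden states for one token from an L-layer bidirectional LM
# (layer 0 is typically the context-independent token embedding).
L, dim = 3, 8
h = np.random.randn(L, dim)        # stand-in for real biLM activations

s = softmax(np.random.randn(L))    # per-layer mixing weights (learned per task)
gamma = 1.0                        # task-specific scale (also learned)

elmo_vector = gamma * (s[:, None] * h).sum(axis=0)   # weighted sum over layers
print(elmo_vector.shape)           # (8,)
```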

SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines

Roy Schwartz, Sam Thomson, and Noah A. Smith
2018
ACL

Recurrent and convolutional neural networks comprise two distinct families of models that have proven to be useful for encoding natural language utterances. In this paper we present SoPa, a new… 
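SoPa scores "soft patterns" against a sentence using weighted finite-state machines. The sketch below is not the paper's parameterization; it only shows the general recurrence: a chain of states, per-token transition scores, and a Viterbi-style max-plus pass that finds the best-scoring match.

```python
import numpy as np

def soft_pattern_score(tokens, W_advance, W_stay):
    """Max-plus scoring of a chain-structured weighted FSA over a token sequence.

    tokens:    (T, d) array of word vectors
    W_advance: (k, d) weights scoring the transition from state i to i+1
    W_stay:    (k, d) weights scoring a self-loop at state i
    Returns the best score of any match that reaches the final state.
    """
    k = W_advance.shape[0]                 # k transitions over k+1 states
    NEG = -1e9
    v = np.full(k + 1, NEG)
    v[0] = 0.0                             # start in state 0 with score 0
    best = NEG
    for x in tokens:
        adv = W_advance @ x                # score of advancing on this token
        stay = W_stay @ x                  # score of looping on this token
        nxt = np.full(k + 1, NEG)
        nxt[1:] = v[:-1] + adv                       # advance: state i -> i+1
        nxt[:k] = np.maximum(nxt[:k], v[:k] + stay)  # or self-loop in state i
        nxt[0] = max(nxt[0], 0.0)                    # a match may start at any token
        v = nxt
        best = max(best, v[k])                       # pattern fully matched here
    return best

np.random.seed(0)
tokens = np.random.randn(6, 5)             # 6 toy word vectors of dimension 5
print(soft_pattern_score(tokens, np.random.randn(3, 5), np.random.randn(3, 5)))
```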

Dynamic Entity Representations in Neural Language Models

Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Noah A. Smith
2017
EMNLP

Understanding a long document requires tracking how entities are introduced and evolve over time. We present a new type of language model, EntityNLM, that can explicitly model entities, dynamically… 
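The snippet describes a language model that keeps a vector per discourse entity and updates it whenever the entity is mentioned again. The toy bookkeeping below is only illustrative of that idea; EntityNLM's actual gated update and its mention and coreference predictions are more involved.

```python
import numpy as np

def unit(v):
    return v / (np.linalg.norm(v) + 1e-8)

class EntityMemory:
    """Keeps one vector per entity and updates it at each mention (illustrative)."""

    def __init__(self, dim, seed=0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.vectors = {}                       # entity id -> current representation

    def mention(self, entity_id, context_vec):
        if entity_id not in self.vectors:       # first mention: initialize the entity
            self.vectors[entity_id] = unit(self.rng.normal(size=self.dim))
        e = self.vectors[entity_id]
        # Interpolate the old entity state with the new mention's context vector.
        gate = 1.0 / (1.0 + np.exp(-(e @ context_vec)))   # illustrative scalar gate
        self.vectors[entity_id] = unit((1 - gate) * e + gate * context_vec)
        return self.vectors[entity_id]

memory = EntityMemory(dim=4)
h1 = np.array([0.5, -0.1, 0.3, 0.2])            # stand-ins for LM hidden states
h2 = np.array([0.1, 0.4, -0.2, 0.6])
memory.mention("e1", h1)                        # "The senator ..."
print(memory.mention("e1", h2))                 # "... she ..." updates the same entity
```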

Learning a Neural Semantic Parser from User Feedback

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer
2017
ACL

We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal… 
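The approach described is a deploy-and-improve loop: a neural parser maps questions to database queries, users give feedback on the results, and flagged utterances are annotated and folded back into training. The skeleton below only sketches that loop; every callable (translate_to_sql, run_query, user_accepts, annotate, retrain) is a hypothetical stand-in, not part of the paper's system.

```python
def feedback_round(parser, utterances, translate_to_sql, run_query,
                   user_accepts, annotate, retrain):
    """One deployment round: trust accepted parses, get flagged ones annotated."""
    new_pairs, flagged = [], []
    for utterance in utterances:
        sql = translate_to_sql(parser, utterance)       # parser prediction
        if user_accepts(utterance, run_query(sql)):
            new_pairs.append((utterance, sql))          # user-validated parse
        else:
            flagged.append(utterance)                   # needs human annotation
    new_pairs.extend((u, annotate(u)) for u in flagged)
    return retrain(parser, new_pairs)                   # parser improves over time

# Toy usage with trivial stand-ins:
parser = {"examples": []}
print(feedback_round(
    parser,
    ["how many papers were accepted"],
    translate_to_sql=lambda p, u: "SELECT COUNT(*) FROM papers WHERE accepted = 1",
    run_query=lambda sql: 42,
    user_accepts=lambda u, result: False,
    annotate=lambda u: "SELECT COUNT(*) FROM submissions WHERE decision = 'accept'",
    retrain=lambda p, pairs: {"examples": p["examples"] + pairs},
))
```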

Semi-supervised sequence tagging with bidirectional language models

Matthew E. Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power
2017
ACL

Pre-trained word embeddings learned from unlabeled text have become a standard component of neural network architectures for NLP tasks. However, in most cases, the recurrent network that operates… 
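The idea this snippet leads into is to run a pretrained bidirectional LM over the input and concatenate its hidden states with the tagger's own token representations before the tagging layers. A minimal numpy sketch of that augmentation step; the dimensions and arrays are made up.

```python
import numpy as np

T = 5                                         # tokens in the sentence
word_embeddings = np.random.randn(T, 100)     # task-trained token representations
bilm_states = np.random.randn(T, 1024)        # frozen, pretrained biLM hidden states

# Augmented per-token input to the sequence tagger's RNN/CRF layers:
tagger_input = np.concatenate([word_embeddings, bilm_states], axis=1)
print(tagger_input.shape)                     # (5, 1124)
```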

Deep Semantic Role Labeling: What Works and What's Next

Luheng He, Kenton Lee, Mike Lewis, Luke S. Zettlemoyer
2017
ACL

We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use… 
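The model treats SRL as BIO tagging over the sentence with constrained decoding. The sketch below shows one common way to enforce the BIO constraint at decode time, a small Viterbi pass that forbids an I-X tag that does not continue a B-X/I-X span; the per-token scores here are random placeholders, not the paper's network.

```python
import numpy as np

tags = ["O", "B-ARG0", "I-ARG0", "B-ARG1", "I-ARG1"]

def bio_allowed(prev, curr):
    """I-X may only follow B-X or I-X; every other transition is allowed."""
    if not curr.startswith("I-"):
        return True
    return prev in ("B-" + curr[2:], "I-" + curr[2:])

def constrained_decode(scores):
    """Viterbi over (T, num_tags) scores with BIO transition constraints."""
    T, n = scores.shape
    NEG = -1e9
    trans = np.array([[0.0 if bio_allowed(p, c) else NEG for c in tags] for p in tags])
    best = scores[0].copy()
    best[[i for i, t in enumerate(tags) if t.startswith("I-")]] = NEG  # no initial I-
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        cand = best[:, None] + trans + scores[t][None, :]
        back[t] = cand.argmax(axis=0)
        best = cand.max(axis=0)
    path = [int(best.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [tags[i] for i in reversed(path)]

print(constrained_decode(np.random.randn(6, len(tags))))
```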

End-to-end Neural Coreference Resolution

Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer
2017
EMNLP

We introduce the first end-to-end coreference resolution model and show that it significantly outperforms all previous work without using a syntactic parser or hand-engineered mention detector. The… 
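In this model every candidate span gets a unary mention score, every ordered pair of spans gets an antecedent score, and each span either links to its highest-scoring preceding antecedent or to a dummy antecedent fixed at score zero. A small numpy sketch of that decision rule over precomputed scores; the scores are random stand-ins for the paper's learned scoring functions.

```python
import numpy as np

np.random.seed(0)
n = 4                                     # candidate spans, in document order
s_m = np.random.randn(n)                  # mention scores s_m(i)
s_a = np.random.randn(n, n)               # pairwise antecedent scores s_a(i, j)

links = []
for i in range(n):
    best_j, best_score = None, 0.0        # dummy antecedent has score 0
    for j in range(i):                    # antecedents must precede span i
        score = s_m[i] + s_m[j] + s_a[i, j]    # coreference score s(i, j)
        if score > best_score:
            best_j, best_score = j, score
    links.append((i, best_j))             # None means "no antecedent / not a mention"

print(links)
```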

Neural Semantic Parsing with Type Constraints for Semi-Structured Tables

Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gardner
2017
EMNLP

We present a new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables. Our parser is an encoder-decoder neural network with two key technical innovations:…
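One of the innovations referred to here is type-constrained decoding: at each step the decoder may only choose grammar productions whose output type fits the current hole in the logical form. A generic sketch of that constraint as a mask over the decoder's action scores; the action names and types below are invented for illustration and are not the parser's actual grammar.

```python
import numpy as np

# Hypothetical grammar actions and the type each one produces.
actions = ["select_column", "count_rows", "filter_greater", "number_literal"]
produces = {"select_column": "Column", "count_rows": "Number",
            "filter_greater": "Rows", "number_literal": "Number"}

def constrained_distribution(logits, expected_type):
    """Zero out actions whose output type does not match the expected type."""
    mask = np.array([produces[a] == expected_type for a in actions])
    masked = np.where(mask, logits, -1e9)          # illegal actions get -inf-like logits
    probs = np.exp(masked - masked.max())
    return probs / probs.sum()

logits = np.random.randn(len(actions))             # stand-in decoder scores
print(dict(zip(actions, constrained_distribution(logits, "Number").round(3))))
```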