Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Value-based Search in Execution Space for Mapping Instructions to Programs
Training models to map natural language instructions to programs, using only target world state as supervision, requires searching for good programs at training time. Search is commonly done using beam search…
CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge
When answering a question, people often draw upon their rich world knowledge in addition to the particular context. Recent work has focused primarily on answering questions given some relevant…
DiscoFuse: A Large-Scale Dataset for Discourse-based Sentence Fusion
Sentence fusion is the task of joining several independent sentences into a single coherent text. Current datasets for sentence fusion are small and insufficient for training modern neural models…
Evaluating Text GANs as Language Models
Generative Adversarial Networks (GANs) are a promising approach for text generation that, unlike traditional language models (LMs), does not suffer from the problem of “exposure bias”. However, a…
Linguistic Knowledge and Transferability of Contextual Representations
Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of…
Polyglot Contextual Representations Improve Crosslingual Transfer
We introduce a method to produce multilingual contextual word representations by training a single language model on text from multiple languages. Our method combines the advantages of contextual…
Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation
Data-to-text generation can be conceptually divided into two parts: ordering and structuring the information (planning), and generating fluent language describing the information (realization)…
Aligning Vector-spaces with Noisy Supervised Lexicons
The problem of learning to translate between two vector spaces given a set of aligned points arises in several application areas of NLP. Current solutions assume that the lexicon which defines the…
White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks
We show that a neural network can learn to imitate the optimization process performed by a white-box attack in a much more efficient manner. We train a black-box attack through this imitation process…
Repurposing Entailment for Multi-Hop Question Answering Tasks
Question Answering (QA) naturally reduces to an entailment problem, namely, verifying whether some text entails the answer to a question. However, for multi-hop QA tasks, which require reasoning…