Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
SciCo: Hierarchical Cross-Document Coreference for Scientific Concepts
Determining coreference of concept mentions across multiple documents is fundamental for natural language understanding. Work on cross-document coreference resolution (CDCR) typically considers…
Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study
Biomedical knowledge graphs (KGs) hold rich information on entities such as diseases, drugs, and genes. Predicting missing links in these graphs can boost many important applications, such as drug…
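Below is a minimal, hypothetical sketch of what "predicting missing links" means operationally: framing the query as ranking candidate tail entities with a scoring function. The DistMult-style dot-product scorer and the toy entities are stand-ins for illustration only, not the language-model-based scorers the paper studies.

```python
# Hypothetical sketch: KG link prediction as tail-entity ranking.
# The DistMult-style scorer is a stand-in, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy embeddings for a few biomedical-style entities and relations (made up).
entities = {name: rng.normal(size=DIM) for name in
            ["aspirin", "ibuprofen", "inflammation", "fever", "TP53"]}
relations = {name: rng.normal(size=DIM) for name in ["treats", "associated_with"]}

def score(head: str, relation: str, tail: str) -> float:
    """DistMult score: sum of the element-wise product of h, r, t."""
    return float(np.sum(entities[head] * relations[relation] * entities[tail]))

def rank_tails(head: str, relation: str) -> list[tuple[str, float]]:
    """Rank every other entity as a candidate tail for the (head, relation) query."""
    scores = [(t, score(head, relation, t)) for t in entities if t != head]
    return sorted(scores, key=lambda x: x[1], reverse=True)

if __name__ == "__main__":
    # Predict the missing link (aspirin, treats, ?) by ranking candidates.
    for tail, s in rank_tails("aspirin", "treats"):
        print(f"{tail:15s} {s:+.3f}")
```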
ReadOnce Transformers: Reusable Representations of Text for Transformers
While large-scale language models are extremely effective when directly fine-tuned on many end-tasks, such models learn to extract information and solve the task simultaneously from end-task…
Investigating Transfer Learning in Multilingual Pre-trained Language Models through Chinese Natural Language Inference
Multilingual transformers (XLM, mT5) have been shown to have remarkable transfer skills in zero-shot settings. Most transfer studies, however, rely on automatically translated resources (XNLI,…
Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?
Is it possible to use natural language to intervene in a model’s behavior and alter its prediction in a desired way? We investigate the effectiveness of natural language interventions for…
Expected Validation Performance and Estimation of a Random Variable's Maximum
Research in NLP is often supported by experimental results, and improved reporting of such results can lead to better understanding and more reproducible science. In this paper we analyze three…
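As a concrete illustration of the quantity being reported, here is a short sketch of one standard estimator of expected maximum validation performance as a function of the number of hyperparameter trials n, computed from observed validation scores via the empirical CDF. This shows the general idea only; it is not necessarily one of the specific estimators the paper analyzes, and the scores are made up.

```python
# Sketch: estimate E[max of n i.i.d. validation scores] from observed scores,
# using E[max] = sum_v v * (P(V <= v)^n - P(V < v)^n) over the empirical CDF.

def expected_max(scores: list[float], n: int) -> float:
    """Expected maximum of n draws, under the empirical distribution of scores."""
    xs = sorted(scores)
    k = len(xs)
    cdf = [i / k for i in range(1, k + 1)]   # P(V <= v_i)
    cdf_prev = [0.0] + cdf[:-1]              # P(V <  v_i)
    return sum(v * (hi ** n - lo ** n) for v, hi, lo in zip(xs, cdf, cdf_prev))

if __name__ == "__main__":
    # Validation accuracies from 20 hypothetical hyperparameter assignments (made up).
    scores = [0.71, 0.74, 0.69, 0.80, 0.77, 0.73, 0.78, 0.72, 0.76, 0.75,
              0.70, 0.79, 0.74, 0.68, 0.81, 0.73, 0.77, 0.72, 0.75, 0.76]
    for n in (1, 5, 10, 20):
        print(f"n={n:2d}  expected max validation accuracy = {expected_max(scores, n):.3f}")
```

With n=1 the estimate reduces to the mean score; as n grows it approaches the best observed score, which is what makes the curve useful for budget-aware reporting.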
Competency Problems: On Finding and Removing Artifacts in Language Data
Much recent work in NLP has documented dataset artifacts, bias, and spurious correlations between input features and output labels. However, how to tell which features have “spurious” instead of…
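To make the notion of a feature-label correlation concrete, here is a minimal, hypothetical sketch of the simplest version of such a check: estimating p(label | token present) for each token and flagging tokens whose conditional label distribution is far from the overall label rate. The toy data and cutoff are made up; the paper develops a formal treatment of which correlations should count as artifacts.

```python
# Hypothetical sketch: per-token conditional label probabilities on a toy
# binary-label dataset, compared against the overall positive rate.
from collections import Counter

def token_label_stats(examples: list[tuple[str, int]]):
    """Return p(label=1 | token) and token document frequencies."""
    counts, positives = Counter(), Counter()
    for text, label in examples:
        for tok in set(text.lower().split()):
            counts[tok] += 1
            positives[tok] += label
    return {t: positives[t] / counts[t] for t in counts}, counts

if __name__ == "__main__":
    # Tiny made-up sentiment-style dataset (label 1 = positive).
    data = [("a great movie", 1), ("great acting , great fun", 1),
            ("a dull movie", 0), ("dull and boring", 0),
            ("not great at all", 0), ("fun for everyone", 1)]
    base_rate = sum(y for _, y in data) / len(data)
    p_given_token, counts = token_label_stats(data)
    ranked = sorted(p_given_token.items(),
                    key=lambda kv: abs(kv[1] - base_rate), reverse=True)
    for tok, p in ranked:
        if counts[tok] >= 2:  # ignore singleton tokens (arbitrary cutoff)
            print(f"{tok:10s} p(pos|token)={p:.2f}  n={counts[tok]}")
```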
Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine Translation
State-of-the-art neural machine translation models generate outputs autoregressively, where every step conditions on the previously generated tokens. This sequential nature causes inherent decoding…
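The sequential bottleneck described above is easiest to see in the decoding loop itself: each step must wait for the previous token before it can run. The sketch below shows that loop with a toy next-token function standing in for a real translation model; it illustrates the autoregressive structure only, not the paper's encoder-decoder configuration.

```python
# Sketch of greedy autoregressive decoding: step t conditions on steps < t,
# so the output tokens must be produced strictly one after another.
BOS, EOS = "<s>", "</s>"

def next_token(source: list[str], prefix: list[str]) -> str:
    """Toy stand-in for a decoder: copy the source, then stop."""
    i = len(prefix) - 1  # number of target tokens emitted so far
    return source[i] if i < len(source) else EOS

def greedy_decode(source: list[str], max_len: int = 50) -> list[str]:
    prefix = [BOS]
    for _ in range(max_len):          # strictly sequential: no parallelism across steps
        tok = next_token(source, prefix)
        prefix.append(tok)
        if tok == EOS:
            break
    return prefix[1:-1] if prefix[-1] == EOS else prefix[1:]

if __name__ == "__main__":
    print(greedy_decode(["guten", "morgen"]))   # -> ['guten', 'morgen']
```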
Random Feature Attention
Transformers are state-of-the-art models for a variety of sequence modeling tasks. At their core is an attention function which models pairwise interactions between the inputs at every timestep.…
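For reference, here is a minimal numpy sketch of the standard softmax attention described above, whose n-by-n matrix of pairwise query-key scores is the quadratic-cost interaction that the paper's random-feature approximation targets. Shapes and values are toy examples; the random-feature variant itself is not shown.

```python
# Sketch of single-head softmax attention: every output position is a weighted
# average of the values, with weights from all pairwise q.k interactions.
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)     # (n, n): one score per pair of timesteps
    weights = softmax(scores, axis=-1)
    return weights @ v                # (n, d_v)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 8                       # 5 timesteps, 8-dimensional queries/keys
    q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
    print(attention(q, k, v).shape)   # (5, 8)
```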
Symbolic Brittleness in Sequence Models: on Systematic Generalization in Symbolic Mathematics
Neural sequence models trained with maximum likelihood estimation have led to breakthroughs in many tasks, where success is defined by the gap between training and test performance. However, their…