Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand
Natural language processing researchers have identified limitations of evaluation methodology for generation tasks, with new questions raised about the validity of automatic metrics and of…
Symbolic Knowledge Distillation: from General Language Models to Commonsense Models
The common practice for training commonsense models has gone from-human-to-corpus-to-machine: humans author commonsense knowledge graphs in order to train commonsense models. In this work, we…
A Dataset for N-ary Relation Extraction of Drug Combinations
Combination therapies have become the standard of care for diseases such as cancer, tuberculosis, malaria and HIV. However, the combinatorial set of available multi-drug treatments creates a…
Connecting the Dots between Audio and Text without Parallel Data through Visual Knowledge Transfer
Machines that can represent and describe environmental soundscapes have practical potential, e.g., for audio tagging and captioning. Prevailing learning paradigms of audio-text connections have…
Reframing Human-AI Collaboration for Generating Free-Text Explanations
Large language models are increasingly capable of generating fluent-appearing text with relatively little task-specific supervision. But can these models accurately explain classification decisions?…
Weakly Supervised Text-to-SQL Parsing through Question Decomposition
Text-to-SQL parsers are crucial in enabling non-experts to effortlessly query relational data. Training such parsers, by contrast, generally requires expertise in annotating natural language (NL)…
Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection
Warning: this paper discusses and contains content that is offensive or upsetting. The perceived toxicity of language can vary based on someone’s identity and beliefs, but this variation is often…
DEMix Layers: Disentangling Domains for Modular Language Modeling
We introduce a new domain expert mixture (DEMIX) layer that enables conditioning a language model (LM) on the domain of the input text. A DEMIX layer is a collection of expert feedforward networks,…
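To make the mechanism in this teaser concrete, here is a minimal PyTorch sketch of a layer built from per-domain expert feedforward networks, routed by a known domain id. All class names and the single-expert routing are illustrative assumptions for exposition, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ExpertFFN(nn.Module):
    """One expert: a standard transformer-style feedforward block."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class DomainExpertMixtureLayer(nn.Module):
    """Sketch of a domain-expert mixture layer: one FFN per training
    domain, selected by a domain id supplied with the batch."""
    def __init__(self, num_domains: int, d_model: int, d_hidden: int):
        super().__init__()
        self.experts = nn.ModuleList(
            ExpertFFN(d_model, d_hidden) for _ in range(num_domains)
        )

    def forward(self, x: torch.Tensor, domain_id: int) -> torch.Tensor:
        # Route the whole batch through the expert for its known domain.
        return self.experts[domain_id](x)
```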
Long Context Question Answering via Supervised Contrastive Learning
Long-context question answering (QA) tasks require reasoning over a long document or multiple documents. Addressing these tasks often benefits from identifying a set of evidence spans (e.g.,…
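Since the title names supervised contrastive learning, the generic form of that objective may help orient readers. The sketch below is the standard supervised contrastive loss over labeled embeddings (e.g., evidence vs. non-evidence spans); it is an illustration of the named technique, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Pulls together representations that share a label and pushes
    apart the rest (Khosla et al., 2020-style formulation)."""
    z = F.normalize(embeddings, dim=1)        # (N, d) unit vectors
    sim = z @ z.t() / temperature             # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: same label, excluding self.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()
```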
Literature-Augmented Clinical Outcome Prediction
We present BEEP (Biomedical Evidence-Enhanced Predictions), a novel approach for clinical outcome prediction that retrieves patient-specific medical literature and incorporates it into predictive…