Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Does my multimodal model learn cross-modal interactions? It’s harder to tell than you might think!
Modeling expressive cross-modal interactions seems crucial in multimodal tasks, such as visual question answering. However, sometimes high-performing black-box algorithms turn out to be mostly…
Do Language Embeddings Capture Scales?
Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense, and factual knowledge. One form of knowledge that has not been studied yet in this context is…
Domain-Specific Lexical Grounding in Noisy Visual-Textual Documents
Images can give us insights into the contextual meanings of words, but current image-text grounding approaches require detailed annotations. Such granular annotation is rare, expensive, and…
Easy, Reproducible and Quality-Controlled Data Collection with Crowdaq
High-quality, large-scale data are key to the success of AI systems. However, large-scale data annotation efforts are often confronted with a set of common challenges: (1) designing a user-friendly…
Fact or Fiction: Verifying Scientific Claims
We introduce the task of scientific fact-checking. Given a corpus of scientific articles and a claim about a scientific finding, a fact-checking model must identify abstracts that support or refute…
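The sketch below is a minimal, hypothetical illustration of what an input/output pair for this claim-verification setup might look like; the field names and labels are illustrative assumptions for exposition, not the released dataset's actual schema.

```python
# Hypothetical sketch of the scientific fact-checking task described above.
# Field names and labels are illustrative only, not the paper's released schema.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EvidenceDecision:
    abstract_id: str                # which abstract in the corpus
    label: str                      # e.g. "SUPPORTS" or "REFUTES"
    rationale_sentences: List[int]  # sentence indices cited as evidence

# Input: a claim about a scientific finding, plus a corpus of abstracts
# (abstract id -> list of sentences).
claim = "Vitamin D supplementation reduces fracture risk in older adults."
corpus: Dict[str, List[str]] = {
    "abs_001": [
        "We conducted a randomized trial of vitamin D supplementation.",
        "Supplementation did not reduce fracture incidence.",
    ],
}

# Output: for each relevant abstract, a verdict plus the sentences justifying it.
predictions = [
    EvidenceDecision(abstract_id="abs_001", label="REFUTES", rationale_sentences=[1]),
]
```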
Grounded Compositional Outputs for Adaptive Language Modeling
Language models have emerged as a central component across NLP, and a great deal of progress depends on the ability to cheaply adapt them (e.g., through finetuning) to new domains and tasks. A…
IIRC: A Dataset of Incomplete Information Reading Comprehension Questions
Humans often have to read multiple documents to address their information needs. However, most existing reading comprehension (RC) tasks only focus on questions for which the contexts provide all…
Improving Compositional Generalization in Semantic Parsing
Generalization of models to out-of-distribution (OOD) data has captured tremendous attention recently. Specifically, compositional generalization, i.e., whether a model generalizes to new structures…
Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected Reasoning
Has there been real progress in multi-hop question-answering? Models often exploit dataset artifacts to produce correct answers, without connecting information across multiple supporting facts. This…
Learning from Task Descriptions
Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To…