Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations
Despite their impressive capabilities, large pretrained language models (LMs) struggle with consistent reasoning; recently, prompting LMs to generate explanations that self-guide the inference has…
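As a rough illustration of the recursive-explanation idea only (a hedged sketch, not the paper's full method, which also resolves the generated tree with a symbolic consistency check rather than taking individual generations at face value), the snippet below assumes a hypothetical `lm_complete` text-completion function and simply expands each proposition into explanations for both its truth and its negation up to a fixed depth.

```python
# Hypothetical sketch of recursive ("maieutic") explanation generation.
# `lm_complete` stands in for any text-completion call and is an assumption,
# not an API from the paper's released code.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Node:
    proposition: str                      # statement whose truth we want explained
    explanations: List["Node"] = field(default_factory=list)


def build_maieutic_tree(proposition: str,
                        lm_complete: Callable[[str], str],
                        depth: int = 2) -> Node:
    """Recursively ask the LM to justify a proposition and its negation."""
    node = Node(proposition)
    if depth == 0:
        return node
    for stance in ("true", "false"):
        prompt = (f"Q: {proposition}\n"
                  f"A: This is {stance}, because")
        explanation = lm_complete(prompt).strip()
        # Each explanation is itself a proposition that can be expanded further.
        node.explanations.append(
            build_maieutic_tree(explanation, lm_complete, depth - 1))
    return node
```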
Penguins Don't Fly: Reasoning about Generics through Instantiations and Exceptions
Generics express generalizations about the world (e.g., “birds can fly”). However, they are not universally true – while sparrows and penguins are both birds, only sparrows can fly and penguins…
Cross-Task Generalization via Natural Language Crowdsourcing Instructions
Can we enable NLP models to appropriately respond to instructional prompts and consequently generalize to new tasks? To study this question, we leverage the existing NLP datasets and the…
Reframing Instructional Prompts to GPTk's Language
How can model designers turn task instructions into effective prompts for language models? Backed by extensive empirical analysis on GPT3, we observe important features for successful instructional…
Large Scale Substitution-based Word Sense Induction
We present a word-sense induction method based on pre-trained masked language models (MLMs), which can cheaply scale to large vocabularies and large corpora. The result is a corpus which is…
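As a loose illustration of the substitution idea (a simplified sketch under assumed model and clustering choices, not the paper's exact procedure), one can ask a masked LM for likely in-context substitutes of an ambiguous word and then cluster its occurrences by the substitutes they share.

```python
# Minimal sketch: represent each occurrence of an ambiguous word ("bank") by
# the substitutes a masked LM proposes for it, then cluster occurrences into
# induced senses. Model name and clustering setup are illustrative assumptions.
from transformers import pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import AgglomerativeClustering

unmasker = pipeline("fill-mask", model="bert-base-uncased", top_k=10)

sentences = [
    "He sat on the [MASK] of the river and watched the water.",
    "She deposited the check at the [MASK] this morning.",
    "The [MASK] raised interest rates again last week.",
    "They had a picnic on the grassy [MASK] beside the stream.",
]

# One "document" of substitute tokens per occurrence of the target word.
substitute_docs = [
    " ".join(pred["token_str"].strip() for pred in unmasker(sent))
    for sent in sentences
]

# Binary bag-of-substitutes vectors; occurrences that share many substitutes
# end up in the same induced sense cluster.
X = CountVectorizer(binary=True).fit_transform(substitute_docs).toarray()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

for sent, label in zip(sentences, labels):
    print(f"sense {label}: {sent}")
```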
Hey AI, Can You Solve Complex Tasks by Talking to Agents?
Humans often solve complex problems by interacting (in natural language) with existing agents, such as AI assistants, that can solve simpler sub-tasks. These agents themselves can be powerful…
Generating Data to Mitigate Spurious Correlations in Natural Language Inference Datasets
Natural language processing models often exploit spurious correlations between task-independent features and labels in datasets to perform well only within the distributions they are trained on,…
Generated Knowledge Prompting for Commonsense Reasoning
Despite their ability to capture large amounts of knowledge during pretraining, large-scale language models often benefit from incorporating external knowledge bases, especially on commonsense…
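Schematically (a hedged sketch assuming a generic `lm_generate` completion function rather than any particular API), generated knowledge prompting runs in two stages: first sample knowledge statements about the question from the LM itself, then prepend each statement to the question and let the model answer. The majority vote at the end is a simplification; the paper scores candidate answers under each knowledge statement instead.

```python
# Two-stage sketch of generated knowledge prompting.
# `lm_generate` is a stand-in for any text-completion call and is assumed here.
from collections import Counter
from typing import Callable, List

KNOWLEDGE_PROMPT = (
    "Generate a short factual statement that helps answer the question.\n"
    "Question: {question}\nKnowledge:"
)
ANSWER_PROMPT = "Knowledge: {knowledge}\nQuestion: {question}\nAnswer:"


def answer_with_generated_knowledge(question: str,
                                    lm_generate: Callable[[str], str],
                                    num_statements: int = 5) -> str:
    # Stage 1: elicit knowledge statements from the model itself.
    statements: List[str] = [
        lm_generate(KNOWLEDGE_PROMPT.format(question=question)).strip()
        for _ in range(num_statements)
    ]
    # Stage 2: answer once per knowledge statement, then aggregate.
    answers = [
        lm_generate(ANSWER_PROMPT.format(knowledge=k, question=question)).strip()
        for k in statements
    ]
    return Counter(answers).most_common(1)[0][0]
```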
ABC: Attention with Bounded-memory Control
Transformer architectures have achieved state-of-the-art results on a variety of sequence modeling tasks. However, their attention mechanism comes with a quadratic complexity in sequence lengths,…
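The contrast can be written down compactly (a schematic of the bounded-memory idea, not ABC's exact parameterization): standard attention scores every query against all n keys, while a bounded-memory variant first summarizes the keys and values into a constant number m of slots.

```latex
% Standard attention: cost grows quadratically with sequence length n.
\mathrm{Attn}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}}\right)V,
\qquad Q, K, V \in \mathbb{R}^{n \times d} \;\Rightarrow\; O(n^{2}d)

% Bounded-memory variant (schematic): keys/values are compressed into
% m << n slots by a control matrix alpha, so each query attends to m slots.
\tilde{K} = \alpha K, \quad \tilde{V} = \alpha V,
\qquad \alpha \in \mathbb{R}^{m \times n}
\;\Rightarrow\; \mathrm{Attn}(Q, \tilde{K}, \tilde{V}) \text{ costs } O(nmd)
```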
NumGLUE: A Suite of Fundamental yet Challenging Mathematical Reasoning Tasks
Given the ubiquitous nature of numbers in text, reasoning with numbers to perform simple calculations is an important skill of AI systems. While many datasets and models have been developed to this…