Research - Papers
Explore a selection of our published work on a variety of key research challenges in AI.
Think about it! Improving defeasible reasoning by first modeling the question scenario
Defeasible reasoning is the mode of reasoning in which conclusions can be overturned by taking new evidence into account. Existing cognitive science literature on defeasible reasoning suggests that a…
Ethical-Advice Taker: Do Language Models Understand Natural Language Interventions?
Is it possible to use natural language to intervene in a model’s behavior and alter its prediction in a desired way? We investigate the effectiveness of natural language interventions for…
Investigating Transfer Learning in Multilingual Pre-trained Language Models through Chinese Natural Language Inference
Multilingual transformers (XLM, mT5) have been shown to have remarkable transfer skills in zero-shot settings. Most transfer studies, however, rely on automatically translated resources (XNLI,…
ReadOnce Transformers: Reusable Representations of Text for Transformers
While large-scale language models are extremely effective when directly fine-tuned on many end-tasks, such models learn to extract information and solve the task simultaneously from end-task…
General-Purpose Question-Answering with Macaw
Despite the successes of pretrained language models, there are still few high-quality, general-purpose QA systems that are freely available. In response, we present MACAW, a versatile, generative…
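Since Macaw is described as freely available, a minimal usage sketch may be helpful. It assumes the publicly released Hugging Face checkpoint "allenai/macaw-large" and the slot-based input format ("$answer$ ; $question$ = …") documented in the public Macaw release; treat both as assumptions rather than guarantees.

```python
# Minimal sketch: querying Macaw as a T5-style seq2seq model.
# Assumptions: the "allenai/macaw-large" checkpoint exists on the
# Hugging Face Hub and uses Macaw's slot-marker input format.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allenai/macaw-large")
model = AutoModelForSeq2SeqLM.from_pretrained("allenai/macaw-large")

# "$answer$" names the slot we want the model to fill;
# "$question$" supplies the input question.
prompt = "$answer$ ; $question$ = What is the color of a cloudless sky?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=20)

# Expected output has the form "$answer$ = blue".
print(tokenizer.decode(output[0], skip_special_tokens=True))
```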
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
A key limitation in current datasets for multi-hop reasoning is that the steps required to answer the question are explicitly mentioned in the question itself. In this work, we introduce STRATEGYQA, a question…
ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language
Transformers have been shown to emulate logical deduction over natural language theories (logical rules expressed in natural language), reliably assigning true/false labels to candidate…
ParsiNLU: A Suite of Language Understanding Challenges for Persian
Despite the progress made in recent years in addressing natural language understanding (NLU) challenges, the majority of this progress remains concentrated on resource-rich languages like…
Critical Thinking for Language Models
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models. We introduce a synthetic text corpus of deductively valid arguments, and use this…
Temporal Reasoning on Implicit Events from Distant Supervision
Existing work on temporal reasoning over events described in text focuses on modeling relationships between explicitly mentioned events and does not handle event end times effectively. However, human…