Allen Institute for AI

Papers

  • What Does My QA Model Know? Devising Controlled Probes using Expert Knowledge

    Kyle Richardson, Ashish Sabharwal · TACL 2020
    Open-domain question answering (QA) is known to involve several underlying knowledge and reasoning challenges, but are models actually learning such knowledge when trained on benchmark tasks? To investigate this, we introduce several new challenge tasks that probe whether state-of-the-art QA models…
  • Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models

    Tushar Khot, Daniel Khashabi, Kyle Richardson, Peter Clark, Ashish Sabharwal · arXiv 2020
    A common approach to solving complex tasks is to break them down into simple sub-problems that can then be solved by simpler modules. However, these approaches often need to be designed and trained specifically for each complex task. We propose a general approach, Text Modular Networks (TMNs…
  • Transformers as Soft Reasoners over Language

    Peter Clark, Oyvind Tafjord, Kyle Richardson · IJCAI 2020
    AI has long pursued the goal of having systems reason over explicitly provided knowledge, but building suitable representations has proved challenging. Here we explore whether transformers can similarly learn to reason (or emulate reasoning), but using rules expressed in language, thus bypassing a…
  • Multi-class Hierarchical Question Classification for Multiple Choice Science Exams

    Dongfang Xu, Peter Jansen, Jaycie Martin, Zhengnan Xie, Vikas Yadav, Harish Tayyar Madabushi, Oyvind Tafjord, Peter Clark · IJCAI 2020
    Prior work has demonstrated that question classification (QC), recognizing the problem domain of a question, can help answer it more accurately. However, developing strong QC algorithms has been hindered by the limited size and complexity of annotated data available. To address this, we present the…
  • TransOMCS: From Linguistic Graphs to Commonsense Knowledge

    Hongming Zhang, Daniel Khashabi, Yangqiu Song, Dan Roth · IJCAI 2020
    Commonsense knowledge acquisition is a key problem for artificial intelligence. Conventional methods of acquiring commonsense knowledge generally require laborious and costly human annotations, which are not feasible on a large scale. In this paper, we explore a practical way of mining commonsense…
  • Not All Claims are Created Equal: Choosing the Right Approach to Assess Your Hypotheses

    Erfan Sadeqi Azer, Daniel Khashabi, Ashish Sabharwal, Dan Roth · ACL 2020
    Empirical research in Natural Language Processing (NLP) has adopted a narrow set of principles for assessing hypotheses, relying mainly on p-value computation, which suffers from several known issues. While alternative proposals have been well-debated and adopted in other fields, they remain rarely…
  • Temporal Common Sense Acquisition with Minimal Supervision

    Ben Zhou, Qiang Ning, Daniel Khashabi, Dan Roth · ACL 2020
    Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation on such concepts is costly. This work proposes a…
  • Belief Propagation Neural Networks

    J. Kuck, Shuvam Chakraborty, Hao Tang, R. Luo, Jiaming Song, A. Sabharwal, S. Ermon · arXiv 2020
    Learned neural solvers have successfully been used to solve combinatorial optimization and decision problems. More general counting variants of these problems, however, are still largely solved with hand-crafted solvers. To bridge this gap, we introduce belief propagation neural networks (BPNNs), a…
  • Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations

    Sumithra Bhakthavatsalam, Kyle Richardson, Niket Tandon, Peter Clark · arXiv 2020
    We present a new knowledge base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other available resources, it is the first that is all three of: accurate (90% precision), salient (covers relationships a person may mention), and has high coverage…
  • Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge

    Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, Jonathan Berant · arXiv 2020
    To what extent can a neural network systematically reason over symbolic facts? Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. Recently, it has been shown that Transformer-based models succeed in consistent…