Papers
Transformers as Soft Reasoners over Language
Peter Clark, Oyvind Tafjord, Kyle Richardson
IJCAI • 2020
AI has long pursued the goal of having systems reason over explicitly provided knowledge, but building suitable representations has proved challenging. Here we explore whether transformers can similarly learn to reason (or emulate reasoning), but using rules…

TransOMCS: From Linguistic Graphs to Commonsense Knowledge
Hongming Zhang, Daniel Khashabi, Yangqiu Song, Dan Roth
IJCAI • 2020
Commonsense knowledge acquisition is a key problem for artificial intelligence. Conventional methods of acquiring commonsense knowledge generally require laborious and costly human annotations, which are not feasible on a large scale. In this paper, we…

Not All Claims are Created Equal: Choosing the Right Approach to Assess Your Hypotheses
Erfan Sadeqi Azer, Daniel Khashabi, Ashish Sabharwal, Dan Roth
ACL • 2020
Empirical research in Natural Language Processing (NLP) has adopted a narrow set of principles for assessing hypotheses, relying mainly on p-value computation, which suffers from several known issues. While alternative proposals have been well-debated and…

Temporal Common Sense Acquisition with Minimal Supervision
Ben Zhou, Qiang Ning, Daniel Khashabi, Dan Roth
ACL • 2020
Temporal common sense (e.g., duration and frequency of events) is crucial for understanding natural language. However, its acquisition is challenging, partly because such information is often not expressed explicitly in text, and human annotation on such…

Procedural Reading Comprehension with Attribute-Aware Context Flow
Aida Amini, Antoine Bosselut, Bhavana Dalvi Mishra, Yejin Choi, Hannaneh Hajishirzi
AKBC • 2020
Procedural texts often describe processes (e.g., photosynthesis and cooking) that happen over entities (e.g., light, food). In this paper, we introduce an algorithm for procedural reading comprehension by translating the text into a general formalism that…

Do Dogs have Whiskers? A New Knowledge Base of hasPart Relations
Sumithra Bhakthavatsalam, Kyle Richardson, Niket Tandon, Peter Clark
arXiv • 2020
We present a new knowledge base (KB) of hasPart relationships, extracted from a large corpus of generic statements. Complementary to other resources available, it is the first which is all three of: accurate (90% precision), salient (covers relationships a…

GenericsKB: A Knowledge Base of Generic Statements
Sumithra Bhakthavatsalam, Chloe Anastasiades, Peter Clark
arXiv • 2020
We present a new resource for the NLP community, namely a large (3.5M+ sentence) knowledge base of *generic statements*, e.g., "Trees remove carbon dioxide from the atmosphere", collected from multiple corpora. This is the first large resource to contain…

Probing Natural Language Inference Models through Semantic Fragments
Kyle Richardson, Hai Hu, Lawrence S. Moss, Ashish Sabharwal
AAAI • 2020
Do state-of-the-art models for language understanding already have, or can they easily learn, abilities such as boolean coordination, quantification, conditionals, comparatives, and monotonicity reasoning (i.e., reasoning about word substitutions in…

MonaLog: a Lightweight System for Natural Language Inference Based on Monotonicity
Hai Hu, Qi Chen, Kyle Richardson, Atreyee Mukherjee, Lawrence S. Moss, Sandra Kübler
SCIL • 2020
We present a new logic-based inference engine for natural language inference (NLI) called MonaLog, which is based on natural logic and the monotonicity calculus. In contrast to existing logic-based approaches, our system is intentionally designed to be as…

What's Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering
Tushar Khot, Ashish Sabharwal, Peter Clark
EMNLP • 2019
Multi-hop textual question answering requires combining information from multiple sentences. We focus on a natural setting where, unlike typical reading comprehension, only partial information is provided with each question. The model must retrieve and use…