Oyvind Tafjord

Oyvind Tafjord is a Research Scientist at AI2. Prior to AI2, he was the Chief Architect of Wolfram|Alpha, playing an integral role in building up that system, from parsing algorithms and knowledge representation to overall systems architecture. He received a Ph.D. in theoretical physics from Princeton in 1999 and a master's degree from the Norwegian Institute of Technology in 1994.

Semantic Scholar | Google Scholar | Contact

Papers

  • Semantic Parsing to Probabilistic Programs for Situated Question Answering

    Jayant Krishnamurthy, Oyvind Tafjord, and Aniruddha Kembhavi. EMNLP 2016

    Situated question answering is the problem of answering questions about an environment such as an image or diagram. This problem requires jointly interpreting a question and an environment using background knowledge to select the correct answer. We present Parsing to Probabilistic Programs (P3), a novel situated question answering model that can use background knowledge and global features of the question/environment interpretation while retaining efficient approximate inference. Our key insight is to treat semantic parses as probabilistic programs that execute nondeterministically and whose possible executions represent environmental uncertainty. We evaluate our approach on a new, publicly-released data set of 5000 science diagram questions, outperforming several competitive classical and neural baselines.
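The paper's central idea can be illustrated with a minimal sketch (not the authors' P3 implementation): a program whose nondeterministic choice points each carry a probability, so that every complete execution path is one weighted interpretation of the environment. The diagram-question data below is hypothetical.

```python
from itertools import product

def nondet_execute(choice_points):
    """Enumerate all executions over weighted choice points.

    choice_points: one list of (value, probability) pairs per
    nondeterministic decision in the program. Yields every complete
    execution path together with its joint probability.
    """
    for path in product(*choice_points):
        values = tuple(v for v, _ in path)
        prob = 1.0
        for _, p in path:
            prob *= p
        yield values, prob

# Hypothetical diagram question: which detected object is "the arrow",
# and which way does it point? Each execution commits to one reading.
detections = [
    [("arrow_a", 0.7), ("arrow_b", 0.3)],        # uncertain object identity
    [("points_up", 0.6), ("points_down", 0.4)],  # uncertain attribute
]

executions = sorted(nondet_execute(detections), key=lambda x: -x[1])
best, best_p = executions[0]  # most probable joint interpretation
```

In the real system the choice points come from a semantic parse and the weights are learned; exhaustive enumeration as above is replaced by approximate inference.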

  • Moving Beyond the Turing Test with the Allen AI Science Challenge

    Carissa Schoenick, Peter Clark, Oyvind Tafjord, Peter Turney, and Oren Etzioni. CACM 2016

    Given recent successes in AI (e.g., AlphaGo's victory against Lee Sedol in the game of Go), it has become increasingly important to assess how close AI systems are to human-level intelligence. This paper describes the Allen AI Science Challenge, an approach toward that goal that led to a unique Kaggle competition; we present the competition's results, the lessons learned, and our next steps.

  • Combining Retrieval, Statistics, and Inference to Answer Elementary Science Questions

    Peter Clark, Oren Etzioni, Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, and Peter Turney. AAAI 2016

    What capabilities are required for an AI system to pass standard 4th Grade Science Tests? Previous work has examined the use of Markov Logic Networks (MLNs) to represent the requisite background knowledge and interpret test questions, but did not improve upon an information retrieval (IR) baseline. In this paper, we describe an alternative approach that operates at three levels of representation and reasoning: information retrieval, corpus statistics, and simple inference over a semi-automatically constructed knowledge base, to achieve substantially improved results. We evaluate the methods on six years of unseen, unedited exam questions from the NY Regents Science Exam (using only non-diagram, multiple choice questions), and show that our overall system’s score is 71.3%, an improvement of 23.8% (absolute) over the MLN-based method described in previous work. We conclude with a detailed analysis, illustrating the complementary strengths of each method in the ensemble. Our datasets are being released to enable further research.
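The combination of solvers operating at different levels can be sketched as a weighted vote over answer options. This is a hedged illustration of the general ensemble pattern, not the paper's actual scoring scheme; the solver scores and weights below are invented.

```python
def ensemble_answer(scores_by_solver, weights):
    """Pick the answer option with the highest weighted sum of solver scores."""
    options = next(iter(scores_by_solver.values())).keys()
    combined = {
        opt: sum(weights[s] * scores[opt] for s, scores in scores_by_solver.items())
        for opt in options
    }
    return max(combined, key=combined.get), combined

# Hypothetical per-option scores for one 4-way multiple-choice question,
# from three independent solvers (IR, corpus statistics, rule inference).
scores = {
    "ir":        {"A": 0.2, "B": 0.5, "C": 0.2, "D": 0.1},
    "stats":     {"A": 0.1, "B": 0.4, "C": 0.3, "D": 0.2},
    "inference": {"A": 0.0, "B": 0.6, "C": 0.1, "D": 0.3},
}
weights = {"ir": 1.0, "stats": 0.8, "inference": 1.2}

answer, combined = ensemble_answer(scores, weights)  # all three favor "B"
```

The strength of such an ensemble, as the abstract's analysis notes, is that the component methods fail on different questions, so the weighted combination outperforms any single level of representation.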

  • Automatic Construction of Inference-Supporting Knowledge Bases

    Peter Clark, Niranjan Balasubramanian, Sumithra Bhakthavatsalam, Kevin Humphreys, Jesse Kinkead, Ashish Sabharwal, and Oyvind Tafjord. AKBC 2014 | Best Paper Award

    While there has been tremendous progress in automatic database population in recent years, most human knowledge does not naturally fit into a database form. For example, knowledge that "metal objects can conduct electricity" or "animals grow fur to help them stay warm" requires a substantially different approach to both acquisition and representation. This kind of knowledge is important because it can support inference: e.g., with some associated confidence, if an object is made of metal then it can conduct electricity; if an animal grows fur then it will stay warm. If we want our AI systems to understand and reason about the world, then acquisition of this kind of inferential knowledge is essential. In this paper, we describe our work on automatically constructing an inferential knowledge base and applying it to a question-answering task. Rather than trying to induce rules from examples, or enter them by hand, our goal is to acquire much of this knowledge directly from text. Our premise is that much inferential knowledge is written down explicitly, in particular in textbooks, and can be extracted with reasonable reliability. We describe several challenges that this approach poses, and the innovative, partial solutions that we have developed. Finally, we speculate on the longer-term evolution of this work.
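The kind of confidence-weighted inference described above can be sketched with the abstract's own example rules. The rule representation and confidence values here are illustrative assumptions, not the paper's actual knowledge base format.

```python
# Text-extracted inference rules: (antecedent, consequent, confidence).
# The confidences are made-up placeholders for extraction reliability.
rules = [
    ("made_of_metal", "conducts_electricity", 0.9),
    ("grows_fur", "stays_warm", 0.8),
]

def infer(facts):
    """Forward-chain one step: derive (property, confidence) pairs
    for every rule whose antecedent is among the observed facts."""
    derived = {}
    for antecedent, consequent, conf in rules:
        if antecedent in facts:
            # Keep the highest confidence if several rules derive the
            # same conclusion.
            derived[consequent] = max(conf, derived.get(consequent, 0.0))
    return derived

derived = infer({"made_of_metal"})  # {"conducts_electricity": 0.9}
```

A question-answering system can then prefer answer options whose supporting facts are derivable with high confidence, which is the role the inferential knowledge base plays in the paper's QA task.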