Allen Institute for AI

Datasets

  • OpenBookQA Dataset

    5,957 multiple-choice questions probing a book of 1,326 science facts
    Aristo • 2018
    OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension. (A loading sketch for this question format appears after this list.)
  • Open Research Corpus

    Over 39 million published research papers in Computer Science, Neuroscience, and Biomedicine
    Semantic Scholar • 2018
    Over 39 million published research papers in Computer Science, Neuroscience, and Biomedicine. This is a subset of the full Semantic Scholar corpus, representing papers crawled from the Web and subjected to a number of filters.
  • ProPara Dataset

    488 richly annotated paragraphs about processes (containing 3,300 sentences)
    Aristo • 2018
    The ProPara dataset is designed to train and test comprehension of simple paragraphs describing processes (e.g., photosynthesis), supporting the task of predicting, tracking, and answering questions about how entities change during the process.
  • PeerRead

    Over 14K paper drafts and over 10K textual peer reviews
    Aristo • 2018
    PeerRead is a dataset of scientific peer reviews available to help researchers study this important artifact.
  • ComplexWebQuestions

    34,689 complex questions with their answers, web snippets, and SPARQL queries
    AI2 Israel, Question Understanding • 2018
    ComplexWebQuestions is a dataset for answering complex questions that require reasoning over multiple web snippets. It contains a large set of complex questions in natural language and can be used in multiple ways: 1) by interacting with a search engine, which is the focus of our paper (Talmor and Berant, 2018); 2) as a reading comprehension task: we release 12,725,989 web snippets that are relevant to the questions and were collected during the development of our model; 3) as a semantic parsing task: each question is paired with a SPARQL query that can be executed against Freebase to retrieve the answer (see the SPARQL sketch after this list).
  • AI2 Reasoning Challenge (ARC) 2018

    7,787 multiple-choice science questions and associated corpora
    Aristo • 2018
    A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.
  • ExplanationBank

    Explanation graphs for 1,680 questions
    Aristo • 2018
    A collection of resources for studying explanation-centered inference, including explanation graphs for 1,680 questions, with 4,950 tablestore rows, and other analyses of the knowledge required to answer elementary and middle-school science questions.
  • SciTail Dataset

    27,026 statements
    Aristo • 2017
    The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question is combined with its correct answer choice to form an assertive statement, which serves as the hypothesis (see the sketch after this list).
  • SciQ Dataset

    13,679 science questions with supporting sentences
    Aristo • 2017
    The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry, and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
  • TupleInf Open IE Dataset

    156K sentences for 4th grade questions, 107K sentences for 8th grade questions, and derived tuples
    Aristo • 2017
    The TupleInf Open IE dataset contains Open IE tuples extracted from 263K sentences that were used by the solver in the paper "Answering Complex Questions Using Open Information Extraction".
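
The multiple-choice sets above (OpenBookQA, ARC, SciQ) are typically released as JSONL files with one question per line. The sketch below is a minimal illustration of reading such a file; the file name and the field names (question stem, labelled choices, answer key) are assumptions based on the common AI2 release layout, not a guaranteed schema, so adjust them to the split you actually download.

```python
import json

def read_mc_questions(path):
    """Yield (stem, choices, answer_key) from an AI2-style multiple-choice JSONL file.

    Field names below are assumptions about the release format, not a guaranteed schema.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            stem = record["question"]["stem"]
            choices = {c["label"]: c["text"] for c in record["question"]["choices"]}
            yield stem, choices, record.get("answerKey")

if __name__ == "__main__":
    # Hypothetical file name; substitute the actual downloaded split.
    for stem, choices, answer in read_mc_questions("train.jsonl"):
        print(stem)
        for label, text in sorted(choices.items()):
            marker = "*" if label == answer else " "
            print(f"  {marker} ({label}) {text}")
        break  # show only the first question
```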
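
For ComplexWebQuestions, each question carries a SPARQL query intended to be run against Freebase. Since the public Freebase service has been retired, the sketch below assumes you have loaded a Freebase RDF dump into your own SPARQL endpoint; the endpoint URL and the `example["sparql"]` field name are placeholders, and the query string itself would come from the dataset's annotation.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint: assumes a local triplestore loaded with a Freebase dump.
ENDPOINT = "http://localhost:3030/freebase/sparql"

def run_annotated_query(sparql_query: str):
    """Execute a SPARQL query string from the dataset and return the bound values."""
    client = SPARQLWrapper(ENDPOINT)
    client.setQuery(sparql_query)
    client.setReturnFormat(JSON)
    results = client.query().convert()
    return [
        {var: binding[var]["value"] for var in binding}
        for binding in results["results"]["bindings"]
    ]

# Usage (field name "sparql" is an assumption about the release format):
# answers = run_annotated_query(example["sparql"])
```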
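
SciTail's hypotheses are formed by turning a question and its correct answer choice into a declarative statement, which is then paired with a web-derived premise and labelled for entailment. The snippet below is only a toy illustration of that pair structure; the sentences are hypothetical, the hand-written rewrite stands in for the authors' actual conversion procedure, and the field names are assumptions.

```python
# Toy illustration (not the authors' procedure) of a SciTail-style entailment pair.
question = "Which form of energy does a plant use to make food?"  # hypothetical example
answer = "light energy"                                           # hypothetical example

# Hand-written declarative rewrite of question + correct answer -> hypothesis:
hypothesis = "A plant uses light energy to make food."

example = {
    "premise": "Plants absorb light energy and use it to produce food.",  # hypothetical web sentence
    "hypothesis": hypothesis,
    "label": "entails",  # the alternative label would be "neutral"
}
print(example)
```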