  • ARC Question Classification Dataset

    7,787 multiple choice questions annotated with question classification labels
    Aristo • 2019
    A dataset of detailed problem domain classification labels for each of the 7,787 multiple-choice science questions found in the AI2 Reasoning Challenge (ARC) dataset, to enable targeted pairing of questions with problem-specific solvers. Also included is a taxonomy of 462 detailed problem domains for grade-school science, organized into 6 levels of specificity.
  • What-If Question Answering

    Large-scale dataset of 39,705 "What if..." questions over procedural text
    Aristo • 2019
    The WIQA dataset V1 has 39,705 questions containing a perturbation and a possible effect in the context of a paragraph. The dataset is split into 29,808 train questions, 6,894 dev questions, and 3,003 test questions.
  • QuaRel Dataset

    2,771 story questions about qualitative relationships
    Aristo • 2018
    QuaRel is a crowdsourced dataset of 2,771 multiple-choice story questions, including their logical forms.
  • OpenBookQA Dataset

    5,957 multiple-choice questions probing a book of 1,326 science facts
    Aristo • 2018
    OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension.
  • ProPara Dataset

    488 richly annotated paragraphs about processes (containing 3,300 sentences)
    Aristo • 2018
    The ProPara dataset is designed to train and test comprehension of simple paragraphs describing processes (e.g., photosynthesis), supporting the task of predicting, tracking, and answering questions about how entities change during a process.
  • PeerRead

    Over 14K paper drafts and over 10K textual peer reviews
    Aristo • 2018
    PeerRead is a dataset of scientific peer reviews, made available to help researchers study this important artifact.
  • AI2 Reasoning Challenge (ARC) 2018

    7,787 multiple choice science questions and associated corpora
    Aristo • 2018
    A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.
  • ExplanationBank

    Explanation graphs for 1,680 questions
    Aristo • 2018
    A collection of resources for studying explanation-centered inference, including explanation graphs for 1,680 questions, with 4,950 tablestore rows, and other analyses of the knowledge required to answer elementary and middle-school science questions.
  • SciTail Dataset

    27,026 statements
    Aristo • 2017
    The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question and its correct answer choice are converted into an assertive statement to form the hypothesis.
  • SciQ Dataset

    13,679 science questions with supporting sentences
    Aristo • 2017
    The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry, Biology, and other subjects. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.