Datasets

Viewing 31-40 of 63 datasets
  • Discrete Reasoning Over the content of Paragraphs (DROP)

    The DROP dataset contains 96k question-answer pairs (QAs) over 6.7k paragraphs, split between train (77k QAs), development (9.5k QAs), and a hidden test partition (9.5k QAs).
    AllenNLP, AI2 Irvine • 2019
    DROP is a QA dataset that tests comprehensive understanding of paragraphs. In this crowdsourced, adversarially created, 96k question-answering benchmark, a system must resolve multiple references in a question, map them onto a paragraph, and perform discrete operations over them (such as addition, counting, or sorting). An illustrative record is sketched after this list.
  • SciCite: Citation intent classification dataset

    A large dataset for citation intent classification based on citation text
    Semantic Scholar • 2019
    Citations play a unique role in scientific discourse and are crucial for understanding and analyzing scientific work. However, not all citations are equal: some refer to the use of a method from another work, some discuss the results or findings of other work, while others are merely background or acknowledgement citations. SciCite is a dataset of 11K manually annotated citation intents based on citation context in the computer science and biomedical domains. An illustrative record is sketched after this list.
  • QuaRel Dataset

    2,771 story questions about qualitative relationships
    Aristo • 2018
    QuaRel is a crowdsourced dataset of 2,771 multiple-choice story questions, including their logical forms.
  • OpenBookQA Dataset

    5,957 multiple-choice questions probing a book of 1,326 science facts
    Aristo • 2018
    OpenBookQA aims to promote research in advanced question-answering, probing a deeper understanding of both the topic (with salient facts summarized as an open book, also provided with the dataset) and the language it is expressed in. In particular, it contains questions that require multi-step reasoning, use of additional common and commonsense knowledge, and rich text comprehension.
  • Open Research Corpus

    Over 39 million published research papers in Computer Science, Neuroscience, and Biomedicine
    Semantic Scholar • 2018
    Over 39 million published research papers in Computer Science, Neuroscience, and Biomedicine. This is a subset of the full Semantic Scholar corpus, which represents papers crawled from the web and subjected to a number of filters.
  • ProPara Dataset

    488 richly annotated paragraphs about processes (containing 3,300 sentences)
    Aristo • 2018
    The ProPara dataset is designed to train and test comprehension of simple paragraphs describing processes (e.g., photosynthesis), with a focus on predicting, tracking, and answering questions about how entities change during the process.
  • PeerRead

    Over 14K paper drafts and over 10K textual peer reviews
    Aristo • 2018
    PeerRead is a dataset of scientific peer reviews available to help researchers study this important artifact.
  • ComplexWebQuestions

    34,689 complex questions with their answers, web snippets, and SPARQL queries
    AI2 Israel, Question Understanding • 2018
    ComplexWebQuestions is a dataset for answering complex questions that require reasoning over multiple web snippets. It contains a large set of complex questions in natural language and can be used in multiple ways: 1) by interacting with a search engine, which is the focus of our paper (Talmor and Berant, 2018); 2) as a reading comprehension task: we release 12,725,989 web snippets that are relevant to the questions and were collected during the development of our model; 3) as a semantic parsing task: each question is paired with a SPARQL query that can be executed against Freebase to retrieve the answer (see the sketch after this list).
  • AI2 Reasoning Challenge (ARC) 2018

    7,787 multiple-choice science questions and associated corpora
    Aristo • 2018
    A new dataset of 7,787 genuine grade-school level, multiple-choice science questions, assembled to encourage research in advanced question-answering. The dataset is partitioned into a Challenge Set and an Easy Set, where the former contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. We are also including a corpus of over 14 million science sentences relevant to the task, and an implementation of three neural baseline models for this dataset. We pose ARC as a challenge to the community.
  • ExplanationBank

    Explanation graphs for 1,680 questions
    Aristo • 2018
    A collection of resources for studying explanation-centered inference, including explanation graphs for 1,680 questions, with 4,950 tablestore rows, and other analyses of the knowledge required to answer elementary and middle-school science questions.
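
To make the DROP format above concrete, the following Python sketch shows what a single record might look like; the field names and values are illustrative assumptions, not the dataset's released schema.

    # Hypothetical DROP-style record; field names are illustrative, not the official schema.
    drop_example = {
        "passage": "The home team scored 21 points in the first half and 13 in the second.",
        "question": "How many more points were scored in the first half than in the second?",
        "answer": {"number": 21 - 13},  # discrete operation: subtraction over numbers in the passage
    }

    # Answering requires resolving both numbers in the passage and subtracting them.
    assert drop_example["answer"]["number"] == 8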
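
For SciCite, the sketch below pairs citation sentences with the three intents mentioned in its description (method, result, background); the record layout and example sentences are assumptions for illustration, not the released files.

    # Hypothetical SciCite-style records; the field names and sentences are invented.
    scicite_examples = [
        {"string": "We use the entity tagger of [1] to extract mentions.", "label": "method"},
        {"string": "Our F1 of 92.1 is comparable to the 91.8 reported in [2].", "label": "result"},
        {"string": "Citation analysis has a long history in bibliometrics [3].", "label": "background"},
    ]

    print(sorted({ex["label"] for ex in scicite_examples}))  # ['background', 'method', 'result']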
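
For ComplexWebQuestions, the sketch below shows how a question, its web snippets, and a Freebase SPARQL query might sit together in one entry; the question, snippets, and query text are invented for illustration rather than taken from the dataset.

    # Hypothetical ComplexWebQuestions-style entry; all values are invented for illustration.
    cwq_example = {
        "question": "Which films did the director of 'Example Film' also write?",
        "web_snippets": [
            "Example Film (2005) was directed by Jane Doe ...",
            "Jane Doe's writing credits include ...",
        ],
        "sparql": """
            SELECT DISTINCT ?film2 WHERE {
              ?film1 ns:type.object.name "Example Film"@en .
              ?film1 ns:film.film.directed_by ?director .
              ?film2 ns:film.film.written_by ?director .
            }
        """,  # a semantic parser would map the question to a query like this and run it against Freebase
    }

    print(cwq_example["question"])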