Papers
Easy, Reproducible and Quality-Controlled Data Collection with Crowdaq
Qiang Ning, Hao Wu, Pradeep Dasigi, Dheeru Dua, Matt Gardner, Robert L. Logan IV, Ana Marasović, Z. Nie
EMNLP • Demo • 2020
High-quality and large-scale data are key to success for AI systems. However, large-scale data annotation efforts are often confronted with a set of common challenges: (1) designing a user-friendly annotation interface; (2) training enough annotators…

Grounded Compositional Outputs for Adaptive Language Modeling
Nikolaos Pappas, Phoebe Mulcaire, Noah A. Smith
EMNLP • 2020
Language models have emerged as a central component across NLP, and a great deal of progress depends on the ability to cheaply adapt them (e.g., through finetuning) to new domains and tasks. A language model's vocabulary---typically selected before training…

IIRC: A Dataset of Incomplete Information Reading Comprehension Questions
James Ferguson, Matt Gardner, Hannaneh Hajishirzi, Tushar Khot, Pradeep Dasigi
EMNLP • 2020
Humans often have to read multiple documents to address their information needs. However, most existing reading comprehension (RC) tasks only focus on questions for which the contexts provide all the information required to answer them, thus not evaluating a…

Improving Compositional Generalization in Semantic Parsing
Inbar Oren, Jonathan Herzig, Nitish Gupta, Matt Gardner, Jonathan Berant
Findings of EMNLP • 2020
Generalization of models to out-of-distribution (OOD) data has captured tremendous attention recently. Specifically, compositional generalization, i.e., whether a model generalizes to new structures built of components observed during training, has sparked…

Learning from Task Descriptions
Orion Weller, Nick Lourie, Matt Gardner, Matthew Peters
EMNLP • 2020
Typically, machine learning systems solve new tasks by training on thousands of examples. In contrast, humans can solve new tasks by reading some instructions, with perhaps an example or two. To take a step toward closing this gap, we introduce a framework…

MedICaT: A Dataset of Medical Images, Captions, and Textual References
Sanjay Subramanian, Lucy Lu Wang, Sachin Mehta, Ben Bogin, Madeleine van Zuylen, Sravanthi Parasa, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi
Findings of EMNLP • 2020
Understanding the relationship between figures and text is key to scientific document understanding. Medical figures in particular are quite complex, often consisting of several subfigures (75% of figures in our dataset), with detailed text describing their…

MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics
Anthony Chen, Gabriel Stanovsky, S. Singh, Matt Gardner
EMNLP • 2020
Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by existing generation metrics, which rely on token overlap…

Multilevel Text Alignment with Cross-Document Attention
Xuhui Zhou, Nikolaos Pappas, Noah A. Smith
EMNLP • 2020
Text alignment finds application in tasks such as citation recommendation and plagiarism detection. Existing alignment methods operate at a single, predefined level and cannot learn to align texts at, for example, sentence and document levels. We propose a…

Multi-Step Inference for Reasoning over Paragraphs
Jiangming Liu, Matt Gardner, Shay B. Cohen, Mirella Lapata
EMNLP • 2020
Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives. Prior work has largely tried to do this either symbolically or with black-box transformers. We present a middle ground between these two…

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
Ana Marasović, Chandra Bhagavatula, J. Park, Ronan Le Bras, Noah A. Smith, Yejin Choi
Findings of EMNLP • 2020
Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study…