Allen Institute for AI

Papers

  • Domain-Specific Lexical Grounding in Noisy Visual-Textual Documents

    Gregory Yauney, Jack Hessel, David Mimno • EMNLP • 2020. Images can give us insights into the contextual meanings of words, but current image-text grounding approaches require detailed annotations. Such granular annotation is rare, expensive, and unavailable in most domain-specific contexts. In contrast, unlabeled multi-image, multi-sentence documents are…
  • Easy, Reproducible and Quality-Controlled Data Collection with Crowdaq

    Qiang Ning, Hao Wu, Pradeep Dasigi, Dheeru Dua, Matt Gardner, Robert L. Logan IV, Ana Marasović, Z. Nie • EMNLP • Demo • 2020. High-quality and large-scale data are key to success for AI systems. However, large-scale data annotation efforts are often confronted with a set of common challenges: (1) designing a user-friendly annotation interface; (2) training enough annotators efficiently; and (3) reproducibility. To address…
  • Grounded Compositional Outputs for Adaptive Language Modeling

    Nikolaos Pappas, Phoebe Mulcaire, Noah A. Smith • EMNLP • 2020. Language models have emerged as a central component across NLP, and a great deal of progress depends on the ability to cheaply adapt them (e.g., through fine-tuning) to new domains and tasks. A language model's vocabulary---typically selected before training and permanently fixed later---affects its…
  • IIRC: A Dataset of Incomplete Information Reading Comprehension Questions

    James Ferguson, Matt Gardner, Hannaneh Hajishirzi, Tushar Khot, Pradeep Dasigi • EMNLP • 2020. Humans often have to read multiple documents to address their information needs. However, most existing reading comprehension (RC) tasks only focus on questions for which the contexts provide all the information required to answer them, thus not evaluating a system's performance at identifying a…
  • Improving Compositional Generalization in Semantic Parsing

    Inbar Oren, Jonathan Herzig, Nitish Gupta, Matt Gardner, Jonathan Berant • Findings of EMNLP • 2020. Generalization of models to out-of-distribution (OOD) data has captured tremendous attention recently. Specifically, compositional generalization, i.e., whether a model generalizes to new structures built of components observed during training, has sparked substantial interest. In this work, we…
  • Learning from Task Descriptions

    Orion Weller, Nick Lourie, Matt Gardner, Matthew Peters • EMNLP • 2020
  • MedICaT: A Dataset of Medical Images, Captions, and Textual References

    Sanjay Subramanian, Lucy Lu Wang, Sachin Mehta, Ben Bogin, Madeleine van Zuylen, Sravanthi Parasa, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi • Findings of EMNLP • 2020. Understanding the relationship between figures and text is key to scientific document understanding. Medical figures in particular are quite complex, often consisting of several subfigures (75% of figures in our dataset), with detailed text describing their content. Previous work studying figures…
  • MOCHA: A Dataset for Training and Evaluating Generative Reading Comprehension Metrics

    Anthony Chen, Gabriel Stanovsky, Sameer Singh, Matt Gardner • EMNLP • 2020. Posing reading comprehension as a generation problem provides a great deal of flexibility, allowing for open-ended questions with few restrictions on possible answers. However, progress is impeded by existing generation metrics, which rely on token overlap and are agnostic to the nuances of reading…
  • Multilevel Text Alignment with Cross-Document Attention

    Xuhui Zhou, Nikolaos Pappas, Noah A. Smith • EMNLP • 2020. Text alignment finds application in tasks such as citation recommendation and plagiarism detection. Existing alignment methods operate at a single, predefined level and cannot learn to align texts at, for example, sentence and document levels. We propose a new learning approach that equips…
  • Multi-Step Inference for Reasoning over Paragraphs

    Jiangming Liu, Matt Gardner, Shay B. Cohen, Mirella Lapata • EMNLP • 2020. Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives. Prior work has largely tried to do this either symbolically or with black-box transformers. We present a middle ground between these two extremes: a compositional model reminiscent…