Papers

  • XOR QA: Cross-lingual Open-Retrieval Question Answering

    Akari Asai, Jungo Kasai, J. Clark, Kenton Lee, Eunsol Choi, Hannaneh Hajishirzi • NAACL 2021
    Multilingual question answering tasks typically assume that answers exist in the same language as the question. Yet in practice, many languages face both information scarcity—where languages have few reference articles—and information asymmetry—where…
  • Probing Contextual Language Models for Common Ground with Visual Representations

    Gabriel Ilharco, Rowan Zellers, Ali Farhadi, Hannaneh Hajishirzi • NAACL 2021
    The success of large-scale contextual language models has attracted great interest in probing what is encoded in their representations. In this work, we consider a new question: to what extent contextual representations of concrete nouns are aligned with…
  • Simplified Data Wrangling with ir_datasets

    Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, Nazli Goharian • arXiv 2021
    Managing the data for Information Retrieval (IR) experiments can be challenging. Dataset documentation is scattered across the Internet and once one obtains a copy of the data, there are numerous different data formats to work with. Even basic formats can… A short usage sketch of the ir_datasets API follows this list.
  • Augmenting Scientific Papers with Just-in-Time, Position-Sensitive Definitions of Terms and Symbols

    Andrew Head, Kyle Lo, Dongyeop Kang, Raymond Fok, Sam Skjonsberg, Daniel S. Weld, Marti A. Hearst • CHI 2021
    Despite the central importance of research papers to scientific progress, they can be difficult to read. Comprehension is often stymied when the information needed to understand a passage resides somewhere else—in another section, or in another paper. In this…
  • Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance

    Gagan Bansal, Tongshuang (Sherry) Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Túlio Ribeiro, Daniel S. Weld • CHI 2021
    Many researchers motivate explainable AI with studies showing that human-AI team performance on decision-making tasks improves when the AI explains its recommendations. However, prior studies observed improvements from explanations only when the AI, alone…
  • What Do We Mean by “Accessibility Research”? A Literature Survey of Accessibility Papers in CHI and ASSETS from 1994 to 2019

    K. Mack, Emma J. McDonnell, Dhruv Jain, Lucy Lu Wang, Jon Froehlich, Leah Findlater • CHI 2021
    Accessibility research has grown substantially in the past few decades, yet there has been no literature review of the field. To understand current and historical trends, we created and analyzed a dataset of accessibility papers appearing at CHI and ASSETS…
  • DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts

    Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi • ACL 2021
    Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DExperts: Decoding-time Experts, a decoding-time method for controlled text generation that combines a pretrained language model… A sketch of the DExperts logit combination follows this list.
  • GridTools: A framework for portable weather and climate applications

    A. Afanasyev, M. Bianco, L. Mosimann, C. Osuna, F. Thaler, H. Vogt, O. Fuhrer, J. VandeVondele, T. C. Schulthess • Elsevier 2021
    Weather forecasts and climate projections are of tremendous importance for economical and societal reasons. Software implementing weather and climate models is complex to develop and hard to maintain, and requires a large range of different competencies…
  • DeLighT: Deep and Light-weight Transformer

    Sachin Mehta, Marjan Ghazvininejad, Srini Iyer, Luke Zettlemoyer, Hannaneh Hajishirzi • ICLR 2021
    We introduce a very deep and light-weight transformer, DeLighT, that delivers similar or better performance than transformer-based models with significantly fewer parameters. DeLighT more efficiently allocates parameters both (1) within each Transformer block…
  • MultiModalQA: Complex Question Answering over Text, Tables and Images

    Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, Jonathan Berant • ICLR 2021
    When answering complex questions, people can seamlessly combine information from visual, textual and tabular sources. While interest in models that reason over multiple pieces of evidence has surged in recent years, there has been relatively little work on…
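
The "Simplified Data Wrangling with ir_datasets" entry above describes a unified Python interface over the many scattered IR dataset formats. Below is a minimal usage sketch based on the library's published API (ir_datasets.load plus the docs_iter/queries_iter/qrels_iter iterators); the dataset ID "vaswani" is just one example from the library's catalogue, and any registered ID works the same way.

```python
import ir_datasets  # pip install ir_datasets

# Load a dataset by its catalogue ID; the library downloads
# and caches the underlying files on first use.
dataset = ir_datasets.load("vaswani")

# Documents, queries, and relevance judgments are all exposed as
# iterators of namedtuples, regardless of the source file format.
for doc in dataset.docs_iter()[:3]:          # iterators support slicing
    print(doc.doc_id, doc.text[:60])

for query in dataset.queries_iter():
    print(query.query_id, query.text)
    break

for qrel in dataset.qrels_iter():
    print(qrel.query_id, qrel.doc_id, qrel.relevance)
    break
```

Because every dataset yields the same kinds of tuples with consistent field names, experiment code written against one collection carries over to others without per-format parsing.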
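The DExperts entry describes a decoding-time method that steers a pretrained LM using an "expert" and an "anti-expert" LM. In the paper, the next-token logits are combined as z̃ = z + α(z⁺ − z⁻) before the softmax; the sketch below shows only that combination step, with the α default and vocabulary size chosen for illustration rather than taken from the paper.

```python
import torch

def dexperts_next_token_logits(
    base_logits: torch.Tensor,    # z: logits from the pretrained LM, shape (vocab,)
    expert_logits: torch.Tensor,  # z+: logits from the expert LM
    anti_logits: torch.Tensor,    # z-: logits from the anti-expert LM
    alpha: float = 2.0,           # steering strength; illustrative default
) -> torch.Tensor:
    # DExperts combination: z~ = z + alpha * (z+ - z-).
    # Tokens the expert favors over the anti-expert get boosted;
    # tokens the anti-expert favors get suppressed.
    return base_logits + alpha * (expert_logits - anti_logits)

# Sampling then proceeds from softmax(z~) as usual.
# Random logits stand in for real model outputs; 50257 is GPT-2's vocab size.
vocab = 50257
probs = torch.softmax(
    dexperts_next_token_logits(
        torch.randn(vocab), torch.randn(vocab), torch.randn(vocab)
    ),
    dim=-1,
)
next_token = torch.multinomial(probs, num_samples=1)
```

The appeal of this formulation is that only output logits are combined, so the three models never need to share parameters and the pretrained base model is left untouched.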