Papers

  • Quantifying the narrative flow of imagined versus autobiographical stories

    Maarten Sap, A. Jafarpour, Yejin Choi, Noah A. Smith, J. Pennebaker, E. Horvitz • Proceedings of the National Academy of Sciences of the United States of America • 2022. Lifelong experiences and learned knowledge lead to shared expectations about how common situations tend to unfold. Such knowledge of narrative event flow enables people to weave together a story. However, comparable computational tools to evaluate the flow of…
  • Generating Sequences by Learning to Self-Correct

    S. Welleck, Ximing Lu, Peter West, Faeze Brahman, T. Shen, Daniel Khashabi, Yejin Choi • arXiv • 2022. Sequence generation applications require satisfying semantic constraints, such as ensuring that programs are correct, using certain keywords, or avoiding undesirable content. Language models, whether fine-tuned or prompted with few-shot demonstrations…
  • Learning to Decompose: Hypothetical Question Decomposition Based on Comparable Texts

    Ben Zhou, Kyle Richardson, Xiaodong Yu, Dan Roth • EMNLP • 2022. Explicit decomposition modeling, which involves breaking down complex tasks into more straightforward and often more interpretable sub-tasks, has long been a central theme in developing robust and interpretable NLU systems. However, despite the many datasets…
  • FeedLens: Polymorphic Lenses for Personalizing Exploratory Search over Knowledge Graphs

    Harmanpreet Kaur, Doug Downey, Amanpreet Singh, Evie (Yu-Yen) Cheng, Daniel S. Weld, Jonathan Bragg • UIST • 2022. The vast scale and open-ended nature of knowledge graphs (KGs) make exploratory search over them cognitively demanding for users. We introduce a new technique, polymorphic lenses, that improves exploratory search over a KG by obtaining new leverage from the…
  • Threddy: An Interactive System for Personalized Thread-based Exploration and Organization of Scientific Literature

    Hyeonsu B. Kang, Joseph Chee Chang, Yongsung Kim, Aniket Kittur • UIST • 2022. Reviewing the literature to understand relevant threads of past work is a critical part of research and a vehicle for learning. However, as the scientific literature grows, the challenges for users to find and make sense of the many different threads of research…
  • Just-DREAM-about-it: Figurative Language Understanding with DREAM-FLUTE

    Yuling Gu, Yao Fu, Valentina Pyatkin, Ian Magnusson, Bhavana Dalvi Mishra, Peter Clark • EMNLP • The Third Workshop on Figurative Language Processing • 2022. Figurative language (e.g., “he flew like the wind”) is challenging to understand, as it is hard to tell what implicit information is being conveyed from the surface form alone. We hypothesize that to perform this task well, the reader needs to mentally…
  • Referee: Reference-Free Sentence Summarization with Sharper Controllability through Symbolic Knowledge Distillation

    Melanie Sclar, Peter West, Sachin Kumar, Yulia Tsvetkov, Yejin Choi • Conference on Empirical Methods in Natural Language Processing • 2022. We present Referee, a novel framework for sentence summarization that can be trained reference-free (i.e., requiring no gold summaries for supervision), while allowing direct control over the compression ratio. Our work is the first to demonstrate that reference…
  • SciFact-Open: Towards open-domain scientific claim verification

    David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Iz Beltagy, Lucy Lu Wang, Hannaneh Hajishirzi • EMNLP • 2022. While research on scientific claim verification has led to the development of powerful systems that appear to approach human performance, these approaches have yet to be tested in a realistic setting against large corpora of scientific literature. Moving to…
  • Webly Supervised Concept Expansion for General Purpose Vision Models

    Amita Kamath, Christopher Clark, Tanmay Gupta, Eric Kolve, Derek Hoiem, Aniruddha Kembhavi • ECCV • 2022. General purpose vision (GPV) systems [25] are models that are designed to solve a wide array of visual tasks without requiring architectural changes. Today, GPVs primarily learn both skills and concepts from large fully supervised datasets. Scaling GPVs to…
  • Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs

    Maarten Sap, Ronan Lebras, Daniel Fried, Yejin Choi • EMNLP • 2022. Social intelligence and Theory of Mind (ToM), i.e., the ability to reason about the different mental states, intents, and reactions of all people involved, allow humans to effectively navigate and understand everyday social interactions. As NLP systems are…