Papers

  • Question Decomposition with Dependency Graphs

    Matan Hasson, Jonathan Berant • AKBC • 2021
    QDMR is a meaning representation for complex questions, which decomposes questions into a sequence of atomic steps. While state-of-the-art QDMR parsers use the common sequence-to-sequence (seq2seq) approach, a QDMR structure fundamentally describes labeled…
  • All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text

    Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, Noah A. Smith • ACL • 2021
    Human evaluations are typically considered the gold standard in natural language generation, but as models' fluency improves, how well can evaluators detect and judge machine-generated text? We run a study assessing non-experts' ability to distinguish between…
  • Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies

    Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant • TACL • 2021
    A key limitation in current datasets for multi-hop reasoning is that the required steps for answering the question are mentioned in it explicitly. In this work, we introduce STRATEGYQA, a question answering (QA) benchmark where the required reasoning steps…
  • Edited Media Understanding Frames: Reasoning about the Intent and Implications of Visual Disinformation

    Jeff Da, Maxwell Forbes, Rowan Zellers, Anthony Zheng, Jena D. Hwang, Antoine Bosselut, Yejin Choi • ACL • 2021
    Multimodal disinformation, from `deepfakes' to simple edits that deceive, is an important societal problem. Yet at the same time, the vast majority of media edits are harmless -- such as a filtered vacation photo. The difference between this example, and…
  • Effective Attention Sheds Light On Interpretability

    Kaiser Sun and Ana Marasović • Findings of ACL • 2021
    An attention matrix of a transformer self-attention sublayer can provably be decomposed into two components and only one of them (effective attention) contributes to the model output. This leads us to ask whether visualizing effective attention gives different…
  • Explaining NLP Models via Minimal Contrastive Editing (MiCE)

    Alexis Ross, Ana Marasović, Matthew E. Peters • Findings of ACL • 2021
    Humans give contrastive explanations that explain why an observed event happened rather than some other counterfactual event (the contrast case). Despite the important role that contrastivity plays in how people generate and evaluate explanations, this…
  • Explaining Relationships Between Scientific Documents

    Kelvin Luu, Xinyi Wu, Rik Koncel-Kedziorski, Kyle Lo, Isabel Cachola, Noah A. Smith • ACL • 2021
    We address the task of explaining relationships between two scientific documents using natural language text. This task requires modeling the complex content of long technical documents, deducing a relationship between these documents, and expressing that…
  • Few-Shot Question Answering by Pretraining Span Selection

    Ori Ram, Yuval Kirstain, Jonathan Berant, A. Globerson, Omer Levy • ACL • 2021
    In a number of question answering (QA) benchmarks, pretrained models have reached human parity through fine-tuning on an order of 100,000 annotated questions and answers. We explore the more realistic few-shot setting, where only a few hundred training…
  • How effective is BERT without word ordering? Implications for language understanding and data privacy

    Jack Hessel, Alexandra Schofield • ACL • 2021
    Ordered word sequences contain the rich structures that define language. However, it’s often not clear if or how modern pretrained language models utilize these structures. We show that the token representations and self-attention activations within BERT are…
  • Neural Extractive Search

    Shaul Ravfogel, Hillel Taub-Tabib, Yoav Goldberg • ACL Demo Track • 2021
    Domain experts often need to extract structured information from large corpora. We advocate for a search paradigm called “extractive search”, in which a search query is enriched with capture-slots, to allow for such rapid extraction. Such an extractive search…