Papers

  • Inference-Time Policy Adapters (IPA): Tailoring Extreme-Scale LMs without Fine-tuning

    Ximing Lu, Faeze Brahman, Peter West, Jaehun Jung, Khyathi Raghavi Chandu, Abhilasha Ravichander, Lianhui Qin, Prithviraj Ammanabrolu, Liwei Jiang, Sahana Ramnath, Nouha Dziri, Jillian R. Fisher, Bill Yuchen Lin, Skyler Hallinan, Xiang Ren, Sean Welleck, Yejin Choi. EMNLP 2023. Large language models excel at a variety of language tasks when prompted with examples or instructions. Yet controlling these models through prompting alone is limited. Tailoring language models through fine-tuning (e.g., via reinforcement learning) can be…
  • SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization

    Hyunwoo Kim, Jack Hessel, Liwei Jiang, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, Yejin Choi. EMNLP 2023. We present SODA: the first publicly available, million-scale high-quality social dialogue dataset. Using SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. In…
  • Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements

    Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hanna Hajishirzi. EMNLP 2023. Despite the much discussed capabilities of today's language models, they are still prone to silly and unexpected commonsense failures. We consider a retrospective verification approach that reflects on the correctness of LM outputs, and introduce Vera, a…
  • We're Afraid Language Models Aren't Modeling Ambiguity

    Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, Yejin Choi. EMNLP 2023. Ambiguity is an intrinsic feature of natural language. Managing ambiguity is a key part of human language understanding, allowing us to anticipate misunderstanding as communicators and revise our interpretations as listeners. As language models (LMs) are…
  • Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals

    Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi Chandu, Vivek Srikumar, Sameer Singh, Noah A. Smith. arXiv 2023. The inevitable appearance of spurious correlations in training datasets hurts the generalization of NLP models on unseen data. Previous work has found that datasets with paired inputs are prone to correlations between a specific part of the input (e.g., the…
  • COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements

    Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta, Maarten Sap. ACL Findings 2023. Warning: This paper contains content that may be offensive or upsetting. Understanding the harms and offensiveness of statements requires reasoning about the social and situational context in which statements are made. For example, the utterance "your English…
  • Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts

    Skyler Hallinan, Alisa Liu, Yejin Choi, Maarten Sap. ACL 2023. Text detoxification has the potential to mitigate the harms of toxicity by rephrasing text to remove offensive meaning, but subtle toxicity remains challenging to tackle. We introduce MaRCo, a detoxification algorithm that combines…
  • From Dogwhistles to Bullhorns: Unveiling Coded Rhetoric with Language Models

    Julia Mendelsohn, Ronan Le Bras, Yejin Choi, Maarten Sap. ACL 2023. Dogwhistles are coded expressions that simultaneously convey one meaning to a broad audience and a second one, often hateful or provocative, to a narrow in-group; they are deployed to evade both political repercussions and algorithmic content moderation. For…
  • NLPositionality: Characterizing Design Biases of Datasets and Models

    Sebastin Santy, Jenny T. Liang, Ronan Le Bras, Katharina Reinecke, Maarten Sap. ACL 2023. Design biases in NLP systems, such as performance differences for different populations, often stem from their creators' positionality, i.e., views and lived experiences shaped by identity and background. Despite the prevalence and risks of design biases…
  • ClarifyDelphi: Reinforced Clarification Questions with Defeasibility Rewards for Social and Moral Situations

    Valentina Pyatkin, Jena D. Hwang, Vivek Srikumar, Ximing Lu, Liwei Jiang, Yejin Choi, Chandra Bhagavatula. ACL 2023. Context is everything, even in commonsense moral reasoning. Changing contexts can flip the moral judgment of an action; lying to a friend is wrong in general, but may be morally acceptable if it is intended to protect their life. We present ClarifyDelphi, an…