Papers

  • Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations

    Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez. ICCV 2019.
    In this work, we present a framework to measure and mitigate intrinsic biases with respect to protected variables --such as gender-- in visual recognition tasks. We show that trained models significantly amplify the association of target labels with gender beyond what one would expect from biased…
  • The Risk of Racial Bias in Hate Speech Detection

    Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith. ACL 2019.
    We investigate how annotators’ insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations. We first uncover unexpected correlations between surface markers of African American English (AAE) and…
  • Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets

    Mor Geva, Yoav Goldberg, Jonathan Berant. arXiv 2019.
    Crowdsourcing has been the prevalent paradigm for creating natural language understanding datasets in recent years. A common crowdsourcing practice is to recruit a small number of high-quality workers, and have them massively generate examples. Having only a few workers generate the majority of…
  • Evaluating Gender Bias in Machine Translation

    Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer. ACL 2019.
    We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., "The doctor asked the…
  • Green AI

    Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni. arXiv 2019.
    The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018 [2]. These computations have a surprisingly large carbon footprint [38]. Ironically, deep learning was inspired by the human brain, which is…
  • Quantifying Sex Bias in Clinical Studies at Scale With Automated Data Extraction

    Sergey Feldman, Waleed Ammar, Kyle Lo, Elly Trepman, Madeleine van Zuylen, Oren Etzioni. JAMA 2019.
    Importance: Analyses of female representation in clinical studies have been limited in scope and scale. Objective: To perform a large-scale analysis of global enrollment sex bias in clinical studies. Design, Setting, and Participants: In this cross-sectional study, clinical studies from published…
  • Gender trends in computer science authorship

    Lucy Lu Wang, Gabriel Stanovsky, Luca Weihs, Oren Etzioni. arXiv 2019.
    A comprehensive and up-to-date analysis of Computer Science literature (2.87 million papers through 2018) reveals that, if current trends continue, parity between the number of male and female authors will not be reached in this century. Under our most optimistic projection models, gender parity is…
  • Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them

    Hila Gonen, Yoav Goldberg. NAACL 2019.
    Word embeddings are widely used in NLP for a vast range of tasks. It was shown that word embeddings derived from text corpora reflect gender biases in society. This phenomenon is pervasive and consistent across different word embedding models, causing serious concern. Several recent works tackle…
  • Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints

    Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordóñez, Kai-Wei Chang. EMNLP 2017.
    Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding…
  • Designing AI Systems that Obey Our Laws and Values

    Amitai Etzioni, Oren Etzioni. CACM 2016.
    Operational AI systems (for example, self-driving cars) need to obey both the law of the land and our values. We propose AI oversight systems ("AI Guardians") as an approach to addressing this challenge, and to respond to the potential risks associated with increasingly autonomous AI systems. These…