AI & Fairness

We are building on AI2's expertise in NLP, computer vision, and engineering to deliver a tangible positive impact on fairness.

Over the next few months, we'll be working with renowned researchers and experts to continue shaping this project. Join us!

Leaders

  • Oren Etzioni, Chief Executive Officer
  • Nicole DeCario, Operations

We are hiring! Please see our current openings.

AI2 is committed to diversity, equity, and inclusion.

Read about ethical guidelines for crowdsourcing from AI2.

  • Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations

    Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez. ICCV 2019. In this work, we present a framework to measure and mitigate intrinsic biases with respect to protected variables, such as gender, in visual recognition tasks. We show that trained models significantly amplify the association of target labels with gender beyond what one would expect from biased…
  • The Risk of Racial Bias in Hate Speech Detection

    Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith. ACL 2019. We investigate how annotators' insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations. We first uncover unexpected correlations between surface markers of African American English (AAE) and…
  • Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets

    Mor Geva, Yoav Goldberg, Jonathan Berant. arXiv 2019. Crowdsourcing has been the prevalent paradigm for creating natural language understanding datasets in recent years. A common crowdsourcing practice is to recruit a small number of high-quality workers and have them massively generate examples. Having only a few workers generate the majority of…
  • Evaluating Gender Bias in Machine Translation

    Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer. ACL 2019. We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., "The doctor asked the…
  • Green AI

    Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni. arXiv 2019. The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018 [2]. These computations have a surprisingly large carbon footprint [38]. Ironically, deep learning was inspired by the human brain, which is…

“By working arm-in-arm with multiple stakeholders, we can address the important topics rising at the intersection of AI, people, and society.”
Eric Horvitz

The hidden costs of AI

Axios
October 29, 2019

At Tech’s Leading Edge, Worry About a Concentration of Power

The New York Times
September 26, 2019

Artificial Intelligence Can’t Think Without Polluting

The Wire
September 26, 2019

Artificial Intelligence Confronts a 'Reproducibility' Crisis

Wired
September 16, 2019

The secret price of artificial intelligence (המחיר המושתק של בינה מלאכותית)

ynet
August 12, 2019

AI researchers need to stop hiding the climate toll of their work

MIT Tech Review
August 2, 2019

Greening AI | New AI2 Initiative Promotes Model Efficiency

Synced
July 31, 2019

Amid a rapid rise in AI resource needs, AI2 campaigns to make it easier to be green

GeekWire
July 26, 2019