AI & Fairness
We are building on AI2's expertise in NLP, computer vision, and engineering to deliver a tangible positive impact on fairness.
AI2 is committed to diversity, equity, and inclusion.
Read about ethical guidelines for crowdsourcing from AI2.
Recent Papers
Measuring the Carbon Intensity of AI in Cloud Instances
Jesse Dodge, Taylor Prewitt, Rémi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, A. Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan
FAccT • 2022
The advent of cloud computing has provided people around the world with unprecedented access to computational power and enabled rapid growth in technologies such as machine learning, the computational demands of which incur a high energy cost and a…

Gender trends in computer science authorship
Lucy Lu Wang, Gabriel Stanovsky, Luca Weihs, Oren Etzioni
CACM • 2021
A comprehensive and up-to-date analysis of Computer Science literature (2.87 million papers through 2018) reveals that, if current trends continue, parity between the number of male and female authors will not be reached in this century. Under our most…

Green AI
Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni
CACM • 2020
The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018 [2]. These computations have a surprisingly large carbon footprint [38]. Ironically, deep learning was…

Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations
Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, Vicente Ordonez
ICCV • 2019
In this work, we present a framework to measure and mitigate intrinsic biases with respect to protected variables --such as gender-- in visual recognition tasks. We show that trained models significantly amplify the association of target labels with gender…

The Risk of Racial Bias in Hate Speech Detection
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, Noah A. Smith
ACL • 2019
We investigate how annotators’ insensitivity to differences in dialect can lead to racial bias in automatic hate speech detection models, potentially amplifying harm against minority populations. We first uncover unexpected correlations between surface…
“By working arm-in-arm with multiple stakeholders, we can address the important topics rising at the intersection of AI, people, and society.”
— Eric Horvitz
Recent Press
The hidden costs of AI
Axios
October 29, 2019
The Efforts to Make Text-Based AI Less Racist and Terrible
Wired
June 17, 2021
Artificial Intelligence Can’t Think Without Polluting
The Wire
September 26, 2019
At Tech’s Leading Edge, Worry About a Concentration of Power
The New York Times
September 26, 2019
Artificial Intelligence Confronts a 'Reproducibility' Crisis
Wired
September 16, 2019
The Secret Price of Artificial Intelligence (originally in Hebrew: המחיר המושתק של בינה מלאכותית)
ynet
August 12, 2019
AI researchers need to stop hiding the climate toll of their work
MIT Tech Review
August 2, 2019
Greening AI | New AI2 Initiative Promotes Model Efficiency
Synced
July 31, 2019