The following collection is meant to serve as a reference for engineers, data scientists, and others making decisions about building technological solutions for real-world problems. The hope is that it helps us avoid repeating past mistakes, whether by informing the design of new systems or the decision not to build a technological solution at all.
This is a living document: please suggest additions by opening an “Issue” or sending a pull request. If you find problems with links or with the articles themselves, please also open an “Issue”.
## Fairness
### Lending & Credit approval
- Gender Bias Complaints against Apple Card Signal a Dark Side to Fintech
- Exploring Racial Discrimination in Mortgage Lending: A Call for Greater Transparency
- DFS Issues Guidance to Life Insurers on Use of “External Data” in Underwriting Decisions
### Hiring
- Amazon scraps secret AI recruiting tool that showed bias against women
- Automated Employment Discrimination
- Help wanted: an examination of hiring algorithms, equity, and bias
- All the Ways Hiring Algorithms Can Introduce Bias
- Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices
- Help Wanted - An Examination of Hiring Algorithms, Equity, and Bias
- Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude.
- Job Screening Service Halts Facial Analysis of Applicants
### Employee evaluation
- Houston Schools Must Face Teacher Evaluation Lawsuit
- How Amazon automatically tracks and fires warehouse workers for ‘productivity’
- Court Rules Deliveroo Used ‘Discriminatory’ Algorithm
### Pre-trial risk assessment and criminal sentencing
- Machine Bias
- How We Analyzed the COMPAS Recidivism Algorithm
- GitHub repository for COMPAS analysis
- Can you make AI fairer than a judge? Play our courtroom algorithm game
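ProPublica’s central finding in the articles above can be sketched with a toy calculation: a risk score can flag far more non-reoffenders as “high risk” in one group than in another. The records below are invented for illustration and do not reproduce the actual COMPAS numbers.

```python
# Toy illustration of the ProPublica-style COMPAS analysis: comparing
# false positive rates of a risk score across two groups.
# All records below are invented for illustration.

# (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", True, False),
    ("A", False, False), ("A", False, True), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", False, False),
    ("B", True, True), ("B", False, True), ("B", False, False),
]

def false_positive_rate(group):
    """FPR = share flagged high-risk among those who did NOT reoffend."""
    negatives = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in negatives if r[1]]
    return len(flagged) / len(negatives)

for g in ("A", "B"):
    print(f"group {g}: FPR = {false_positive_rate(g):.2f}")
# group A: FPR = 0.75
# group B: FPR = 0.00
```

The choice of metric matters: a score can be calibrated overall and still produce very unequal error rates per group, which is exactly the disagreement at the heart of the COMPAS debate.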
### Predictive Policing & Other Law Enforcement Use Cases
- Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice
- Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots
- The Perpetual Line-Up - Unregulated police face recognition in America
- Stuck in a Pattern: Early evidence on “predictive policing” and civil rights
- Crime-prediction tool PredPol amplifies racially biased policing, study shows
- Criminal machine learning
- The Liar’s Walk - Detecting Deception with Gait and Gesture
- Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use
- Return of physiognomy? Facial recognition study says it can identify criminals from looks alone
- Live facial recognition is tracking kids suspected of being criminals
### Admissions
### School Choice
### Speech Detection
- Oh dear… AI models used to flag hate speech online are, er, racist against black people
- The Risk of Racial Bias in Hate Speech Detection
- Toxicity and Tone Are Not The Same Thing: analyzing the new Google API on toxicity, PerspectiveAPI.
- Voice Is the Next Big Platform, Unless You Have an Accent
- Google’s speech recognition has a gender bias
- Fair Speech report by the Stanford Computational Policy Lab, also covered in “Speech recognition algorithms may also have racial bias”
- Automated moderation tool from Google rates People of Color and gays as “toxic”
- Someone made an AI that predicted gender from email addresses, usernames. It went about as well as expected
### Image Labelling & Face Recognition
- Google Photos identified two black people as ‘gorillas’
- When It Comes to Gorillas, Google Photos Remains Blind
- The viral selfie app ImageNet Roulette seemed fun – until it called me a racist slur
- Google Is Investigating Why it Trained Facial Recognition on ‘Dark Skinned’ Homeless People
- Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
- Machines Taught by Photos Learn a Sexist View of Women
- Tenants sounded the alarm on facial recognition in their buildings. Lawmakers are listening.
- Google apologizes after its Vision AI produced racist results
- When AI Sees a Man, It Thinks ‘Official.’ A Woman? ‘Smile’
### Public Benefits & Health
- A health care algorithm affecting millions is biased against black patients
- What happens when an algorithm cuts your health care
- China Knows How to Take Away Your Health Insurance
- Foretelling the Future: A Critical Perspective on the Use of Predictive Analytics in Child Welfare
- There’s no quick fix to find racial bias in health care algorithms
- Health algorithms discriminate against Black patients, also in Switzerland
### Ads
- Discrimination in Online Ad Delivery
- Probing the Dark Side of Google’s Ad-Targeting System
- Facebook Engages in Housing Discrimination With Its Ad Practices, U.S. Says
- Facebook Job Ads Raise Concerns About Age Discrimination
- Facebook Ads Can Still Discriminate Against Women and Older Workers, Despite a Civil Rights Settlement
- Women less likely to be shown ads for high-paid jobs on Google, study shows
- Algorithms That “Don’t See Color”: Comparing Biases in Lookalike and Special Ad Audiences
- Facebook is letting job advertisers target only men
- Facebook (Still) Letting Housing Advertisers Exclude Users by Race
### Search
- Algorithms of Oppression: How Search Engines reinforce racism
- Bias already exists in search engine results, and it’s only going to get worse
- Truth in pictures: What Google image searches tell us about inequality at work
### Translations
### Jury Selection
### Dating
- Coffee Meets Bagel: The Online Dating Site That Helps You Weed Out The Creeps
- The Biases we feed to Tinder algorithms
- Redesign dating apps to lessen racial bias, study recommends
### Word Embeddings
Biases in word embeddings may affect many of the categories above through the applications that use them.
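As a sketch of how such bias propagates, consider the classic analogy arithmetic on word vectors. The tiny hand-built vectors below are invented purely for illustration; real embeddings such as word2vec or GloVe are learned from corpora and can encode, for example, gendered occupation associations.

```python
import math

# Hand-built 3-d "embeddings", invented purely for illustration.
# In real learned embeddings, a gender direction can correlate with
# occupation words, which is how bias leaks into downstream systems.
vecs = {
    "man":        [1.0, 0.0, 0.3],
    "woman":      [0.0, 1.0, 0.3],
    "programmer": [0.9, 0.1, 0.8],
    "homemaker":  [0.1, 0.9, 0.8],
    "scientist":  [0.5, 0.5, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# "man is to programmer as woman is to ?": programmer - man + woman
query = [p - m + w for p, m, w in
         zip(vecs["programmer"], vecs["man"], vecs["woman"])]
best = max((k for k in vecs if k not in ("programmer", "man", "woman")),
           key=lambda k: cosine(query, vecs[k]))
print(best)  # in this toy setup: homemaker
```

A system that uses such vectors for ranking, matching, or classification inherits these associations silently.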
### Gerrymandering
### Recommender systems
### Picking areas for improved service
## Safety
### Self-driving cars
- Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers
- Franken-algorithms: The Deadly Consequences of Unpredictable Code
### Weaponized AI
- Google employee protest: Now Google backs off Pentagon drone AI project
- Google Wants to Do Business With the Military—Many of Its Employees Don’t
### Health
- Model interpretability in Medicine
- Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission shows the importance of model interpretability for such critical decisions.
- Rich Caruana: Friends Don’t Let Friends Release Black Box Models in Medicine
- IBM pitched its Watson supercomputer as a revolution in cancer care. It’s nowhere close
- International evaluation of an AI system for breast cancer screening - This thread examines the issues with the problem setting.
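The pneumonia example from the Caruana et al. paper above is worth spelling out: a model trained on observational data learned that asthma predicts *lower* mortality, because asthmatic pneumonia patients were routinely given aggressive care. The counts below are invented to illustrate the confounding, not taken from the paper.

```python
# Invented counts illustrating the confounder described in
# "Intelligible Models for HealthCare": asthmatics *appear* lower-risk
# in the data only because they received more aggressive treatment.
# (had_asthma, died) -> number of toy patients
counts = {
    (True, True): 2,   (True, False): 98,   # asthmatics: heavily treated
    (False, True): 10, (False, False): 90,  # non-asthmatics
}

def mortality(asthma):
    died = counts[(asthma, True)]
    total = died + counts[(asthma, False)]
    return died / total

print(f"asthma:    {mortality(True):.2%}")   # 2.00%
print(f"no asthma: {mortality(False):.2%}")  # 10.00%
# A black-box model would happily learn "asthma => low risk" and could
# send asthmatic patients home; an interpretable model exposes this
# spurious rule so clinicians can catch and reject it.
```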
## Privacy
### Machine Learning-based privacy attacks
### Lending
### Work
### Prison tech
### Location data
- Twelve Million Phones, One Dataset, Zero Privacy shines a light on data privacy (or the lack thereof). That same data may be used for ML as well.
- Tenants sounded the alarm on facial recognition in their buildings. Lawmakers are listening.
### Social media & dating
- OkCupid Study Reveals the Perils of Big-Data Science
- “We Are the Product”: Public Reactions to Online Data Sharing and Privacy Controversies in the Media
### Basic anonymization as an insufficient measure
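What “insufficient” means here can be sketched with a classic linkage attack, in the style of Latanya Sweeney’s work: records stripped of names can often be re-identified by joining quasi-identifiers such as ZIP code, birth date, and sex against a public dataset. Every record below is invented.

```python
# Toy linkage attack: an "anonymized" medical table still carries
# quasi-identifiers that join cleanly against a public voter roll.
# All records here are invented for illustration.

anonymized_medical = [
    {"zip": "02138", "dob": "1945-07-01", "sex": "F", "diagnosis": "flu"},
    {"zip": "02139", "dob": "1982-03-12", "sex": "M", "diagnosis": "asthma"},
]
public_voter_roll = [
    {"name": "Alice Example", "zip": "02138", "dob": "1945-07-01", "sex": "F"},
    {"name": "Bob Example",   "zip": "02139", "dob": "1982-03-12", "sex": "M"},
]

QUASI_IDS = ("zip", "dob", "sex")

def reidentify(medical, voters):
    """Join on quasi-identifiers; a unique match de-anonymizes a record."""
    revealed = {}
    for rec in medical:
        matches = [v for v in voters
                   if all(v[k] == rec[k] for k in QUASI_IDS)]
        if len(matches) == 1:
            revealed[matches[0]["name"]] = rec["diagnosis"]
    return revealed

print(reidentify(anonymized_medical, public_voter_roll))
# {'Alice Example': 'flu', 'Bob Example': 'asthma'}
```

Dropping names alone does not anonymize a dataset; the combination of a few innocuous columns is often unique per person.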
### Health
- Health Insurers Are Vacuuming Up Details About You — And It Could Raise Your Rates
- How Your Medical Data Fuels a Hidden Multi-Billion Dollar Industry
- 23andMe’s Pharma Deals Have Been the Plan All Along
- If You Want Life Insurance, Think Twice Before Getting A Genetic Test
- Medical Start-up Invited Millions Of Patients To Write Reviews They May Not Realize Are Public. Some Are Explicit.
- Help Desk: Can your medical records become marketing? We investigate a reader’s suspicious ‘patient portal.’
- Is your pregnancy app sharing your intimate data with your boss?
- Data Crisis: Who Owns Your Medical Records?
- This Bluetooth Tampon Is the Smartest Thing You Can Put In Your Vagina didn’t mention the privacy concerns of such a device; this Twitter comment raises them.
### Face Recognition
- Clearview AI: We Are ‘Working to Acquire All U.S. Mugshots’ From Past 15 Years
- Face for sale: Leaks and lawsuits blight Russia facial recognition
## Supply of goods
## Anti-Money Laundering
## General resources about Responsible AI
Many of the books and articles in this area cover a wide range of topics. Below is a list of a few of them, sorted alphabetically by title:
- A Hippocratic Oath for artificial intelligence practitioners by Oren Etzioni
- Algorithms, Correcting Biases by Cass Sunstein
- Algorithms of Oppression - How Search Engines Reinforce Racism by Safiya Umoja Noble
- Artificial Unintelligence - How Computers Misunderstand the World by Meredith Broussard
- Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor by Virginia Eubanks
- Big Data’s Disparate Impact by Solon Barocas and Andrew D. Selbst
- Datasheets for Datasets by Timnit Gebru et al.
- Design Justice by Sasha Costanza-Chock
- Fairness and Abstraction in Sociotechnical Systems by Andrew D. Selbst, danah boyd, Sorelle Friedler, Suresh Venkatasubramanian, Janet Vertesi
- Fairness and machine learning - Limitations and Opportunities by Solon Barocas, Moritz Hardt, Arvind Narayanan
- How I’m fighting bias in algorithms by Joy Buolamwini
- Interpretable Machine Learning by Christoph Molnar
- Race after Technology by Ruha Benjamin
- Tech Ethics Curriculum by Casey Fiesler
- The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning by Sam Corbett-Davies and Sharad Goel
- Weapons of Math Destruction by Cathy O’Neil