Is Your AI Racist?
(Maybe. Let's find out in 5 minutes.)
Raise your hand if you've ever been recommended something online — a video, a song, an ad.
Here's the twist...
That algorithm recommending your videos? It's not neutral. It was built by humans, trained on human data, and it reflects human bias — whether anyone meant it to or not. Here are two real examples of algorithms causing real harm:
- Amazon built a hiring AI that learned to downrank resumes containing the word "women's" (like "women's chess club") because it was trained on 10 years of mostly male hires.
- Facial recognition used by US police departments misidentified Black faces up to 35 times more often than white faces, leading to wrongful arrests.
So the question isn't "is the computer evil?" It's "where did the bias come from?"
Computing Bias, Defined
Computing bias = when a system produces unfair or skewed results because of flawed data or flawed design.
So where does the bias come from? Every single time, the culprits are the same: flawed data and flawed design.
Honors Class AI Scenario
A school uses an AI to decide who gets into the honors class. The AI was trained on 10 years of past admissions data — but historically, most honors students were athletes and STEM students.
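To make the mechanism concrete, here's a minimal sketch of the scenario. Everything in it is invented for illustration (the data, the activity labels, the tiny "model") — it is not any real school's system. The "model" just memorizes historical admission rates per activity, and because the past outcomes are skewed, it rejects an art student with the highest GPA in the pool:

```python
from collections import defaultdict

# Invented toy data standing in for "10 years of past honors admissions".
# Each record: (main activity, GPA, admitted?). The GPAs are comparable;
# it's the *outcomes* that are skewed toward athletes and STEM students.
history = (
    [("stem", 3.8, True)] * 60
    + [("athlete", 3.7, True)] * 60
    + [("art", 3.9, False)] * 40
    + [("music", 3.8, False)] * 40
)

def train(records):
    """'Train' the simplest possible model: admission rate per activity."""
    counts = defaultdict(lambda: [0, 0])  # activity -> [admits, total]
    for activity, _gpa, admitted in records:
        counts[activity][0] += int(admitted)
        counts[activity][1] += 1
    return {act: admits / total for act, (admits, total) in counts.items()}

def predict(rates, activity):
    """Admit if students like this one were usually admitted before.
    GPA never enters the decision -- the model learned the historical
    pattern, not academic merit."""
    return rates.get(activity, 0.0) > 0.5

rates = train(history)
print(predict(rates, "stem"))  # True  -- rides the historical pattern
print(predict(rates, "art"))   # False -- rejected despite the highest GPA
```

Real systems are far more complex than this counting trick, but the failure mode is the same one as in the Amazon example above: skewed history in, skewed decisions out.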
What's happening here?
Split the room. Left side vs. right side. Vote: