Guest Lecture on
Old Wine in a New Bottle:
The Enduring Biases of Algorithmic Systems
21 February 2024
Dr. Swati Kumari
Institute for Science in Society,
Radboud University,
The Netherlands
LECTURE OUTLINE
01 Developing a general lens to look at algorithms
02 Looking at algorithms through the general lens: biases
03 Types, sources, reasons, and implications of biases in algorithms
04 Strategies & policies to deal with algorithmic biases
Issues & Challenges
A LENS TO LOOK AT ALGORITHMS-
What is my lens for looking at something?
- Statistical view of biases in algorithms
- Social view of biases in algorithms
A LENS FOR TODAY'S LECTURE-
1- Technology is not autonomous.
2- Technology is not value-neutral.
3- There are embedded biases in technology.
4- Technology is a "system".
TECHNOLOGY IS NOT INDEPENDENT OF CONTEXT
[Diagram: a technology (its design, its users, and its applications) sits within a wider context of funding agencies, markets, and social, political, economic, and ecological factors.]
Examples: the nuclear energy programs of France & Germany; the debate over lab-grown meat in the Netherlands.
TECHNOLOGY IS NOT VALUE-NEUTRAL
What does this mean?
Many people assume that technology is simply
a tool—neutral and objective—until humans
use it for good or bad purposes. However, this
is a myth. Technology is designed, developed,
and deployed within specific social, political,
and economic contexts, which means it always
reflects the values, biases, and assumptions of
its creators and users.
Example: Crash Test Dummies & Car Safety – Who Gets Protected?
The Problem:
For decades, crash test dummies were designed based on the average male body—around
5'9" and 160 lbs. Cars were tested for safety based on how this male dummy fared in
crashes.
The Consequence:
•Women, who typically have different body structures (shorter stature, lower muscle mass,
different spinal alignment), were 47% more likely to be seriously injured in a car crash.
•Pregnant women were at an even higher risk, as standard dummies did not account for
pregnancy at all.
•The assumption that a "standard human" was male led to decades of biased safety testing,
endangering millions of people.
Why This Proves Technology Is Not Neutral:
•The "standard human" was assumed to be male: the default dummy represented a male body.
•Safety was designed and tested around one particular body type, leaving other bodies less protected.
TECHNOLOGY AS A SYSTEM, NOT JUST A NEUTRAL TOOL
Thinking of technology as a system means recognizing that:
- It is created within specific historical, cultural, and economic contexts.
- It interacts with laws, institutions, and social norms.
- It has built-in biases and constraints that shape how people experience it.
ANALYSING ALGORITHMIC SYSTEMS-
1- Algorithms are not autonomous.
2- Algorithms are not value-neutral.
3- There are embedded biases in algorithms.
4- Algorithms are part of a larger system.
EXAMPLE-1: AI AND IMAGE RECOGNITION: WHO GETS RECOGNIZED?
The Bias: Facial recognition systems often perform worse on women and people
with darker skin tones.
The Design Problem: Many AI models are trained on datasets dominated by
white male faces because they were collected in Western contexts where
developers are predominantly white and male.
Real-World Consequence:
•Studies (e.g., by MIT’s Joy Buolamwini) found that facial recognition accuracy
was over 99% for white men but dropped to 65% for Black women.
•Black individuals have been wrongly arrested due to false matches in AI-driven
policing.
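To make the measurement concrete, here is a minimal Python sketch of the disaggregated evaluation behind findings like Buolamwini's: accuracy is reported per demographic subgroup rather than only overall, which is what exposes the gap. The numbers below are invented for illustration, not real audit results.

# Minimal sketch of disaggregated evaluation with invented outcomes (not real audit data):
# report accuracy per subgroup, not only overall, so that performance gaps become visible.

# (subgroup, number of correct predictions, number of incorrect predictions)
toy_results = [
    ("lighter-skinned men", 99, 1),
    ("darker-skinned women", 65, 35),
]

total_correct = sum(correct for _, correct, _ in toy_results)
total = sum(correct + wrong for _, correct, wrong in toy_results)
print(f"Overall accuracy: {total_correct / total:.0%}")  # 82% -- looks acceptable in aggregate

for group, correct, wrong in toy_results:
    print(f"Accuracy for {group}: {correct / (correct + wrong):.0%}")  # 99% vs 65%

The single overall figure looks acceptable while one subgroup does far worse; aggregate benchmarks hide exactly this kind of gap.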
EXAMPLE-2: GOOGLE TRANSLATE REINFORCES GENDER STEREOTYPES
The Bias: Google Translate, when translating gender-neutral languages (like
Turkish or Finnish) into English, often assigns stereotypical genders to
professions.
The Design Problem: The algorithm learns from vast amounts of existing
human text, which reflects societal biases.
Real-World Consequence:
•When translating "O bir doktor" (Turkish: "They are a doctor"), Google
Translate often returns "He is a doctor"—assuming doctors are men.
•Conversely, "O bir hemşire" ("They are a nurse") is translated as "She is a
nurse"—reinforcing gender roles.
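As a rough illustration of the mechanism (not of Google Translate's actual internals), the sketch below uses invented co-occurrence counts: a system that simply picks the pronoun most often seen with a profession in its training text will reproduce the stereotypes of that text whenever the source sentence gives it no gender signal. The counts and the most_likely_pronoun helper are hypothetical.

# Invented co-occurrence counts standing in for a biased training corpus.
pronoun_counts = {
    "doctor": {"he": 900, "she": 100},   # "doctor" mostly appears near "he"
    "nurse":  {"he": 80,  "she": 920},   # "nurse" mostly appears near "she"
}

def most_likely_pronoun(profession):
    """Pick the pronoun most frequently seen with this profession in the training text."""
    counts = pronoun_counts[profession]
    return max(counts, key=counts.get)

# A gender-neutral source sentence ("O bir doktor") carries no gender signal,
# so a purely frequency-based choice falls back on the corpus bias:
print(f"{most_likely_pronoun('doctor').capitalize()} is a doctor")  # -> "He is a doctor"
print(f"{most_likely_pronoun('nurse').capitalize()} is a nurse")    # -> "She is a nurse"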
EXAMPLE 3: VOICE ASSISTANTS: WHO DO THEY SERVE BEST?
The Bias: Many voice recognition systems (e.g., Siri, Alexa, Google Assistant)
struggle to understand accents, particularly from non-Western, non-white
speakers.
The Design Problem: Training data is often based on standard American and
British English speakers.
Real-World Consequence:
•Voice assistants perform significantly worse for African American Vernacular
English (AAVE) speakers, leading to frustration and exclusion.
•Studies found that voice recognition systems misunderstood Black speakers
nearly twice as often as white speakers.
EXAMPLE 4: HEALTH TECHNOLOGY AND RACIAL BIAS IN MEDICAL DEVICES
The Bias: Pulse oximeters (which measure blood oxygen levels) have been found
to be less accurate for people with darker skin tones.
The Design Problem: The devices rely on light absorption in the skin, but they
were primarily tested on lighter-skinned individuals.
Real-World Consequence:
•During the COVID-19 pandemic, Black patients were more likely to have
their low oxygen levels missed, leading to delayed or inadequate treatment.
•Studies show that pulse oximeters overestimate oxygen levels in Black and
brown patients, potentially leading doctors to underestimate the severity of
their conditions.
EXAMPLE 5: AUTOMATIC SOAP DISPENSERS THAT DON'T WORK FOR DARK SKIN
The Bias: Some automatic soap dispensers fail to detect dark-skinned hands,
while working perfectly for lighter skin.
The Design Problem: The infrared sensors used in these dispensers were tested
primarily on lighter skin tones, and designers did not account for variation.
Real-World Consequence:
•Viral videos have shown people with darker skin struggling to get soap or
water, while those with lighter skin have no issues.
•While it may seem minor, this reflects a broader issue—technology is often
designed for the "default" (lighter skin) user, leaving others excluded.
FINAL REFLECTION: THE MYTH OF NEUTRAL TECHNOLOGY
These examples prove that biases are not accidental glitches—they are
embedded in how technology is designed, tested, and deployed. This happens
because of:
Who designs the technology? (Is the team diverse?)
What data is used? (Is it representative of different groups?)
How is the system tested? (Does it account for real-world diversity?)
TYPES OF BIAS IN ALGORITHMS
❑ Data Bias (Sampling Bias)
Occurs when the training data does not represent the full diversity of the
population.
❑ Label Bias
Happens when the labels used in training data reflect historical prejudices
or human errors.
❑ Proxy Bias
Occurs when an algorithm uses a seemingly neutral variable that acts as a
stand-in (proxy) for a sensitive characteristic (e.g., race, gender, socioeconomic
status). See the sketch after this list.
❑ Confirmation Bias
Happens when an algorithm reinforces existing patterns in data instead of
challenging them.
❑ Interaction Bias
Occurs when an algorithm learns from real-time user behavior, which can
introduce biases over time.
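As referenced under Proxy Bias above, here is a minimal sketch with invented applicant data: the decision rule never looks at the sensitive attribute, yet a "neutral" variable (postal code) is so strongly correlated with group membership that outcomes still split along group lines. All names and numbers are hypothetical.

# Proxy bias with invented data: the rule never sees the sensitive attribute,
# yet postal code acts as a stand-in for group membership, so outcomes still diverge.

applicants = [
    {"postal_code": "1011", "group": "A", "income": 52000},
    {"postal_code": "1011", "group": "A", "income": 48000},
    {"postal_code": "9741", "group": "B", "income": 51000},
    {"postal_code": "9741", "group": "B", "income": 49000},
]

def approve(applicant):
    # "Neutral-looking" rule learned from historical approvals: favour one postal code.
    return applicant["postal_code"] == "1011"

for group in ("A", "B"):
    rows = [a for a in applicants if a["group"] == group]
    rate = sum(approve(a) for a in rows) / len(rows)
    print(f"Approval rate for group {group}: {rate:.0%}")
# Similar incomes, very different approval rates: the proxy variable does the discriminating.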
SOURCES OF ALGORITHMIC BIAS
1. Biased Training Data – If an AI system learns from biased data, it will
reproduce and amplify those biases.
2. Subjective Design Choices – Developers make assumptions when choosing
which data to include, how to weigh different factors, and how to define "success."
3. Lack of Diversity in AI Development Teams – Homogeneous teams may not
recognize biases affecting underrepresented groups.
4. Feedback Loops – Biased decisions create more biased data, reinforcing
discrimination over time (see the simulation sketch after this list).
5. Business or Political Interests – Algorithms often optimize for profit or political
gain, rather than fairness.
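To illustrate item 4, here is a small simulation sketch (all rates and numbers invented, loosely modelled on critiques of predictive policing): patrols are allocated in proportion to past records, and new incidents are only recorded where patrols go, so an initially small gap in the records locks in and keeps widening even though the true underlying rates are identical.

import random

random.seed(0)

# Invented numbers: both districts have the same true incident rate,
# but slightly biased historical records skew where patrols are sent.
true_rate = {"district_A": 0.10, "district_B": 0.10}
recorded = {"district_A": 12, "district_B": 8}

for year in range(1, 6):
    total = sum(recorded.values())
    # The "algorithm": allocate 100 patrols in proportion to past records.
    patrols = {d: round(100 * recorded[d] / total) for d in recorded}
    # New incidents are only recorded where patrols actually go,
    # so the records track the allocation rather than reality.
    for d in recorded:
        recorded[d] += sum(random.random() < true_rate[d] for _ in range(patrols[d]))
    print(f"Year {year}: patrols={patrols}, cumulative records={recorded}")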
REASONS WHY BIAS PERSISTS IN ALGORITHMS
"Historical Data Reflects Social Inequalities"
•AI learns from real-world data, which is often deeply biased due to historical discrimination
(e.g., hiring, policing, lending).
"AI Sees Correlation, Not Context"
•Algorithms find statistical patterns but lack human judgment to understand fairness or ethical
concerns.
"Optimization for Efficiency, Not Fairness"
•Many algorithms prioritize accuracy or profit over fairness (e.g., loan approvals optimizing for
bank revenue rather than equal access).
"Lack of Transparency"
•Many AI systems are "black boxes", making it difficult to detect and correct biases.t
"Bias is Hard to Detect Until It Causes Harm"
•Bias often goes unnoticed until it harms marginalized groups—at which point, fixing it
becomes more difficult.
IMPLICATIONS OF ALGORITHMIC BIAS
1. Discrimination & Social Inequality
2. Reinforcement of Historical Injustices
3. Political Manipulation & Misinformation
4. Loss of Public Trust in AI
STRATEGIES & POLICIES TO ADDRESS BIASES IN ALGORITHMS
1. Corporate approach-
Internal ethics teams
ML bias detection tools (see the sketch below)
Explainable AI, Trustworthy AI, Ethical AI
2. Government approach-
Regulations
Guarding citizens' rights while promoting innovation
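As a concrete example of the kind of check a bias detection tool performs, the sketch below computes one simple group-fairness metric, the demographic parity difference: the gap in positive-decision rates between groups. The decision log is invented, and the code is a hand-rolled illustration rather than the API of any particular toolkit.

# Invented decision log; a real tool would read this from a model's output.
decisions = [
    {"group": "men", "approved": True},
    {"group": "men", "approved": True},
    {"group": "men", "approved": False},
    {"group": "women", "approved": True},
    {"group": "women", "approved": False},
    {"group": "women", "approved": False},
]

def approval_rate(group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

rates = {g: approval_rate(g) for g in ("men", "women")}
gap = max(rates.values()) - min(rates.values())
print("Approval rates:", rates)
print(f"Demographic parity difference: {gap:.2f}")  # flag for review if above a chosen threshold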
EU AI REGULATION (THE AI ACT)
What does it say about biases in algorithms?
A knowledge & skills gap exists between policy makers
and technology developers.
ISSUES & CHALLENGES
1. Question of awareness
2. Decision making
3. Competition
4. Lack of Methods
DISCUSSION-
How can we solve the wicked problem of biases in algorithms?
Thank you!