Current:Home > reviewsIs AI racially biased? Study finds chatbots treat Black-sounding names differently -Blueprint Wealth Network
Is AI racially biased? Study finds chatbots treat Black-sounding names differently
View
Date:2025-04-14 07:41:56
Planning to turn to a chatbot for advice? A new study warns that its answer may vary based on how Black the user's name sounds.
A recent paper from researchers at Stanford Law School found “significant disparities across names associated with race and gender” from chatbots like OpenAI’s ChatGPT 4 and Google AI’s PaLM-2. For example, a chatbot may say a job candidate with a name like Tamika should be offered a $79,375 salary as a lawyer, but switching the name to something like Todd boosts the suggested salary offer to $82,485.
The authors highlight the risks behind these biases, especially as businesses incorporate artificial intelligence into their daily operations – both internally and through customer-facing chatbots.
“Companies put a lot of effort into coming up with guardrails for the models,” Stanford Law School professor Julian Nyarko, one of the study’s co-authors, told USA TODAY. “But it's pretty easy to find situations in which the guardrails don't work, and the models can act in a biased way.”
Biases found across various scenarios
The paper, published last month, asked AI chatbots for advice on five different scenarios to discern potential stereotypes:
◾ Purchases: Questions on how much to spend when purchasing a house, bike, or car.
◾ Chess: Questions on a player’s odds of winning a match.
◾ Public office: Asking for predictions on a candidate’s chance of winning an election.
◾ Sports: Asking for input on how high to rank a player in a list of 100 athletes.
◾ Hiring: Asking for advice on how big of a salary to offer a job candidate.
The study found most scenarios displayed biases that were disadvantageous to Black people and women. The only consistent exception was when asking for input on an athlete’s position as a basketball player; in this scenario, the biases were in favor of Black athletes.
The findings suggest that the AI models encode common stereotypes based on the data they are trained on, which influences their response.
A 'systemic issue' among AI chatbots
The paper points out that, unlike previous studies, this research was done via an audit analysis, which is designed to measure the level of bias in different domains of society like housing and employment.
Nyarko said the research was inspired by similar analyses, like the famous 2003 study where researchers looked into hiring biases by submitting the same resume under both Black- and white-sounding names and found “significant discrimination” against Black-sounding names.
In the AI study, researchers would repeatedly pose questions to chatbots like OpenAI’s GPT-4, GPT-3.5 and Google AI’s PaLM-2, changing only the names referenced in the query. Researchers used white male-sounding names like Dustin and Scott; white female-sounding names like Claire and Abigail; Black male-sounding names like DaQuan and Jamal; and Black female-sounding names like Janae and Keyana.
The AI chatbots’ advice, according to the findings, “systematically disadvantages names that are commonly associated with racial minorities and women,” with names associated with Black women receiving the “least advantageous” outcomes.
Researchers found that biases were consistent across 42 prompt templates and several AI models, “indicating a systemic issue."
An emailed statement from OpenAI said bias is an “important, industry-wide problem” that its safety team is working to combat.
"(We are) continuously iterating on models to improve performance, reduce bias, and mitigate harmful outputs,” the statement reads.
Google did not respond to a request for comment.
First step: 'Just knowing that these biases exist'
Nyarko said the first step AI companies should take to address these risks is “just knowing that these biases exist” and to keep testing for them.
However, researchers also acknowledged the argument that certain advice should differ across socio-economic groups. For example, Nyarko said it might make sense for a chatbot to tailor financial advice based on the user's name since there is a correlation between affluence and race and gender in the U.S.
Afraid of AI?Here's how to get started and use it to make your life easier
“It might not necessarily be a bad thing if a model gives more conservative investment advice to someone with a Black-sounding name, assuming that person is less wealthy,” Nyarko said. "So it doesn't have to be a terrible outcome, but it's something that we should be able to know and something that we should be able to mitigate in situations where it's not desirable."
veryGood! (775)
Related
- Toyota to invest $922 million to build a new paint facility at its Kentucky complex
- Parts of a Martin Luther King Jr. memorial in Denver have been stolen
- Pandas to return to San Diego Zoo, China to send animals in move of panda diplomacy
- Man driving stolen U-Haul and fleeing cops dies after crashing into river
- Former longtime South Carolina congressman John Spratt dies at 82
- Trial to determine if Texas school’s punishment of a Black student over his hair violates new law
- Amazon Prime Video lawsuit seeks class action status over streamer's 'ad-free' rate change
- In wake of mass shooting, here is how Maine’s governor wants to tackle gun control and mental health
- Former longtime South Carolina congressman John Spratt dies at 82
- Wait for Taylor Swift merch in Australia longer than the actual Eras Tour concert
Ranking
- Nearly half of US teens are online ‘constantly,’ Pew report finds
- AT&T’s network is down, here’s what to do when your phone service has an outage
- Yale wants you to submit your test scores. University of Michigan takes opposite tack.
- Bears QB Justin Fields explains why he unfollowed team on Instagram
- Louvre will undergo expansion and restoration project, Macron says
- Mayorkas meets with Guatemalan leader Arévalo following House impeachment over immigration
- Machine Gun Kelly Shares Heartbreaking Message on Megan Fox’s Miscarriage
- United flight diverted to Chicago due to reported bomb threat
Recommendation
Krispy Kreme offers a free dozen Grinch green doughnuts: When to get the deal
The Excerpt podcast: The ethics of fast fashion should give all of us pause
Shift to EVs could prevent millions of kid illnesses by 2050, report finds
Mudslides shut down portions of California's Pacific Coast Highway after heavy rainfall
Most popular books of the week: See what topped USA TODAY's bestselling books list
A Los Angeles woman was arrested in Russia on charges of treason. Here’s what we know
‘Little dark secret': DEA agent on trial accused of taking $250K in bribes from Mafia
Top NBA free agents for 2024: Some of biggest stars could be packing bags this offseason