Poster image for Steven Spielberg’s AI. (Detail.)
The bias problem with Elon Musk’s Grok AI chatbot was easy to see last summer when it started spewing antisemitism and calling itself “MechaHitler.” The episode came about because Musk reportedly did not like some correct answers Grok had given about right-wing political violence, so he and his team reset Grok’s instructions, directing it to discount the traditional mainstream media and to “not shy away from making claims which are politically incorrect.” The result was an explicitly antisemitic AI.
It’s an extreme case, to be sure, but it should raise bigger questions about the limits of this still-emerging technology. One might assume that, as long as no one as ideologically driven as Elon Musk is manipulating the settings behind the scenes, an AI model’s responses are purely fact-based. That assumption would be wrong. Large language models like ChatGPT, Claude, Gemini, and DeepSeek are all biased. In fact, they can be said to be born biased.
This fact was made clear in recently published research by scholars from the University of Oxford and the University of Kentucky, who revealed biases buried in ChatGPT that are far subtler than Grok praising Adolf Hitler. As one commentator noted, some of these biases seem to stem from anti-Black racist ideas: Mississippi, the state with the largest Black share of its population, is rated the laziest state, while African nations are rated the least intelligent countries. The limited data available on AI systems harming particular groups suggests that race-based harms are the most common.
Why is this so? Consider the old computer science saying: garbage in, garbage out. If you feed bad data into a computer program, the output will be bad as well. AI chatbots like ChatGPT are trained on massive troves of documents created by human beings, some of whom are antisemitic, racist, and/or sexist. All of those human biases sit in the training data behind these AI products. One doesn’t need an Elon Musk for those biases to shape the output from AI.
These types of AI are good at stereotyping because, in essence, that is what they are designed to do. The scholars from Oxford and the University of Kentucky write about the “averaging bias” of AI systems. When an AI model produces a response by averaging over the documents in its training data, it is basically stereotyping. ChatGPT and similar models are not designed to find the truth; they are designed to produce the stereotypical output that a typical user would find agreeable.
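To make the idea of averaging bias concrete, here is a minimal toy sketch in Python. It is not how any real chatbot is built (the corpus and the completion function are invented for illustration), but it shows the underlying logic: a predictor that returns the most frequent continuation found in its training documents will reproduce the majority view, accurate or not.

```python
from collections import Counter

# Invented, illustrative training documents. Most repeat a common claim
# about a fictional town; a minority record a different view.
corpus = [
    "Grayville is dangerous",
    "Grayville is dangerous",
    "Grayville is dangerous",
    "Grayville is safe",
]

def averaged_completion(prompt: str, docs: list[str]) -> str:
    """A crude stand-in for a model trained to give the 'average' answer:
    return the most frequent continuation of `prompt` in `docs`."""
    continuations = [d[len(prompt):].strip() for d in docs if d.startswith(prompt)]
    return Counter(continuations).most_common(1)[0][0]

print(averaged_completion("Grayville is", corpus))  # prints "dangerous"
# The majority view wins regardless of whether it is true: averaging over
# biased documents reproduces the bias.
```

A real language model is vastly more sophisticated, but the pull toward the most common pattern in the training data is the same, which is why the output so often matches the stereotype.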
For example, a separate analysis of AI-generated images found that a prompt calling for an image of “a wealthy person in Africa” produced Black men in rural areas, often in front of thatched huts. Over 40 percent of the sub-Saharan African population lives in urban areas, and several African cities have populations in the millions. It is highly doubtful that African millionaires and billionaires live in thatched huts in rural communities, yet that was the AI output. The images might meet the stereotypical expectations of many users and be perceived as “good” by them, but they are far from reality.
Inaccurate stereotypes matter because they can lead to mistreatment and bad policies. But the immediate harms of AI systems go beyond generating stereotypes. For example, the Federal Trade Commission banned Rite Aid from using its AI facial recognition system to identify shoplifters after it generated “thousands of false-positive matches,” primarily in Black and Asian American communities, and “subjected consumers to embarrassment, harassment, and other harm.” In 2023, US Customs and Border Protection required asylum seekers to use an app that did not work well for people with dark skin. Because the app was mandatory, it prevented many Haitians and Africans from applying for asylum. Navy Federal Credit Union’s automated underwriting system has been accused of racial bias against Black mortgage applicants. As the use of AI systems proliferates, so do the opportunities for biased output.
Some computer scientists think the bias problems in AI systems can be fixed with a few instructions to counteract overt bias. But the evidence suggests that the problem of bias cannot be fixed completely. The only way to counteract AI bias is the same way we counteract human bias: strong and effective diversity, equity, and inclusion policies. New technologies that merely automate existing biases risk embedding racism even more deeply in our society.
This first appeared in the Detroit News.
Algernon Austin, a senior research fellow at the Center for Economic and Policy Research, has conducted research and writing on issues of race and racial inequality for over 20 years. His primary focus has been on the intersection of race and the economy.