Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target
- by Wired
- Oct 30, 2024
- 0 Comments
- 0 Likes Flag 0 Of 5
Elon Musk
just dragged ChatGPT and other artificial intelligence programs into the Trump crosshairs by repeating his warning that current AI models are too âwokeâ and âpolitically correct.â
âA lot of the AIs that are being trained in the San Francisco Bay Area, they take on the philosophy of people around them,â Musk said at the Future Investment Initiative, a Saudi Arabia governmentâbacked event held in Riyadh this week. âSo you have a woke, nihilisticâin my opinionâphilosophy that is being built into these AIs.â
Although Musk is himself a polarizing figure, he is right about AI systems harboring political biases. The issue, however, is far from one-sided, and Muskâs framing may help further his own interests due to his ties to Trump. Musk runs xAI, a competitor to OpenAI, Google, and Meta that could benefit if those companies become government targets.
âMusk clearly has a close, close relationship with the Trump campaign, and any comment that heâs making will hold a big influence,â says Matt Mittelsteadt, a research fellow at George Mason University. âAt a maximum he could have some sort of seat in a potential Trump administration, and his views could actually be enacted into some sort of policy.â This is an edition of WIRED's AI Lab newsletter by resident AI expert Will Knight. Get it in your inbox every week.
AI models capture political biases because they are trained on swaths of internet data that inevitably includes all sorts of perspectives. Most users may not be aware of any bias in the tools they use because models incorporate guardrails that restrict them from generating certain harmful or biased content. These biases can leak out subtly though, and the additional training that models receive to restrict their output can introduce further partisanship. âDevelopers could ensure that models are exposed to multiple perspectives on divisive topics, allowing them to respond with a balanced viewpoint,â Bang says.
The issue may become worse as AI systems become more pervasive, says Ashique KhudaBukhsh, an computer scientist at the Rochester Institute of Technology who developed a tool called the Toxicity Rabbit Hole Framework, which teases out the different societal biases of large language models. âWe fear that a vicious cycle is about to start as new generations of LLMs will increasingly be trained on data contaminated by AI-generated content,â he says.
âIâm convinced that that bias within LLMs is already an issue and will most likely be an even bigger one in the future,â says Luca Rettenberger, a postdoctoral researcher at the Karlsruhe Institute of Technology who conducted an analysis of LLMs for biases related to German politics.
Rettenberger suggests that political groups may also seek to influence LLMs in order to promote their own views above those of others. âIf someone is very ambitious and has malicious intentions it could be possible to manipulate LLMs into certain directions,â he says. âI see the manipulation of training data as a real danger.â
There have already been some efforts to shift the balance of bias in AI models. Last March, one programmer developed a more right-leaning chatbot in an effort to highlight the subtle biases he saw in tools like ChatGPT. Musk has himself promised to make Grok, the AI chatbot built by xAI, âmaximally truth-seekingâ and less biased than other AI tools, although in practice it also hedges when it comes to tricky political questions. (A staunch Trump supporter and immigration hawk, Muskâs own view of âless biasedâ may also translate into more right-leaning results.)
Next weekâs election in the United States is hardly likely to heal the discord between Democrats and Republicans, but if Trump wins, talk of anti-woke AI could get a lot louder.
Musk offered an apocalyptic take on the issue at this weekâs event, referring to an incident when Googleâs Gemini said that nuclear war would be preferable to misgendering Caitlyn Jenner. âIf you have an AI thatâs programmed for things like that, it could conclude that the best way to ensure nobody is misgendered is to annihilate all humans, thus making the probability of a future misgendering zero,â he said.
You Might Also Like â¦
Please first to comment
Related Post
Stay Connected
Tweets by elonmuskTo get the latest tweets please make sure you are logged in on X on this browser.
Sponsored
Popular Post
Middle-Aged Dentist Bought a Tesla Cybertruck, Now He Gets All the Attention He Wanted
32 ViewsNov 23 ,2024
tesla Model 3 Owner Nearly Stung With $1,700 Bill For Windshield Crack After Delivery
32 ViewsDec 28 ,2024