
GM is bringing Google Gemini-powered AI assistant to cars in 2026
- by TechCrunch
- Oct 22, 2025

8:51 AM PDT · October 22, 2025
General Motors will add a conversational AI assistant powered by Google Gemini to its cars, trucks, and SUVs starting next year, the U.S. automaker said Wednesday during an event in New York City.
The Google Gemini rollout is one of several tech-centric announcements made at the automaker’s GM Forward event, and it will be one of the first to get into consumers’ hands. Others, including an overhaul of its electrical architecture and computing platform as well as an automated driving feature that lets drivers take their hands off the wheel and eyes off the road, aren’t coming to GM brands until 2028.
GM is the latest automaker to lean into generative AI-based assistants that promise to respond to driver requests in a more natural-sounding way. Stellantis is collaborating with French AI firm Mistral, Mercedes is integrating ChatGPT, and Tesla has brought xAI’s Grok to its vehicles.
GM’s integration with Gemini is the next logical step for the automaker. Vehicles produced by GM brands Buick, Chevrolet, Cadillac, and GMC already have “Google built-in,” an operating system that gives drivers access to Google Assistant, Google Maps, and other apps directly from the car’s infotainment screen. In 2023, GM began using Google Cloud’s Dialogflow chatbot to handle non-emergency OnStar features, including common driver queries like routing and navigation assistance.
GM’s Gemini-powered AI assistant will have similar levels of capability — it’ll just perform better, according to Dave Richardson, senior vice president of software and services.
“One of the challenges with current voice assistants is that, if you’ve used them, you’ve probably also been frustrated by them because they’re trained on certain code words or they don’t understand accents very well or if you don’t say it quite right, you don’t get the right response,” Richardson told TechCrunch. “What’s great about large language models is they don’t seem to be affected by that. They have context about previous conversations that they can bring up. They’re flexible in how you speak to them…so overall you’re getting a better, more natural experience.”
That could make tasks like drafting and sending messages, planning routes with additional stops (like a charging station or a favorite coffee shop), or even prepping for a meeting on the go less painful. The assistant will also have access to the web, allowing it to answer questions like “What’s the history of this bridge I’m driving over?”