![Tesla’s Dojo, a timeline](https://elonmuskvision.com/public/uploads/all/323560f8e2c9fbf3c61adf1b86445bb8.jpg)
He also posts:
“Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo. For building the AI training superclusters, NVidia hardware is about 2/3 of the cost. My current best guess for Nvidia purchases by Tesla are $3B to $4B this year.”
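As a rough sanity check, the figures in that quote hang together: about half of ~$10B going to the training superclusters, with Nvidia hardware at about two-thirds of that cost, lands near the middle of his $3B to $4B guess. A minimal back-of-the-envelope sketch (variable names are illustrative, not Tesla's; the only inputs are the numbers Musk stated):

```python
# Back-of-the-envelope check of the spending figures in Musk's post.
# Inputs are taken directly from the quote above; everything else is an assumption.

total_ai_spend = 10e9          # "roughly $10B in AI-related expenditures"
internal_share = 0.5           # "about half is internal"
nvidia_share_of_cluster = 2/3  # "NVidia hardware is about 2/3 of the cost"

external_cluster_spend = total_ai_spend * (1 - internal_share)            # ~$5B
implied_nvidia_spend = external_cluster_spend * nvidia_share_of_cluster   # ~$3.3B

print(f"Implied Nvidia spend: ~${implied_nvidia_spend / 1e9:.1f}B")
# ~$3.3B, which sits inside Musk's stated $3B to $4B range.
```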
July 1 – Musk reveals on X that current Tesla vehicles may not have the right hardware for the company’s next-gen AI model. He says that the roughly 5x increase in parameter count with the next-gen AI “is very difficult to achieve without upgrading the vehicle inference computer.”
Nvidia supply challenges
July 23 – During Tesla’s second-quarter earnings call, Musk says demand for Nvidia hardware is “so high that it’s often difficult to get the GPUs.”
“I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we’ve got the training capability that we need,” Musk says. “And we do see a path to being competitive with Nvidia with Dojo.”
A graph in Tesla’s investor deck predicts that Tesla AI training capacity will ramp to roughly 90,000 H100-equivalent GPUs by the end of 2024, up from around 40,000 in June. Later that day on X, Musk posts that Dojo 1 will have “roughly 8k H100-equivalent of training online by end of year.” He also posts photos of the supercomputer, which appears to use the same fridge-like stainless steel exterior as Tesla’s Cybertrucks.
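For context, those two figures suggest Dojo would supply only a small slice of Tesla's projected training capacity by year's end. A minimal calculation, assuming the deck's "H100 equivalent" and Musk's "8k H100-equivalent" figures are directly comparable (the variable names below are illustrative):

```python
# Rough share of Tesla's projected end-of-2024 training capacity supplied by Dojo,
# assuming the H100-equivalent figures quoted above are directly comparable.

total_h100_equiv_eoy = 90_000   # investor-deck projection for end of 2024
dojo_h100_equiv = 8_000         # Musk's estimate for Dojo 1 by end of year

dojo_share = dojo_h100_equiv / total_h100_equiv_eoy
print(f"Dojo share of projected capacity: ~{dojo_share:.0%}")  # ~9%
```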
From Dojo to Cortex
July 30 – AI5 is ~18 months away from high-volume production, Musk says in a reply to a post from someone claiming to start a club of “Tesla HW4/AI4 owners angry about getting left behind when AI5 comes out.”
August 3 – Musk posts on X that he did a walkthrough of “the Tesla supercompute cluster at Giga Texas (aka Cortex).” He notes that it will consist of roughly 100,000 Nvidia H100/H200 GPUs with “massive storage for video training of FSD & Optimus.”
August 26 – Musk posts on X a video of Cortex, which he refers to as “the giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI.”
2025