Nate Jones Gets AI Data Centers Wrong
- by NextBigFuture
- Dec 13, 2025
Brian Wang
Nate Jones got the AI data centers in space story wrong. I like Nate Jones; he does a lot of useful work gathering AI news and staying on top of developments.
He is wrong in this case. Starcloud already has an Nvidia H100 in a satellite in space; it launched last month. Starcloud is working with Nvidia and has partnered with Crusoe, with a commercial version targeted for an October 2026 launch. SpaceX is making satellite modifications now, and I expect it will go through FCC filings. I expect modified V2 Mini and modified V3 satellites, framed as communication enhancements for caching and AI nodes, to get faster, expedited FCC approval. These will go commercial after short testing periods. Google has performed proton-beam radiation tests on the ground to confirm that TPUs, HBM, and other components work; the experimental evidence shows the error rate is manageable for AI inference. Google targets a 2027 launch of its first two test satellites in partnership with Planet.
What He Said About Orbital AI Compute
Story 6: Elon Musk Proposes Orbital AI Compute
He says the idea is to place data centers in space for superior heat venting, solar power, and beaming results back via lasers.
This is wrong. The primary benefit is that building and powering AI chips and data centers in space will be much faster at large scale (millions of chips), and it is the only practical way to power billions of chips or more in reasonable time frames. The cooling is a sideshow. Questions and answers to and from the satellites will travel mainly over standard Ku-band, S/L/AWS, and other regular radio-frequency links, just like other Starlink communications. There is also inter-satellite laser communication: the AI-chip satellites could laser-link to the Starlink satellites, avoiding redundant antennas and systems for direct ground communication.
Nate says that there is currently nothing in orbit, only a debate-stage proposal.
This is wrong. As seen above, Starcloud and Nvidia launched an Nvidia H100 into space on a SpaceX Falcon 9 rideshare over a month ago.
I reported on the 19-page Google Project Suncatcher report. Suncatcher is a new research moonshot to one day scale machine learning in space. Working backward from this potential future, Google is exploring how an interconnected network of solar-powered satellites, equipped with its Tensor Processing Unit (TPU) AI chips, could harness the full power of the Sun. The next step is a learning mission in partnership with Planet to launch two prototype satellites by early 2027 that will test the hardware in orbit, laying the groundwork for a future era of massively scaled computation in space.
Google's proposed approach would instead rely on arrays of smaller satellites. This more modular design would provide ample opportunity to scale to the terawatts of compute capacity that could fit within the dawn-dusk sun-synchronous low-Earth orbital band.
The Google paper examines the required inter-satellite communication bandwidth, the dynamics and control of large, tightly clustered satellite formations, the radiation tolerance of TPUs, and economic feasibility given expected future launch costs. It also discusses other significant challenges such as on-orbit reliability and repair, high-bandwidth ground communications, and thermal management.
Inter-satellite communication bandwidth is not a show-stopping problem. Various radiation tolerance and management approaches are far more advanced and more proven than is commonly known. Launch costs depend upon SpaceX Super Heavy Starship achieving full rapid reusability.
A Google v6e Trillium Cloud TPU and its associated AMD host server were tested in a 67 MeV proton beam to simulate the operating conditions of sun-synchronous LEO. This work presents the first published radiation-testing results for such a device. For the target sun-synchronous LEO with significant shielding (10 mm aluminum equivalent), the radiation environment is primarily composed of penetrating protons and Galactic Cosmic Rays (GCRs). This results in an estimated dose of ∼150 rad(Si)/year.
The High Bandwidth Memory (HBM) subsystems are the most sensitive to total ionizing dose (TID) from space radiation. HBM problems are mainly uncorrectable ECC errors (UECCs), at roughly one event per 50 rad. With shielding, this can be brought down to about one error per 10 million inferences. This error rate is likely acceptable for inference but would be a problem for AI training jobs.
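A quick back-of-the-envelope check, using only the two figures quoted above (the ∼150 rad(Si)/year dose estimate and the roughly one-UECC-per-50-rad rate), shows why this error rate looks manageable for inference. This is a rough sketch, not a calculation from the Google paper itself:

```python
# Back-of-the-envelope estimate of HBM uncorrectable ECC (UECC) events
# per year in sun-synchronous LEO, from the figures quoted above.

annual_dose_rad = 150.0  # estimated dose behind 10 mm Al-equivalent shielding, rad(Si)/year
rad_per_uecc = 50.0      # observed rate: roughly one UECC per 50 rad of dose

uecc_per_year = annual_dose_rad / rad_per_uecc
print(f"Expected UECC events per year: {uecc_per_year:.0f}")  # ~3 per year
```

A handful of uncorrectable errors per year is easily absorbed by retrying the affected inference requests, but the same error rate compounds badly across a months-long training run, which is why the text distinguishes inference from training.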
Nate's prediction: Someone will attempt a space data center in 1–2 years.
Starcloud and Nvidia have already launched one Nvidia H100 chip. All Starlink satellites have AMD Versal chips with about 5% of an H100's compute. In 2026, Starcloud will launch a larger satellite, Starcloud-2, targeting an October 2026 launch. Starcloud-2 will feature multiple Nvidia H100 chips and integrate Nvidia's next-generation Blackwell platform (including B200 or B300 chips). Crusoe has partnered with Starcloud to deploy the first public cloud platform in space, leveraging solar energy and the vacuum of space for cooling.
Crusoe will deploy its cloud platform on the Starcloud-2 satellite, scheduled for launch in late 2026, with limited GPU capacity expected to be available from orbit by early 2027. Crusoe recently announced a successful Series E raise of $1.375 billion, bringing its valuation above $10 billion. It also secured a $750 million credit line to fund data center construction.
Nate's take: Purely empirical; unknown if feasible. Trials will reveal viability for scaling LLMs.
Tests have shown it is feasible. Error rates are manageable based on proton-beam tests. Chips and memory are in space. The trial is happening. The first commercially accessible data center will launch in late 2026. I, Brian Wang of Nextbigfuture, predict that SpaceX will launch multiple operational data center satellites in 2026. There could be some modified V2 Mini and modified V3 satellites later in the year once Starship is deploying; those will mainly test out the remaining issues. It is already clear this will be viable for AI inference. SpaceX can use compute in space to enhance Starlink services with caching and other use cases.
SpaceX has publicly confirmed plans to scale Starlink V3 satellites for space-based data centers, leveraging their high-speed laser inter-satellite links, increased size (~1,500–1,900 kg), and increased power capacity (up to ~150 kW solar input per satellite in some configurations). These would enable distributed AI processing in orbit, harnessing abundant solar power and vacuum cooling. Likely, modified V2 Mini tests come in early 2026, followed by V3-based deployments later in 2026 via Starship.
SpaceX must file with the FCC (or amend an existing FCC filing) for changes to satellite design, power flux density, frequency use, or operations under its Gen2 license (up to 29,988 satellites authorized, with ~7,500 initially deployed). Adding compute payloads could alter RF emissions, thermal profiles, or mass, requiring updated orbital debris mitigation plans and EPFD (Equivalent Power Flux Density) demonstrations to avoid interference with geostationary satellites.
The process involves a public notice with a 30–60 day comment period, possible oppositions from competitors like Viasat or Dish, and then FCC review. SpaceX has a strong track record, with multiple Gen2 modifications approved in 6–18 months, including V-band additions and direct-to-cell.
New FCC filings will be needed to allow deploying 10,000+ modified V3 AI satellites. I expect applications in late 2026 to 2027, including the later 1-million-satellite applications, with approvals in mid-to-late 2027 and deployment from 2028 onward.
Timeline for FCC approvals: minor modifications (payload additions without spectrum changes) take 6–12 months, while major changes (higher power affecting interference) could take 12–24 months, expedited if framed as enhancing existing broadband services. Thus I think the first significant SpaceX use of modified compute in space will be AI services integrated with Starlink and caching of internet content. SpaceX may also integrate one to a handful of chips into all Starlink V2 Mini satellites. This could be done in as little as 3 months, provided they keep the RF and other specifications in bounds.
Brian Wang
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog, Nextbigfuture.com, is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.