
Nvidia plans an orbital data‑centre platform
At GTC 2026, CEO Jensen Huang unveiled an orbital data-centre platform and also outlined a $1trn chip-order pipeline, new autonomous-vehicle partnerships and the first Groq chip under Nvidia's ownership.
Nvidia has unveiled a computing platform for orbital data centres, CEO Jensen Huang told the GTC 2026 conference.
“Computing in space is the final frontier, and it is already here. As satellite constellations are deployed and we venture deeper into the galaxy, intelligence must be where the data is generated,” Huang said.
A press release from Nvidia said several companies will use the Vera Rubin Space-1 module, which includes IGX Thor and Jetson Orin, in space missions. The chips are purpose-built “for environments where size, weight and power are severely constrained”.
Huang stressed that the company is working with partners on a new computer for orbital data centres, though technical hurdles remain.
“There is no convection in space — only radiation. So we need to figure out how to cool such systems. Many engineers are working on this,” he said.
The build-out of data centres to feed surging AI demand has been linked to rising electricity prices. One proposed answer is to put compute in space — where there is limitless room and constant solar energy. But the high cost of launches remains a serious obstacle.
In February SpaceX filed a request with the US Federal Communications Commission to deploy a constellation of 1 million satellites for data centres.
The project envisages a network of low-Earth-orbit data centres linked by laser channels. The filing uses grand phrasing such as “the first step toward a Type II civilization on the Kardashev scale”.
In 2026 California startup Aetherflux plans to place solar mini-farms in low-Earth orbit to beam energy from space to Earth using lasers. It will use SpaceX rockets to deploy the technology.
In November 2025 Google said it aimed to build a system of near-Earth satellites to harvest solar power and feed data centres. That month researchers at 33FG calculated that by 2030 AI compute in orbit will be cheaper than on Earth.
A trillion
Huang said expected orders for Blackwell and Vera Rubin chips would reach $1trn by 2027.
Last year the company put potential revenue from the two generations at $500bn. But after reporting results last month, CFO Colette Kress noted that growth in 2026 could surpass earlier estimates.
Demand for Nvidia’s solutions is rising among both startups and large corporations, Huang said.
“Getting more compute lets you generate more tokens and increase revenues,” he added.
Autonomous machines
Nvidia is expanding partnerships in autonomous vehicles. The company announced new agreements with Hyundai Motor, Nissan Motor, Isuzu, BYD and Geely.
The agreements cover the Drive Hyperion vehicle platform, which helps carmakers develop and integrate driver-assistance and Level 4 autonomous-driving systems.
“We have been working on self-driving cars for a long time. The ChatGPT moment for self-driving machines has already arrived,” Huang said.
No vehicle on the market can yet drive entirely without human oversight, though some firms, such as Waymo, already operate taxi services with Level 4 cars.
Most current driver-assistance systems work at Level 2, where the driver must supervise the vehicle at all times.
Drive Hyperion spans model training in data centres, large-scale simulation and in-vehicle compute. Current customers include Aurora Innovation, Nuro, Sony Group, Uber, Stellantis and Lucid Group.
Other releases
At GTC 2026 Huang unveiled the Groq 3 Language Processing Unit (LPU), the first Groq chip since Nvidia acquired the startup’s assets for $20bn in December 2025. Shipments are expected in the third quarter.
He also announced the Groq 3 LPX server rack, comprising 256 LPUs. It is designed to operate alongside the Vera Rubin system, whose shipments are due later in 2026. According to Huang, the rack can boost Vera Rubin’s token-per-watt efficiency by 35 times.
“We combined two processors with completely different characteristics: one for high throughput, the other for low latency. That does not change the fact that we need a lot of memory. So we will simply add a large number of Groq chips, which will expand its available capacity,” Huang noted.
Nvidia also demonstrated a prototype of Kyber, a next-generation server architecture comprising 144 vertically mounted GPUs to increase compute density and cut costs.
Kyber will form part of the Vera Rubin Ultra system, with shipments expected in 2027.
Nvidia’s CEO also introduced a developer toolkit for building and testing new AI systems on the company’s hardware. He showed the NemoClaw stack, built specifically for OpenClaw.
In March, Huang rejected the notion that AI is a “job killer”.