Starcloud Achieves First Orbital AI Model Training with Nvidia H100 GPU

Starcloud, a Washington-based startup formerly known as Lumen Orbit, announced on December 10, 2025, that it had successfully trained and run inference with AI models aboard its Starcloud-1 satellite, launched in early November 2025. The satellite carries an Nvidia H100 GPU, reportedly 100 times more powerful than any previous space-based compute, which ran Google's open-source Gemma large language model for inference and trained Andrej Karpathy's NanoGPT on the complete works of Shakespeare, generating Shakespearean-style responses. This marks the first time a high-performance GPU has executed LLM training and inference in orbit, demonstrating the viability of space-based data centers powered by continuous solar energy.
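For context, the sketch below shows what Gemma inference looks like in software, assuming the Hugging Face transformers library and the public google/gemma-2b checkpoint. Starcloud has not published its onboard software stack, so every detail here is illustrative rather than a description of what actually ran on the satellite.

```python
# Minimal sketch of Gemma inference with Hugging Face transformers.
# Assumes the public "google/gemma-2b" checkpoint (may require accepting
# the model license on Hugging Face); Starcloud's actual onboard stack
# is not publicly documented.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Shall I compare thee to a summer's day?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```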

Starcloud's Mission and Track Record

Founded in 2024, Starcloud aims to build orbital data centers that exploit falling launch costs, uninterrupted solar power (up to 5x more efficient than terrestrial solar, with no atmospheric interference), and passive radiative cooling to scale AI compute to gigawatt levels without Earth's energy, water, or land constraints. The company targets AI's escalating compute demand, which is projected to double global data center electricity use by 2030, while cutting emissions and costs by up to 10x. Its track record includes graduating from Y Combinator (S24 cohort), participating in Nvidia's Inception program and the Google for Startups Cloud AI Accelerator, and raising approximately $21 million in seed funding. Starcloud-1 is the company's first demonstrator mission, with Starcloud-2 (GPU clusters and storage) planned for 2026 and a 5 GW orbital cluster targeted for the early 2030s.

People Behind Starcloud

The founding team includes CEO Philip Johnston (second-time founder with McKinsey satellite consulting experience, Harvard MPA, Wharton MBA); CTO Ezra Feilden (PhD in astrophysics, expertise in deployable structures from Airbus Defense & Space/SSTL and Oxford Space Systems, including NASA's Lunar Pathfinder); and Chief Engineer Adi Oltean (former SpaceX Starlink principal engineer for inter-satellite beams, 20+ years at Microsoft on GPU clusters with 25+ patents). The ~12-person team draws from aerospace, cloud, and ML infrastructure backgrounds.

Running GPUs in Space: Technical Solutions

Starcloud-1, a roughly 60 kg microsatellite built on a Corvus-Micro platform, integrates an unmodified Nvidia H100 GPU with custom supporting systems. Power comes from deployable solar arrays that deliver near-continuous energy in a sun-synchronous orbit.
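As a rough illustration of the power budget involved, the sketch below sizes a solar array for a single H100-class GPU. The board power, cell efficiency, and overhead figures are assumptions chosen for illustration, not published Starcloud specifications.

```python
# Back-of-envelope solar array sizing for a single H100-class GPU.
# All figures below are illustrative assumptions, not published Starcloud specs.
SOLAR_CONSTANT_W_M2 = 1361.0   # mean solar irradiance above the atmosphere
CELL_EFFICIENCY = 0.30         # typical of modern multi-junction space cells
GPU_POWER_W = 700.0            # upper end of H100 board power (SXM variant)
OVERHEAD_FACTOR = 1.3          # bus, avionics, and conversion losses (assumed)

required_power_w = GPU_POWER_W * OVERHEAD_FACTOR
array_area_m2 = required_power_w / (SOLAR_CONSTANT_W_M2 * CELL_EFFICIENCY)
print(f"Required array area: ~{array_area_m2:.1f} m^2")  # ~2.2 m^2 with these figures
```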

For radiation: high-energy particles in low Earth orbit risk bit flips and latch-ups. Rather than heavy shielding or radiation-hardened chips, Starcloud relies on software-defined radiation tolerance (redundant computations, error-correcting codes, and AI-optimized fault detection), accepting a potentially shorter hardware lifespan of roughly five years for the Nvidia hardware. The redundancy idea is sketched below.
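The redundant-computation idea can be illustrated with a simple triple-modular-redundancy pattern: run the same computation several times and accept the result a majority of runs agree on. Starcloud's actual fault-tolerance software is not public, so this shows only the general technique.

```python
# Sketch of software-level redundancy against radiation-induced bit flips:
# repeat the same computation and accept the majority result. This is the
# general technique only; Starcloud's implementation details are not public.
import numpy as np

def vote(results):
    """Return a result that at least two runs agree on, else raise."""
    for i in range(len(results)):
        for j in range(i + 1, len(results)):
            if np.array_equal(results[i], results[j]):
                return results[i]
    raise RuntimeError("No two runs agree; recompute or fall back")

def redundant_matmul(a, b, runs=3):
    # In a real system each run might execute on different hardware partitions
    # or at different times to decorrelate upset events.
    return vote([a @ b for _ in range(runs)])

a = np.random.rand(256, 256).astype(np.float32)
b = np.random.rand(256, 256).astype(np.float32)
c = redundant_matmul(a, b)
print(c.shape)
```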

For heat dissipation: vacuum eliminates convection, so heat is conducted from the GPU die through solid-state interfaces to large deployable radiator panels (emissivity above 0.9, optimized for infrared) that emit waste heat toward deep space (a ~2.7-4 K background). The panels are oriented away from the Sun and Earth for efficiency, and future clusters could scale them to multi-square-kilometer arrays.
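Radiator sizing follows from the Stefan-Boltzmann law, P = ε·σ·A·T⁴. A rough worked example is given below; every figure is an illustrative assumption rather than a published Starcloud number.

```python
# Rough radiator sizing from the Stefan-Boltzmann law: P = epsilon * sigma * A * T^4.
# All numbers are illustrative assumptions, not published Starcloud specs.
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9         # high-emissivity radiator coating (per the text)
RADIATOR_TEMP_K = 320.0  # assumed panel temperature while rejecting GPU heat
WASTE_HEAT_W = 910.0     # GPU plus platform overhead (assumed)

# Radiating area needed to reject the waste heat to deep space; the ~2.7 K
# background contributes negligible incoming radiation at these temperatures.
area_m2 = WASTE_HEAT_W / (EMISSIVITY * SIGMA * RADIATOR_TEMP_K**4)
print(f"Required radiator area: ~{area_m2:.1f} m^2")  # ~1.7 m^2 with these figures
```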

These innovations enabled stable H100 operation, with real-time telemetry queries and plans for expanded testing.

References

  1. CNBC: "Nvidia-backed Starcloud trains first AI model in space, orbital data centers" – https://www.cnbc.com/2025/12/10/nvidia-backed-starcloud-trains-first-ai-model-in-space-orbital-data-centers.html
  2. Starcloud Official Website – https://www.starcloud.com/
  3. Nvidia Blog: "How Starcloud Is Bringing Data Centers to Outer Space" – https://blogs.nvidia.com/blog/starcloud/
  4. Starcloud Whitepaper: "Why we should train AI in space" – https://starcloudinc.github.io/wp.pdf