Ai2 Introduces Open, Simulation-First Stack for Physical AI, Achieving Zero-Shot Transfer to Real Robots

March 11, 2026

Ai2


Today’s AI systems can write essays, generate images, and answer complex questions, but intelligence in the physical world is fundamentally harder. Robots must reliably perceive, grasp, and manipulate objects in messy, real-world environments. That means understanding 3D structure, object affordances, spatial constraints, and how to avoid collisions — all while adapting their behavior as conditions change. That’s why progress in robotics isn’t just about building better machines. It also strengthens our understanding of intelligence itself by revealing whether AI systems can generalize and operate reliably in the real world.

Today, Ai2 announced a breakthrough: robotics models trained entirely in simulation that transfer directly to real-world robots with no additional manually collected data or fine-tuning (a milestone known as zero-shot sim-to-real transfer). For years, simulation has been used as a starting point in robotics. But researchers still needed months of teleoperated real-world demonstrations to make systems reliable. Demonstrating zero-shot transfer without that step challenges one of the field’s core assumptions and removes a major bottleneck to progress.

Alongside this result, we’re releasing the open infrastructure that made it possible, including the large-scale simulation ecosystem and models that power the system. Together, they demonstrate that with sufficient diversity across scenes, objects, lighting, physics, and task definitions, zero-shot transfer from simulation alone is not just possible, but practical.

“Our mission is to build AI that advances science and expands what humanity can discover,” said Ali Farhadi, CEO of Ai2. “Robotics can become a foundational scientific instrument, helping researchers move faster and explore new questions. To get there, we need systems that generalize in the real world and tools the global research community can build on together. Demonstrating transfer from simulation to reality is a meaningful step in that direction.”

Introducing MolmoSpaces and MolmoBot

With MolmoSpaces and MolmoBot, Ai2 puts that breakthrough into practice—marking a shift in how physical AI systems can be built. If simulation alone can produce real-world capability, then progress in robotics no longer depends on collecting proprietary datasets at scale. It depends on designing richer virtual worlds. 

MolmoSpaces: Large-Scale Simulation for Embodied Learning

MolmoSpaces, an open ecosystem for embodied AI research, brings together more than 230,000 indoor scenes, more than 130,000 curated object assets, and over 42 million physics-grounded robotic grasp annotations in a single unified platform. Researchers can vary object properties, layouts, lighting, articulation, and task definitions in a controlled and systematic way.
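
To make that kind of controlled variation concrete, the sketch below shows how randomized scene sampling of this sort might look in Python. All names, ranges, and task labels here are illustrative assumptions for exposition, not the actual MolmoSpaces API.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of controlled scene variation; these names and ranges
# are illustrative assumptions, not the actual MolmoSpaces API.

@dataclass
class SceneConfig:
    scene_id: int           # which of the ~230,000 indoor scenes to load
    object_ids: list        # which of the ~130,000 curated object assets to place
    light_intensity: float  # lighting variation
    friction: float         # physics/material variation
    task: str               # task definition

def sample_scene_config(rng):
    """Sample one randomized training scene. Breadth across scenes, objects,
    lighting, physics, and tasks is the training signal described above."""
    return SceneConfig(
        scene_id=rng.randrange(230_000),
        object_ids=rng.sample(range(130_000), k=rng.randint(1, 5)),
        light_intensity=rng.uniform(0.2, 1.5),
        friction=rng.uniform(0.3, 1.2),
        task=rng.choice(["pick_and_place", "open_drawer", "open_door"]),
    )

rng = random.Random(0)
batch = [sample_scene_config(rng) for _ in range(1_000)]  # one batch of varied scenes
```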

Historically, simulation has been used as a pretraining tool. Real-world data was still required to make systems generalize. MolmoSpaces was built to test whether sufficient diversity in simulation alone could produce robust real-world behavior. By releasing the assets, tools, and infrastructure openly, Ai2 is making robotics research more explainable and accessible, turning simulation into shared scientific infrastructure.

MolmoBot: Zero-Shot Transfer from Simulation

Built on MolmoSpaces, MolmoBot is a fully open manipulation model suite trained entirely on synthetic data. MolmoBot demonstrates what the zero-shot approach makes possible. Across two different robot systems, including a mobile manipulator, it performs pick-and-place, articulated-object manipulation such as opening drawers and cabinets, and door-opening tasks on unseen objects in new environments.

It achieves this without real-world demonstration data, photorealistic rendering, or task-specific adaptation. Ai2’s evaluations show that diversity across environments and assets matters more than simply repeating the same scenario at scale. Breadth in simulation leads to stronger generalization in reality.

This finding reshapes how robots can be trained. Instead of relying on months of cumbersome manual data collection, researchers can focus on designing richer simulated environments, an approach that scales with compute and builds on a shared foundation.

“Most approaches try to close the sim-to-real gap by adding more real-world data,” said Ranjay Krishna, Director of the PRIOR team at Ai2. “We took the opposite bet: that the gap shrinks when you dramatically expand the diversity of simulated environments, objects, and camera conditions. Our latest advancement shifts the constraint in robotics from collecting manual demonstrations to designing better virtual worlds, and that’s a problem we can solve.”

If simulation becomes the primary training ground for physical AI, robotics research becomes faster, more reproducible, and more accessible. That shift matters not just for robotics but for science, enabling more labs and companies to build capable physical AI systems.

“For AI to truly advance science, progress cannot depend on closed data or isolated systems,” Farhadi continued. “It requires shared infrastructure that researchers everywhere can build on, test, and improve together. This is how we believe physical AI will move forward.”

MolmoSpaces and MolmoBot are fully open, including models, simulation infrastructure, grasp annotations, data generation pipelines, and benchmarking tools. MolmoSpaces is also designed to work across widely used simulators — including MuJoCo and NVIDIA’s open learning and simulation frameworks, NVIDIA Isaac Lab and NVIDIA Isaac Sim — so that advances in simulation-trained robotics can integrate into broader research ecosystems.
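
As a small illustration of that interoperability, a scene exported in a simulator’s native format can be loaded and stepped directly. The sketch below uses MuJoCo’s Python bindings with a placeholder file path; the actual MolmoSpaces export format and loaders are an assumption here, not something this post specifies.

```python
import mujoco

# Illustrative only: assumes a MolmoSpaces scene has been exported as MJCF XML.
# "molmospaces_scene.xml" is a placeholder path, not an actual released asset.
model = mujoco.MjModel.from_xml_path("molmospaces_scene.xml")
data = mujoco.MjData(model)

# Step the physics for one simulated second at the model's own timestep.
steps = int(1.0 / model.opt.timestep)
for _ in range(steps):
    mujoco.mj_step(model, data)

print(f"simulated {steps} steps; final sim time = {data.time:.3f} s")
```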

Continued progress in physical AI will require collaboration across models, simulation systems, and beyond. Ai2 is committed to ensuring that this progress remains open, transparent, and grounded in scientific rigor.

Dive into our technical blog for more

We’re excited to continue the conversation at NVIDIA GTC next week in San Jose (March 16–19, 2026). See you there!
