Nvidia released a new stack of robot foundation models, simulation tools, and edge hardware at CES 2026, a set of moves that signals the company’s ambition to become the default platform for generalist robotics, much as Android became the operating system for smartphones.

Nvidia’s move into robotics reflects a broader industry shift as AI moves off the cloud and into machines that reason and act in the physical world, enabled by cheaper sensors, advanced simulation, and AI models that can increasingly generalize across tasks.

Nvidia revealed details on Monday about its full-stack ecosystem for physical AI, including new open foundation models, all available on Hugging Face, that allow robots to reason, plan, and adapt across many tasks and diverse environments rather than being confined to narrow, task-specific jobs.

Those models include: Cosmos Transfer 2.5 and Cosmos Predict 2.5, two world models for synthetic data generation and robot policy evaluation in simulation; Cosmos Reason 2, a reasoning vision language model (VLM) that allows AI systems to see, understand, and act in the physical world; and Isaac GR00T N1.6, its next-generation vision language action (VLA) model purpose-built for humanoid robots. GR00T uses Cosmos Reason as its brain and unlocks whole-body control for humanoids, so they can move and handle objects simultaneously.
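Because the checkpoints live on Hugging Face, pulling one down for local experimentation follows the standard Hub workflow. Below is a minimal sketch using the huggingface_hub client; the repo ID shown is a placeholder for illustration, so check Nvidia’s Hugging Face organization for the exact model names.

```python
# Minimal sketch: download a model snapshot from the Hugging Face Hub.
# The repo_id below is hypothetical; substitute the actual ID of the
# Cosmos or GR00T checkpoint you want from Nvidia's Hub organization.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="nvidia/GR00T-N1.6",  # placeholder repo ID, for illustration only
)
print(f"Model files downloaded to: {local_path}")
```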

Nvidia also introduced Isaac Lab-Arena at CES, an open source simulation framework hosted on GitHub that serves as another component of the company’s physical AI platform, enabling safe virtual testing of robotic capabilities.

The platform promises to address a critical industry challenge: As robots learn increasingly complex tasks, from precise object handling to cable installation, validating these abilities in physical environments can be costly, slow, and risky. Isaac Lab-Arena tackles this by consolidating resources, task scenarios, training tools, and established benchmarks like Libero, RoboCasa, and RoboTwin, creating a unified standard where the industry previously lacked one.
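To make the benchmarking idea concrete, here is a schematic of what policy evaluation in simulation generally looks like. Isaac Lab builds on the Gymnasium API, but the environment ID, random policy, and success criterion below are stand-ins rather than Isaac Lab-Arena’s actual task names or metrics.

```python
# Schematic policy-evaluation loop in a simulated environment.
# CartPole-v1 stands in for a simulated robot task; a real run would
# load an Isaac Lab-Arena task and a trained policy instead.
import gymnasium as gym

env = gym.make("CartPole-v1")
successes, episodes = 0, 10

for _ in range(episodes):
    obs, info = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()  # replace with a trained policy
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated
    successes += total_reward > 195  # toy success criterion for this sketch

print(f"success rate: {successes / episodes:.0%}")
env.close()
```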

Supporting the ecosystem is Nvidia OSMO, an open source command center that acts as connective infrastructure, integrating the entire workflow from data generation through training across both desktop and cloud environments.


And to help power it all, there’s the new Blackwell-powered Jetson T4000, the newest member of the Thor family. Nvidia is pitching the module as a cost-effective on-device compute upgrade that delivers 1,200 teraflops of AI compute and 64 gigabytes of memory while running efficiently at 40 to 70 watts.

Nvidia is also deepening its partnership with Hugging Face to let more people experiment with robot training without needing expensive hardware or specialized knowledge. The collaboration integrates Nvidia’s Isaac and GR00T technologies into Hugging Face’s LeRobot framework, connecting Nvidia’s 2 million robotics developers with Hugging Face’s 13 million AI builders. The developer platform’s open source Reachy 2 humanoid now works directly with Nvidia’s Jetson Thor chip, letting developers experiment with different AI models without being locked into proprietary systems.  
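For a sense of what working in LeRobot looks like, here is a minimal sketch of loading one of the framework’s public datasets. The "lerobot/pusht" dataset is a real example from the project’s documentation, but module paths have shifted across lerobot releases, so the import shown is an assumption to verify against your installed version.

```python
# Minimal sketch: load a public robotics dataset through LeRobot.
# Import path is from earlier lerobot releases and may differ in
# newer versions; check the version you have installed.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht")
print(f"frames in dataset: {len(dataset)}")

frame = dataset[0]  # a dict of tensors: camera images, robot state, actions
print(list(frame.keys()))
```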

The bigger picture is that Nvidia is trying to make robotics development more accessible, and it wants to be the underlying hardware and software vendor powering it, much as Android is the default for smartphone makers.

There are early signs that Nvidia’s strategy is working. Robotics is the fastest-growing category on Hugging Face, with Nvidia’s models leading downloads. Meanwhile, robotics companies from Boston Dynamics and Caterpillar to Franka Robots and NEURA Robotics are already using Nvidia’s tech.

Follow along with all of TechCrunch’s coverage of the annual CES conference here.


