NVIDIA Robotics Chief: AI Agents Will Trigger a "ChatGPT Moment" in Robotics
NVIDIA is extending its bets in the AI agent field into the robotics sector, betting that this technology can solve the core challenges of large-scale robot deployment.
According to The Information, Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA, said in an interview during the annual GTC conference in San Jose, California, that AI agent systems are being built with a “digital-first” approach, and robots are just a natural extension of this system. He predicts that the involvement of AI agents will be a major turning point for the robotics industry—similar to how ChatGPT impacted the AI industry—making robot deployment as simple as “hands-on and self-managed.”
This statement further clarifies NVIDIA’s strategic direction for the next phase of AI development. For investors, this means NVIDIA’s robotics narrative is expanding from hardware and simulation software to higher-level agent orchestration software, with potential market size and business models likely to grow further.
AI Agents: The “Air Traffic Control” for Robots
Talla describes two core values of AI agents in robotics scenarios. The first is at the coding layer: agents can be used to build the “brain” of robots, automatically generate training data, and evaluate robot AI models. NVIDIA announced this week that coding agents such as Claude Code, OpenAI’s Codex, and Cursor can now call NVIDIA’s Osmo software to automate these functions.
The second is at the orchestration layer: in multi-robot scenarios such as factories or warehouses, a single agent can act as “air traffic control,” breaking down overall goals into specific tasks, assigning them to humanoid robots, industrial arms, and other robot types, while ensuring no collisions occur between robots or with human workers. Talla notes that this orchestration function will run on cloud or local servers, continuously simulating different strategies and issuing execution plans.
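The orchestration idea described above can be sketched as a toy fleet coordinator. Everything here is hypothetical (the class and function names are illustrative, not NVIDIA APIs), and collision avoidance is reduced to the simplest possible proxy: never assigning two tasks to the same robot at once.

```python
from dataclasses import dataclass

# Toy "air traffic control" orchestrator: decompose a goal into tasks,
# then assign each task to an idle robot of the required kind.
# All names are illustrative; this is not NVIDIA's actual software.

@dataclass
class Robot:
    name: str
    kind: str            # e.g. "humanoid", "arm"
    busy: bool = False

@dataclass
class Task:
    description: str
    requires: str        # robot kind this task needs

def decompose(goal: str) -> list[Task]:
    # In a real system an AI agent would plan this; here it is hard-coded.
    return [
        Task("pick items from shelf", requires="arm"),
        Task("carry bin to packing station", requires="humanoid"),
    ]

def assign(tasks: list[Task], fleet: list[Robot]) -> dict[str, str]:
    plan: dict[str, str] = {}
    for task in tasks:
        for robot in fleet:
            if robot.kind == task.requires and not robot.busy:
                robot.busy = True   # one task per robot: crude stand-in
                plan[task.description] = robot.name   # for deconfliction
                break
    return plan

fleet = [Robot("arm-1", "arm"), Robot("h-1", "humanoid")]
plan = assign(decompose("fulfil order"), fleet)
print(plan)
```

A production orchestrator would, as Talla describes, run continuously on cloud or local servers, re-simulating candidate plans before issuing them; this sketch only captures the goal-decomposition and assignment steps.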
This approach is not unique to NVIDIA. Reports indicate that Amazon released DeepFleet last year, its in-house AI model for coordinating warehouse robots, which is expected to improve robot operating efficiency by 10%.
Market Logic Behind the ChatGPT Analogy
Talla attributes ChatGPT’s success to two factors: first, its versatility—handling various tasks without specialized training; second, its extremely low barrier to use—anyone can get started without prior learning. He believes the robotics industry needs to achieve breakthroughs in both areas—having a general-purpose “brain” capable of reasoning and problem-solving, and making robot deployment simple enough for widespread adoption.
NVIDIA CEO Jensen Huang also stated at GTC that “within a few years, the idea of running OpenClaw inside robots is quite obvious,” referring to this popular open-source agent. At this conference, open-source agents (including NVIDIA’s own NemoClaw) and robots became the two most discussed topics.
Notably, Talla acknowledges that agent orchestration cannot solve every challenge robots face: significant limitations remain in manipulating small or soft objects and in operating safely around humans.
Cosmos World Model: Mixed Progress, Still Maturing
Regarding the world models that robot training depends on, Talla provided a cautious assessment of NVIDIA’s Cosmos model. He said Cosmos was released in January 2025, with updates every two to three months. As the versions improve in quality, the number of adopters continues to grow, but some companies prefer to wait for the next version in three to six months.
Talla pointed out that Cosmos is a collection of multiple models covering reasoning, prediction, and 3D data generation, with varying levels of maturity. Whether it can meet specific application needs depends on the use case.
In terms of computational resource consumption, he noted that currently, robot companies mainly focus on model training, because a general robot “brain” does not yet exist, and the main bottleneck limiting its development is data scarcity. He predicts that as large-scale deployment of robots occurs, simulation computing demands will grow in a “hockey stick” pattern, but “we are still far from deploying robots in swarms.” This judgment provides important insight into NVIDIA’s GPU demand trajectory in the robotics field in the medium term.