Physical AI Was Everywhere at CES '26. But What Happens Next?

January 2026
IoT & Emerging Technology

Physical AI was one of the standout themes at CES 2026, spanning everything from humanoid robots to autonomous driving. At its core, physical AI refers to artificial intelligence designed to perceive and interact with the real world by being embedded in hardware — most notably, robotics.

Source: Tony Crabtree

What makes physical AI so disruptive is the way it combines advanced AI frameworks with robotics hardware to autonomously carry out complex, real-world tasks. Because these systems can learn and operate in live environments, sectors such as manufacturing, logistics, healthcare, and retail stand to unlock new levels of automation and scalability. Robotics itself isn't new, but the emergence of foundational models for physical AI is what's pushing the market forward, enabling far greater autonomy across a much broader range of use cases.

Autonomous Vehicles Leading the Way

While robots may have grabbed the headlines at CES, the most immediate opportunity for physical AI is likely to sit within the automotive sector, particularly autonomous vehicles (AVs) and autonomous driving. That's largely because manufacturers can begin to monetise these capabilities over time, helped along by rapid progress in AI performance and efficiency.

One of the most notable announcements came from NVIDIA, which unveiled Alpamayo: a family of open AI models, simulation tools, and datasets designed to accelerate the development of reasoning-based AVs. The platform introduces vision-language-action (VLA) models, allowing vehicles to interpret their surroundings, reason in real time, and respond more like a human driver across a wide range of scenarios.

Source: NVIDIA

Because Alpamayo is open source, automotive OEMs can integrate it alongside their own systems and tailor it to new use cases: from advanced off-road driving to next-generation mobility services. Just as importantly, this open approach lowers the barrier to entry across the AV ecosystem, encouraging competition and, ultimately, faster innovation.

But Humanoid Robots Are Catching Up Fast

That said, physical AI isn’t only about vehicles. We expect some of the most significant long-term momentum to come from humanoid robotics, particularly as robots begin incorporating multimodal reasoning models such as NVIDIA's Isaac GR00T N1. These models support reinforcement learning and are trained in realistic environments, which makes them especially well suited to the diverse and unpredictable tasks humanoid robots are expected to perform.

The open-source nature of these foundational models is a major reason why we believe adoption will accelerate. Smaller manufacturers and new entrants can access the same underlying intelligence as larger players, speeding up experimentation and commercial deployment.

Source: Tony Crabtree

These models provide a strong platform for growth, but they’re only the first step. The humanoid robotics market - especially in manufacturing and training environments - is set to scale rapidly over the next few years. That growth will depend heavily on manufacturers’ ability to ramp up production, something companies like Tesla and Figure AI are already targeting, with both aiming to produce more than 10,000 units next year.

Importantly, these robots won't be limited to enterprise settings. Many are expected to move into homes as well. This is where reinforcement learning really comes into its own, allowing robots to adapt and improve across a wide range of tasks over time, provided the hardware is capable.
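To make the idea of learning through trial and error concrete, here is a deliberately minimal, generic sketch of tabular Q-learning on a toy five-state corridor task. All names and parameters here are illustrative assumptions; real humanoid systems use far larger models trained in rich simulated environments, not a lookup table like this, but the feedback loop (act, observe reward, update the policy) is the same basic principle.

```python
import random

# Toy reinforcement-learning sketch: a tabular Q-learning agent learns,
# by trial and error, to walk right along a 5-position corridor to a goal.
# Purely illustrative; not any vendor's actual training stack.

N_STATES = 5          # positions 0..4; the goal is state 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 only on reaching the goal state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best-known action, occasionally explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r, done = step(s, a)
        best_next = max(q[(nxt, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# After training, the greedy policy at every pre-goal state is "move right".
policy = [max(ACTIONS, key=lambda act: q[(st, act)]) for st in range(N_STATES)]
print(policy)
```

The point of the sketch is the update rule: the agent is never told the right answer, it simply gets a reward signal and gradually shifts its behaviour toward what works, which is why the approach suits open-ended home tasks where behaviours can't all be scripted in advance.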

A good example of this in practice came from AGIBOT, which showcased its lineup of humanoid robots at CES. The company claims to have already shipped more than 5,000 units, covering use cases ranging from hospitality reception and logistics sorting to security, manufacturing, and education. What stood out was the breadth of applications: a clear sign that general-purpose humanoid robots are already finding commercial demand.

Looking ahead to 2026, we expect AI models to continue improving accuracy for specific manufacturing tasks. However, the bigger challenge - and opportunity - will be extending these capabilities into more general consumer-focused roles. While much of the recent attention has been on training frameworks, AGIBOT’s progress shows that demand for flexible, general-purpose robots already exists.


Source: AGIBOT

Shifting from Hype to ROI

We expect governments and national bodies to introduce clearer guidance around the ethical deployment and use of AI, likely on a country-by-country basis. The priority will be end-user safety, but regulation will need to remain flexible enough to keep pace with how quickly AI is evolving.

What CES 2026 made clear is that physical AI and robotics have moved beyond pure experimentation and into early commercial deployment. That said, the robotics market has been here before - and suffered from early hype cycles. This time feels different. Broader, more capable AI frameworks significantly improve the value proposition for end users, while vendors across both hardware and AI will now need to focus on something much more grounded: proving real return on investment by onboarding and retaining customers.

Source: Tony Crabtree


Ardit works within the Telecoms & Connectivity team, providing insights and strategic recommendations on current and future markets within the telecoms industry. His primary area of focus is on operator and CSP strategies. He previously worked at GlobalData for four years, where he covered the technology and telecommunications industries, and prior to that worked at Gartner for two years.
