Refining whole body control to bring the ‘physical’ to physical AI

by Veda Ellis-Ong | Dec 12, 2025

As part of our journey into physical AI, achieving natural, human-like motion in humanoid and other intelligent robots will be critical. Our environments are designed for the human body, so to truly integrate into these spaces, robots must move through them with the same ease and fluidity that we do. This capability will unlock an entirely new class of applications for physical AI, enabling intelligent robots to collaborate with us in warehouses, factories, hospitals and even our homes.

But actions that are effortless for us are extraordinarily complex for a robot. Each requires many joints working in precise concert, responding continuously to the surrounding environment. Achieving this seamless, context-aware movement is made possible by a fundamental concept in robotics: whole body control.

What is physical AI and whole body control?

Physical AI is embodied AI capable of both understanding and navigating the physical world, so naturally, the ability to move intuitively through the world will be key.

At its core, whole-body control refers to the coordinated control of all a robot’s joints at once, in a physically feasible and dynamically balanced way. Rather than sending separate commands to each joint independently, a whole-body controller generates actions for the entire system simultaneously.

These actions take into account the robot’s complete state – joint angles, velocities, body position and contact with the environment – to produce movement that looks and feels natural, or indeed, human.
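To make the idea concrete, here is a minimal sketch of what a whole-body controller consumes and produces: the complete robot state goes in, and commands for every joint come out in a single pass. The field names and the toy PD control law are illustrative assumptions, not any particular robot's API.

```python
from dataclasses import dataclass

# Hypothetical state container -- field names are illustrative.
@dataclass
class RobotState:
    joint_angles: list       # rad, one entry per joint
    joint_velocities: list   # rad/s
    base_position: tuple     # (x, y, z) of the torso in the world frame
    base_orientation: tuple  # roll, pitch, yaw in rad
    foot_contacts: tuple     # (left_in_contact, right_in_contact)

def whole_body_command(state: RobotState, target_angles: list,
                       kp: float = 60.0, kd: float = 3.0) -> list:
    """Toy whole-body PD controller: one pass computes torques for
    *every* joint at once from the complete state, rather than running
    an isolated loop per joint."""
    return [
        kp * (target - q) - kd * dq
        for q, dq, target in zip(state.joint_angles,
                                 state.joint_velocities,
                                 target_angles)
    ]

state = RobotState(
    joint_angles=[0.0, 0.1], joint_velocities=[0.0, -0.2],
    base_position=(0.0, 0.0, 0.9), base_orientation=(0.0, 0.0, 0.0),
    foot_contacts=(True, True),
)
torques = whole_body_command(state, target_angles=[0.2, 0.1])
```

A real whole-body controller would, of course, solve a constrained optimisation over dynamics and contacts rather than run independent PD terms; the point here is only the interface: full state in, all joint commands out together.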

This holistic approach allows robots to pursue global motion objectives – reproducing complex human movements from motion capture data and executing dynamic skills such as flips and kicks. Because the entire body works together, the robot can use balance, momentum and coordination to move efficiently and expressively, just as we do.

Cracking this capability would have far-reaching effects across a huge range of industries. In manufacturing, retail and logistics operations, intelligent, human-like robots could work alongside humans, fitting into existing factory and warehouse layouts without major changes and with far more flexibility than fixed robotic arms. Working in healthcare or assisted care, they could provide valuable assistance for elderly or disabled people, such as helping them out of bed or assisting with household chores. In construction, they could climb ladders or scaffolds, while in hazardous environments, such as mining, they could navigate unpredictable terrain to carry out dangerous inspections with ease. The list goes on.

But as we’re finding, achieving this mobility is an immense challenge. In humans, every limb affects the others. When you reach for something, your legs shift to maintain balance, your torso rotates and your shoulders compensate. Independent joint control in current robotics fails to capture these subtleties, often resulting in stiff, unstable and unnatural motion. Our whole-body control development captures these interdependencies, enabling robots to behave as cohesive balanced systems.

Overcoming the challenges of natural movement

Developing whole-body control spans motion retargeting, policy training and real-world deployment. Each presents distinct challenges – and opportunities for innovation.

1. Training robots to move like us

Dynamic, full-body motion requires accurate joint positioning over time, where errors become amplified as the motion sequence evolves. Our approach combines Imitation Learning (IL) and Reinforcement Learning (RL): IL teaches human-like patterns, while RL improves robustness and fluidity. Together, they produce stable, human-like movement on short sequences.
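One common way to combine imitation and reinforcement signals – used, for example, in DeepMimic-style motion imitation – is a reward that exponentiates tracking errors against a reference motion-capture frame. The weights and scales below are illustrative assumptions, not tuned values from our pipeline.

```python
import math

def imitation_reward(pose, ref_pose, vel, ref_vel,
                     w_pose=0.7, w_vel=0.3):
    """DeepMimic-style tracking reward: the RL objective is shaped by
    how closely the policy's pose and velocity track a reference
    motion-capture frame. Weights/scales here are illustrative."""
    pose_err = sum((p - r) ** 2 for p, r in zip(pose, ref_pose))
    vel_err = sum((v - r) ** 2 for v, r in zip(vel, ref_vel))
    return (w_pose * math.exp(-2.0 * pose_err)
            + w_vel * math.exp(-0.1 * vel_err))

# Perfect tracking gives the maximum reward of w_pose + w_vel = 1.0.
r = imitation_reward([0.1, 0.2], [0.1, 0.2], [0.0, 0.0], [0.0, 0.0])
```

Because the tracking terms decay exponentially with error, small deviations early in a sequence cost little, but the compounding drift described above quickly drives the reward towards zero – which is exactly what pushes the policy back onto the reference motion.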

Even so, difficulties remain. When training a robot to learn several different movements at once, like walking, turning and jumping, a single control system often struggles to remember them all. As it learns new motions, it forgets the older ones – a problem called ‘catastrophic forgetting’. This means that instead of mastering all motions together, performance drops across the board, prompting our exploration of methods that retain prior knowledge while enabling continual learning.
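A widely used mitigation for catastrophic forgetting is rehearsal: mixing frames from previously learned motions back into each training batch so the policy keeps seeing them. This is a generic sketch of that idea under assumed names, not our actual training method.

```python
import random

def build_batch(current_clip, replay_buffer,
                batch_size=8, replay_frac=0.5):
    """Rehearsal sketch: sample part of each batch from the motion
    currently being learned and part from a buffer of earlier motions,
    so old skills keep receiving gradient signal."""
    n_replay = int(batch_size * replay_frac) if replay_buffer else 0
    batch = random.choices(current_clip, k=batch_size - n_replay)
    if n_replay:
        batch += random.choices(replay_buffer, k=n_replay)
    random.shuffle(batch)
    return batch

walk_frames = [("walk", t) for t in range(100)]  # motion being learned
jump_frames = [("jump", t) for t in range(100)]  # previously learned
batch = build_batch(walk_frames, jump_frames)
```

Other families of fixes – regularising weights that matter for old skills, or training separate experts per motion – trade off differently against memory and deployment complexity.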

2. Constructing a flow of motion in robotics

Another difficulty is teaching the robot how to start and finish each motion smoothly. For example, if a motion sequence shows a robot kicking a ball, how does it get into the right pose before the kick? And what does it do right after the kick – stand still, or prepare for another action? To manage this, we extended each motion with a standardised start and end pose so the robot always begins and ends in a stable position.
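The padding described above can be sketched as prepending and appending frames that blend between a canonical rest pose and the clip's first and last frames. Linear joint-wise interpolation is a simplifying assumption here; a real pipeline would blend in a dynamically consistent way.

```python
def pad_with_rest_pose(clip, rest_pose, blend_frames=5):
    """Extend a motion clip so it starts and ends at a canonical rest
    pose, linearly blending in and out (illustrative sketch)."""
    def blend(a, b, steps):
        # Interpolate joint-wise from pose a towards pose b over `steps`
        # intermediate frames (endpoints excluded).
        return [[ai + (bi - ai) * (i + 1) / (steps + 1)
                 for ai, bi in zip(a, b)]
                for i in range(steps)]
    intro = blend(rest_pose, clip[0], blend_frames)
    outro = blend(clip[-1], rest_pose, blend_frames)
    return [rest_pose] + intro + clip + outro + [rest_pose]

rest = [0.0, 0.0]
kick = [[0.3, -0.2], [0.5, -0.4], [0.2, -0.1]]
padded = pad_with_rest_pose(kick, rest)
```

After padding, every clip is guaranteed to begin and end in the same stable configuration, which is precisely what creates the selection ambiguity discussed next.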

But while this solved one problem, it introduced a new one: if every motion begins from the same pose, the robot can’t easily decide which motion to perform when it starts moving. During deployment, it chooses a motion based on its current joint positions and velocities, not based on a named goal (like ‘kick’ or ‘wave’). Since all motions begin from nearly identical positions, the system can’t distinguish between them, so it might pick the wrong one. Finding a robust way to handle this motion selection is still an open challenge that we’re exploring.
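The ambiguity is easy to see in a toy version of state-based selection: pick the clip whose starting state is nearest to the robot's current joints and velocities. When every clip begins from the same standardised pose, the distances tie and the choice is effectively arbitrary. All names below are illustrative.

```python
def select_motion(current_state, motion_starts):
    """Nearest-start motion selection sketch: choose the motion whose
    recorded starting state is closest (squared distance) to the
    robot's current state."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(motion_starts,
               key=lambda name: dist(current_state, motion_starts[name]))

# Both motions now begin from the same standardised rest pose, so the
# distances tie and the selected motion is arbitrary.
starts = {"kick": [0.0, 0.0], "wave": [0.0, 0.0]}
chosen = select_motion([0.01, 0.0], starts)
```

Possible remedies – conditioning the policy on an explicit goal label, or on a latent motion code – all amount to adding information beyond the initial joint state, which is the open design question mentioned above.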

3. From simulation to reality

Bridging simulation and reality is another of robotics’ toughest challenges. In simulation, the robot trains with perfect information; in the real world, sensor data is noisy and delayed. To handle this, we use an asymmetric control system. During training, the policy is learnt from full simulator data, while in deployment it only accesses real sensor input.
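The asymmetric split can be sketched as two observation sets drawn from the same simulator state: the critic (training-time only) sees privileged quantities such as true root velocity and contact forces, while the actor sees only what real sensors can provide. The keys below are illustrative assumptions, not an actual simulator API.

```python
def split_observations(sim_state):
    """Asymmetric observation sketch: the actor's view is restricted to
    sensor-realisable signals; the critic's view is the full privileged
    simulator state."""
    sensor_keys = ("joint_angles", "joint_velocities", "imu_orientation")
    actor_obs = {k: sim_state[k] for k in sensor_keys}
    critic_obs = dict(sim_state)  # privileged: everything the sim knows
    return actor_obs, critic_obs

sim_state = {
    "joint_angles": [0.1, 0.2],
    "joint_velocities": [0.0, 0.0],
    "imu_orientation": [0.0, 0.05, 0.0],
    "root_velocity": [0.4, 0.0, 0.0],  # not observable on hardware
    "contact_forces": [210.0, 195.0],  # not observable on hardware
}
actor_obs, critic_obs = split_observations(sim_state)
```

Because only the actor is deployed, the privileged information improves training without ever being required at run time – which is what makes the policy usable on the real robot's noisy, delayed sensors.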

The current control policy relies on starting in a pre-defined orientation to execute the motion. To improve deployment robustness, we’re training the robot to execute motions independently of its orientation, so that they remain stable from any direction. My colleagues will soon publish an article on our Sim2Real efforts that explores this progress in more depth.

Bringing whole body control to life

The vision we’ve laid out here is seemingly simple: to create AI-driven humanoid and other intelligent robots capable of moving through the world with the same confidence, balance and grace as a human being. A robot that can walk, climb, lift and interact safely alongside people – not just in labs, but in warehouses, hospitals, homes and beyond.

But as we’ve seen, this seemingly simple idea is incredibly complex to put into action. Even so, we’re already seeing exciting progress in making this vision a reality as we explore the cutting edge of whole body control. One of our most striking milestones has been stair-climbing – a task that, while effortless for humans, is notoriously difficult for robots. Successful climbing requires dynamic balance, step-height adaptation and controlled landing forces, meaning it’s not a skill that is widely available in robotics (yet). While our stair-climbing policy does not yet include perception, relying purely on learned control, it still marks a major leap toward robust, dynamic locomotion.

Adding stair climbing to our existing walking policy was an interesting challenge, and the results were broadly what we would expect at this stage. Because the policy lacks perception, the robot trips over the first step, then adjusts and begins to climb, with further successes seen in simulation. Even so, the real-world deployment gap was bigger for stairs than for simple walking: the robot manages to climb, but it is not graceful when it falls and recovery is more erratic than in simulation. Despite these limitations, the policy is already robust to varying stair heights and remains relatively stable while climbing. This gives us a solid foundation to build on, and our next focus is improving how the policy determines foot placement on the stairs by integrating a perception component.

We’ve also advanced our training pipeline with goal-conditioned reward learning, inspired by the technique’s original paper. This enables robots not just to imitate motion, but to adapt it, such as reaching precisely toward a moving object. While training such multi-objective behaviours remains challenging, this marks a crucial bridge between motion imitation and task-driven autonomy.
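In a goal-conditioned setup, the goal (for example, a moving object's position) becomes part of the observation, and a reward term decays with the distance between the end effector and that goal. The functional form and scale below are illustrative assumptions, not our actual reward.

```python
import math

def goal_conditioned_reward(hand_pos, goal_pos, alpha=5.0):
    """Goal-conditioned reaching reward sketch: exponential decay with
    the hand-to-goal distance, so reward is maximal (1.0) exactly at
    the goal. `alpha` sets how sharply reward falls off."""
    return math.exp(-alpha * math.dist(hand_pos, goal_pos))

r_hit = goal_conditioned_reward([0.3, 0.1, 0.9], [0.3, 0.1, 0.9])
r_far = goal_conditioned_reward([0.0, 0.0, 0.0], [0.3, 0.1, 0.9])
```

Because the goal can move between timesteps, the same trained policy adapts its motion rather than replaying a fixed imitation clip – the bridge from imitation to task-driven behaviour described above.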

Supporting these breakthroughs is a comprehensive training and deployment pipeline of our design that forms a foundation for expressive, physically realistic and dynamically stable motion on real humanoid hardware.

The next steps towards achieving truly physical AI

In my view, our next steps will be focused on loco-manipulation and optimal path tracking. This will heavily involve whole body control with object interaction and fine manipulation, such as picking up something off the ground – a topic my colleague Geet Jose will be discussing in more depth. Similarly, our continued advances in human-robot interaction will be vital to form a foundation of understanding between robots and humans.

Every advancement we make in whole body control brings us closer to a world where robots don’t just function among us – they move with us, collaborate with us and extend the reach of what’s physically possible. Join us on our journey as we take physical AI from deep tech to real world impact – I for one am so excited to help define what comes next.

Expert authors

Senior AI Engineer

Veda is a Senior AI Engineer at CC. With experience in reinforcement learning, large language models, and computer vision, Veda specialises in developing and deploying advanced AI solutions across robotics, healthcare and automation.
