Robust Visuomotor Control for Humanoid Loco-Manipulation Using Hybrid Reinforcement Learning

Bibliographic Details
Main Authors: Chenzheng Wang, Qiang Huang, Xuechao Chen, Zeyu Zhang, Jing Shi
Format: Article
Language: English
Published: MDPI AG, 2025-07-01
Series: Biomimetics
Subjects:
Online Access: https://www.mdpi.com/2313-7673/10/7/469
Description
Summary: Loco-manipulation tasks performed by humanoid robots have great practical value in many scenarios. While reinforcement learning (RL) has become a powerful tool for versatile and robust whole-body humanoid control, visuomotor control for loco-manipulation with RL remains a major challenge due to its high dimensionality and long-horizon exploration issues. In this paper, we propose a loco-manipulation control framework for humanoid robots that applies model-free RL on top of model-based control in the robot's task space. It implements a visuomotor policy with depth-image input and uses mid-way initialization and prioritized experience sampling to accelerate policy convergence. The proposed method is validated on two typical loco-manipulation tasks, load carrying and door opening, achieving an overall success rate of 83%; the framework automatically adjusts the robot's motion in reaction to changes in the environment.
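The abstract mentions prioritized experience sampling as one of the techniques used to accelerate policy convergence, but the record does not include the paper's implementation details. As background, a minimal sketch of the standard proportional prioritized replay idea (sample transitions with probability proportional to a priority, typically derived from the TD error) might look like the following; the class and method names are illustrative assumptions, not the authors' code.

```python
import random


class PrioritizedReplayBuffer:
    """Illustrative proportional prioritized replay buffer (not the paper's implementation)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # exponent controlling how strongly priorities skew sampling
        self.data = []              # stored transitions
        self.priorities = []        # one priority per stored transition
        self.pos = 0                # next write position (ring buffer)

    def add(self, transition, priority=1.0):
        # New transitions enter with a given (usually high) priority so they are sampled soon.
        p = priority ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Sample indices with probability proportional to stored priorities.
        total = sum(self.priorities)
        weights = [p / total for p in self.priorities]
        indices = random.choices(range(len(self.data)), weights=weights, k=batch_size)
        return indices, [self.data[i] for i in indices]

    def update_priorities(self, indices, new_priorities):
        # Typically called with |TD error| + epsilon after a learning step.
        for i, p in zip(indices, new_priorities):
            self.priorities[i] = p ** self.alpha
```

In a full RL loop, `update_priorities` would be called after each gradient step so that transitions with large TD errors are revisited more often, which is the convergence-acceleration effect the abstract alludes to.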
ISSN:2313-7673