Human-like Dexterous Grasping Through Reinforcement Learning and Multimodal Perception

Bibliographic Details
Main Authors: Wen Qi, Haoyu Fan, Cankun Zheng, Hang Su, Samer Alfayad
Format: Article
Language: English
Published: MDPI AG 2025-03-01
Series: Biomimetics
Online Access: https://www.mdpi.com/2313-7673/10/3/186
Description
Summary: Dexterous robotic grasping with multifingered hands remains a critical challenge in non-visual environments, where diverse object geometries and material properties demand adaptive force modulation and tactile-aware manipulation. To address this, we propose the Reinforcement Learning-Based Multimodal Perception (RLMP) framework, which integrates human-like grasping intuition through operator-worn gloves with tactile-guided reinforcement learning. The framework’s key innovation lies in its Tactile-Driven DCNN architecture—a lightweight convolutional network achieving 98.5% object recognition accuracy using spatiotemporal pressure patterns—coupled with an RL policy refinement mechanism that dynamically correlates finger kinematics with real-time tactile feedback. Experimental results demonstrate reliable grasping performance across deformable and rigid objects while maintaining force precision critical for fragile targets. By bridging human teleoperation with autonomous tactile adaptation, RLMP eliminates dependency on visual input and predefined object models, establishing a new paradigm for robotic dexterity in occlusion-rich scenarios.
ISSN: 2313-7673
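
The summary describes a lightweight convolutional network that recognizes objects from spatiotemporal tactile pressure patterns. The sketch below is only a rough, non-authoritative illustration of how such a tactile classifier could be organized, not the authors' implementation: the PyTorch framing, the 16x16 taxel grid, the 8-frame temporal window, the layer widths, the 10-class output, and the TactileDCNN name are all assumptions for the sake of the example.

# Illustrative sketch only; sizes and names are hypothetical, not from the paper.
import torch
import torch.nn as nn

class TactileDCNN(nn.Module):
    def __init__(self, frames: int = 8, num_classes: int = 10):
        super().__init__()
        # Treat the temporal window of pressure frames as input channels,
        # so one sample is a (frames, H, W) stack over the taxel grid.
        self.features = nn.Sequential(
            nn.Conv2d(frames, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),             # 16x16 taxels -> 8x8
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),     # global pooling keeps the classifier head small
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)  # (batch, 64)
        return self.classifier(h)        # class logits

if __name__ == "__main__":
    # A batch of 4 hypothetical tactile sequences: 8 frames over a 16x16 taxel grid.
    model = TactileDCNN()
    logits = model(torch.randn(4, 8, 16, 16))
    print(logits.shape)  # torch.Size([4, 10])

Stacking the temporal window into the channel dimension and relying on global pooling keeps the parameter count small, which is one plausible reading of "lightweight" in the summary; the actual RLMP architecture and its RL policy refinement are detailed in the article itself.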