A Powered Prosthetic Hand With Vision System for Enhancing the Anthropopathic Grasp
The anthropomorphic grasping capability of prosthetic hands is critical for enhancing user experience and functional efficiency. Existing prosthetic hands relying on brain-computer interfaces (BCI) and electromyography (EMG) face limitations in achieving natural grasping due to insufficient gesture adaptability and intent recognition. While vision systems enhance object perception, they lack dynamic human-like gesture control during grasping. To address these challenges, we propose a vision-powered prosthetic hand system that integrates two innovations. Spatial Geometry-based Gesture Mapping (SG-GM) dynamically models finger joint angles as polynomial functions of hand-object distance, derived from geometric features of human grasping sequences. These functions enable continuous anthropomorphic gesture transitions, mimicking natural hand movements. Motion Trajectory Regression-based Grasping Intent Estimation (MTR-GIE) predicts user intent in multi-object environments by regressing wrist trajectories and spatially segmenting candidate objects. Experiments with eight daily objects demonstrated high anthropomorphism (similarity coefficient R² = 0.911, root mean squared error RMSE = 2.47°), rapid execution (3.07 ± 0.41 s), and robust success rates (95.43% single-object; 88.75% multi-object). The MTR-GIE achieved 94.35% intent estimation accuracy under varying object spacing. This work pioneers vision-driven dynamic gesture synthesis for prosthetics, eliminating dependency on invasive sensors and advancing real-world usability.
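The SG-GM component described in the abstract maps hand-object distance to finger joint angles through per-joint polynomial functions fitted from recorded human grasping sequences. The sketch below is a minimal illustration of that idea, not the paper's implementation; the function names, polynomial degree, and synthetic data are assumptions made for illustration.

```python
# Illustrative sketch only: per-joint polynomial mapping from hand-object
# distance to joint angle, in the spirit of SG-GM. Names, the polynomial
# degree, and the toy data are assumptions, not the paper's implementation.
import numpy as np

def fit_gesture_mapping(distances, joint_angles, degree=3):
    """Fit one polynomial per joint so that angle_j ~ poly_j(distance).

    distances:    (N,) hand-object distances sampled along recorded grasps
    joint_angles: (N, J) joint angles (degrees) recorded at those distances
    Returns a list of J coefficient arrays (highest power first).
    """
    distances = np.asarray(distances, dtype=float)
    joint_angles = np.asarray(joint_angles, dtype=float)
    return [np.polyfit(distances, joint_angles[:, j], degree)
            for j in range(joint_angles.shape[1])]

def gesture_at_distance(coeffs_per_joint, d):
    """Evaluate the fitted mapping at the current hand-object distance d."""
    return np.array([np.polyval(c, d) for c in coeffs_per_joint])

# Toy example: five joints closing as the hand approaches the object.
if __name__ == "__main__":
    d = np.linspace(0.30, 0.02, 50)                        # metres
    angles = np.stack([90.0 * (1 - d / 0.30)] * 5, axis=1)  # 5 joints, degrees
    mapping = fit_gesture_mapping(d, angles, degree=3)
    print(gesture_at_distance(mapping, 0.10))              # gesture at 10 cm
```

Evaluating the fitted polynomials at each new distance measurement yields a continuous gesture trajectory as the hand approaches, which is the behavior the abstract attributes to SG-GM.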
| Main Authors: | Yansong Xu, Xiaohui Wang, Junlin Li, Xiaoqian Zhang, Feng Li, Qing Gao, Chenglong Fu, Yuquan Leng |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Transactions on Neural Systems and Rehabilitation Engineering |
| Subjects: | Computer vision; gesture modeling; full-automatic control; prosthetic hand system |
| Online Access: | https://ieeexplore.ieee.org/document/10988884/ |
| author | Yansong Xu; Xiaohui Wang; Junlin Li; Xiaoqian Zhang; Feng Li; Qing Gao; Chenglong Fu; Yuquan Leng |
|---|---|
| collection | DOAJ |
| description | The anthropomorphic grasping capability of prosthetic hands is critical for enhancing user experience and functional efficiency. Existing prosthetic hands relying on brain-computer interfaces (BCI) and electromyography (EMG) face limitations in achieving natural grasping due to insufficient gesture adaptability and intent recognition. While vision systems enhance object perception, they lack dynamic human-like gesture control during grasping. To address these challenges, we propose a vision-powered prosthetic hand system that integrates two innovations. Spatial Geometry-based Gesture Mapping (SG-GM) dynamically models finger joint angles as polynomial functions of hand-object distance, derived from geometric features of human grasping sequences. These functions enable continuous anthropomorphic gesture transitions, mimicking natural hand movements. Motion Trajectory Regression-based Grasping Intent Estimation (MTR-GIE) predicts user intent in multi-object environments by regressing wrist trajectories and spatially segmenting candidate objects. Experiments with eight daily objects demonstrated high anthropomorphism (similarity coefficient R² = 0.911, root mean squared error RMSE = 2.47°), rapid execution (3.07 ± 0.41 s), and robust success rates (95.43% single-object; 88.75% multi-object). The MTR-GIE achieved 94.35% intent estimation accuracy under varying object spacing. This work pioneers vision-driven dynamic gesture synthesis for prosthetics, eliminating dependency on invasive sensors and advancing real-world usability. |
| format | Article |
| id | doaj-art-fef86b7718274230ba8b282dcbc69dfd |
| institution | OA Journals |
| issn | 1534-4320; 1558-0210 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Transactions on Neural Systems and Rehabilitation Engineering |
| doi | 10.1109/TNSRE.2025.3567392 |
| citation | IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 33, pp. 1827-1840, 2025-01-01 |
| author affiliations | Yansong Xu (ORCID 0000-0003-2106-9827), Xiaohui Wang (ORCID 0009-0005-9055-4799), Junlin Li (ORCID 0000-0003-0456-3778), Xiaoqian Zhang (ORCID 0009-0003-8026-3141), and Feng Li (ORCID 0000-0002-9677-0207): State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China. Qing Gao (ORCID 0000-0002-5395-6175): School of Electronics and Communication Engineering, Sun Yat-sen University, Shenzhen, China. Chenglong Fu (ORCID 0000-0002-8955-5429): Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China. Yuquan Leng (ORCID 0000-0003-4063-4545): School of Biomedical Engineering and the State Key Laboratory of Robotics and Systems, Harbin Institute of Technology at Shenzhen, Shenzhen, China. |
| title | A Powered Prosthetic Hand With Vision System for Enhancing the Anthropopathic Grasp |
| topic | Computer vision; gesture modeling; full-automatic control; prosthetic hand system |
| url | https://ieeexplore.ieee.org/document/10988884/ |
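The description field above also outlines MTR-GIE, which estimates which object the user intends to grasp by regressing the wrist trajectory and spatially segmenting candidate objects. Below is a minimal sketch of that idea under simplifying assumptions (a per-axis linear fit of wrist position over time, with the target chosen as the object nearest to the extrapolated motion ray); the names and the regression model are illustrative, not taken from the paper.

```python
# Illustrative sketch only: regress recent wrist positions, extrapolate the
# motion direction, and pick the candidate object closest to the predicted
# path. All names and the linear-regression choice are assumptions.
import numpy as np

def predict_wrist_direction(wrist_positions):
    """Fit each coordinate of the wrist position against time (least squares)
    and return the latest position plus the estimated unit motion direction."""
    wrist_positions = np.asarray(wrist_positions, dtype=float)   # (T, 3)
    t = np.arange(len(wrist_positions), dtype=float)
    # One linear fit per axis: p_axis(t) ~ a*t + b; the slopes give a velocity.
    coeffs = [np.polyfit(t, wrist_positions[:, k], 1) for k in range(3)]
    velocity = np.array([c[0] for c in coeffs])
    origin = wrist_positions[-1]
    return origin, velocity / (np.linalg.norm(velocity) + 1e-9)

def estimate_target(wrist_positions, object_centroids):
    """Return the index of the candidate object nearest to the extrapolated
    wrist motion ray (a simple stand-in for spatial segmentation)."""
    origin, direction = predict_wrist_direction(wrist_positions)
    objects = np.asarray(object_centroids, dtype=float)          # (M, 3)
    rel = objects - origin
    along = rel @ direction                                      # projection on ray
    perp = np.linalg.norm(rel - np.outer(along, direction), axis=1)
    perp[along < 0] = np.inf    # ignore objects behind the motion direction
    return int(np.argmin(perp))

if __name__ == "__main__":
    trajectory = [[0.00, 0.00, 0.0], [0.05, 0.00, 0.0], [0.10, 0.01, 0.0]]
    candidates = [[0.40, 0.20, 0.0], [0.40, 0.02, 0.0]]
    print(estimate_target(trajectory, candidates))   # -> 1 (object in the path)
```

In a real system this estimate would be recomputed on streaming wrist poses from the vision pipeline, so the predicted target can be revised as the hand approaches and object spacing changes.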