Robot Closed-Loop Grasping Based on Deep Visual Servoing Feature Network
Robot visual servoing for grasping has long been challenging to execute in complex visual environments because of issues with efficient feature extraction. This paper proposes a novel visual servoing grasping approach based on the Deep Visual Servoing Feature Network (DVSFN) to tackle this issue. By building an effective single-stage, multi-dimensional feature extractor, the approach makes it feasible to extract scale-invariant point features and target bounding boxes in real time. The DVSFN is then integrated into a Levenberg–Marquardt-based image-based visual servoing (LM-IBVS) controller, creating a mapping between the robot's joint space and image features. The robot is then guided in positioning and grasping by converting the difference between the expected and current features into the corresponding robot joint velocities. Experimental results demonstrate that the proposed method achieves a mean average precision (mAP) of 0.80 and 0.87 for detecting target bounding boxes and point features, respectively, in scenarios with significant lighting variations and occlusions. Under low-light and partial-occlusion conditions, the method achieves an average grasping success rate of approximately 80%.
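For context, a Levenberg–Marquardt-damped IBVS update of the kind the abstract describes conventionally takes the form q̇ = −λ(JᵀJ + μI)⁻¹Jᵀ(s − s*), where s are the current image features, s* the desired ones, and J the Jacobian relating joint velocities to image-feature velocities. The following is a minimal illustrative sketch of one such control step; the function name, gain, and damping values are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def lm_ibvs_step(J, s, s_star, gain=0.5, mu=1e-2):
    """One damped (Levenberg-Marquardt) IBVS velocity update (illustrative).

    J      : (2k, n) Jacobian relating joint velocities to image-feature
             velocities (image interaction matrix composed with the robot
             Jacobian); k is the number of tracked point features.
    s      : (2k,) currently observed image features.
    s_star : (2k,) desired image features at the grasp pose.
    Returns the commanded joint velocities, shape (n,).
    """
    e = s - s_star                 # image-feature error
    H = J.T @ J                    # Gauss-Newton approximation of the Hessian
    # LM damping (mu * I) keeps the solve well-conditioned near singular
    # configurations, unlike a plain pseudo-inverse.
    return -gain * np.linalg.solve(H + mu * np.eye(H.shape[0]), J.T @ e)

# Example: 4 tracked points (8 feature coordinates), 6 robot joints.
rng = np.random.default_rng(0)
J = rng.standard_normal((8, 6))
s, s_star = rng.standard_normal(8), np.zeros(8)
q_dot = lm_ibvs_step(J, s, s_star)   # shape (6,)
```

Iterating this step in closed loop drives the feature error toward zero; the damping term μ is what distinguishes the LM variant from the classical pseudo-inverse IBVS law.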
Main Authors: | Junqi Luo, Zhen Zhang, Yuangan Wang, Ruiyang Feng |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Actuators |
ISSN: | 2076-0825 |
DOI: | 10.3390/act14010025 |
Author Affiliations: | School of Electric and Information Engineering, Beibu Gulf University, Qinzhou 535000, China (Junqi Luo, Yuangan Wang, Ruiyang Feng); School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin 541000, China (Zhen Zhang) |
Subjects: | robot grasping; visual servoing; object detection; complex visual environments |
Online Access: | https://www.mdpi.com/2076-0825/14/1/25 |