Interactive Prompt‐Guided Robotic Grasping for Arbitrary Objects Based on Promptable Segment Anything Model and Force‐Closure Analysis


Bibliographic Details
Main Authors: Yan Liu, Yaxin Liu, Ruiqing Han, Kai Zheng, Yufeng Yao, Ming Zhong
Format: Article
Language: English
Published: Wiley 2025-03-01
Series: Advanced Intelligent Systems
Subjects:
Online Access: https://doi.org/10.1002/aisy.202400404
Description
Summary: Grasp generation methods based on force-closure analysis can calculate optimal grasps for objects from their appearance. However, robots' limited visual perception makes it difficult for them to directly detect the complete appearance of objects, and building predefined models is a costly procedure. These constraints limit the application of force-closure analysis in the real world. To address this, this article proposes an interactive robotic grasping method based on the promptable Segment Anything Model and force-closure analysis. A human operator can mark a prompt on any object using a laser pointer. The robot then extracts the edge of the marked object and calculates the optimal grasp from that edge. To validate feasibility and generalizability, the grasp generation method is tested on the Cornell and Jacquard datasets, and a novel benchmark set of 36 diverse objects is constructed for real-world experiments. Furthermore, the contribution of each step is demonstrated through ablation experiments, and the proposed method is tested in occlusion scenarios. Project code and data are available at https://github.com/TonyYounger-Eg/Anything_Grasping.
ISSN:2640-4567
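
The abstract describes computing optimal grasps from an object's edge via force-closure analysis. The article's exact formulation is not given in this record, but the standard two-finger antipodal force-closure test over edge contacts can be sketched as follows; the function name, the friction coefficient, and the example contact points are illustrative assumptions, not details from the paper.

```python
import numpy as np

def is_force_closure(p1, n1, p2, n2, mu=0.4):
    """Standard two-finger antipodal force-closure test (sketch).

    p1, p2: 2D contact points sampled on the object edge.
    n1, n2: inward-pointing unit contact normals at those points.
    mu: Coulomb friction coefficient (assumed value).

    The grasp is force-closure iff the line joining the two contacts
    lies inside both friction cones, i.e. the angle between that line
    and each inward normal is at most atan(mu).
    """
    alpha = np.arctan(mu)  # friction-cone half-angle
    d = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    d = d / np.linalg.norm(d)  # unit vector from contact 1 to contact 2
    ang1 = np.arccos(np.clip(np.dot(d, n1), -1.0, 1.0))
    ang2 = np.arccos(np.clip(np.dot(-d, n2), -1.0, 1.0))
    return bool(ang1 <= alpha and ang2 <= alpha)

# Diametrically opposite contacts on a unit circle (normals inward)
# form a valid antipodal grasp; contacts 90 degrees apart do not.
print(is_force_closure((1, 0), (-1, 0), (-1, 0), (1, 0)))  # True
print(is_force_closure((1, 0), (-1, 0), (0, 1), (0, -1)))  # False
```

A grasp planner in this spirit would score all contact pairs passing this test (e.g. by grasp width or robustness margin) and execute the best one; the paper's method additionally obtains the edge itself from a SAM segmentation prompted by the laser-pointer mark.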