VLA-Grasp: a vision-language-action modeling with cross-modality fusion for task-oriented grasping
Abstract

Task-oriented grasping (TOG) aims to predict the appropriate pose for grasping based on a specific task. While recent approaches have incorporated semantic knowledge into TOG models to enable robots to understand linguistic commands, they lack the ability to leverage relevant information fr...
| Main Authors: | Jianwei Zhu, Xueying Sun, Qiang Zhang, Mingmin Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Springer, 2025-05-01 |
| Series: | Complex & Intelligent Systems |
| Online Access: | https://doi.org/10.1007/s40747-025-01893-x |
Similar Items

- Command-driven semantic robotic grasping towards user-specified tasks
  by: Qing Lyu, et al. Published: (2025-06-01)
- SoftGrasp: Adaptive grasping for dexterous hand based on multimodal imitation learning
  by: Yihong Li, et al. Published: (2025-06-01)
- Low-Damage Grasp Method for Plug Seedlings Based on Machine Vision and Deep Learning
  by: Fengwei Yuan, et al. Published: (2025-06-01)
- Improving robotic grasping accuracy through oriented bounding box detection with YOLOv11-OBB
  by: Vo Duy Cong, et al. Published: (2025-07-01)
- GraspLDM: Generative 6-DoF Grasp Synthesis Using Latent Diffusion Models
  by: Kuldeep R. Barad, et al. Published: (2024-01-01)