Foundation models assist in human–robot collaboration assembly
Abstract Human–robot collaboration (HRC) is a novel manufacturing paradigm designed to fully leverage the advantages of humans and robots, accomplishing customized manufacturing tasks efficiently and flexibly. However, existing HRC systems lack transfer and generalization capability in environment perception and task reasoning. These limitations manifest in two ways: (1) current methods rely on specialized models for scene perception and require retraining when facing unseen objects; (2) current methods only address predefined tasks and cannot support reasoning over undefined tasks. To overcome these limitations, this paper proposes a novel HRC approach based on Foundation Models (FMs), including Large Language Models (LLMs) and Vision Foundation Models (VFMs). Specifically, an LLM-based task reasoning method is introduced, using prompt learning to transfer LLMs into the domain of HRC tasks and support undefined task reasoning. A VFM-based scene semantic perception method is proposed, integrating various VFMs to achieve scene perception without training. Finally, an FM-based HRC system comprising perception, reasoning, and execution modules is developed for more flexible and generalized HRC. The superior performance of FMs in perception and reasoning is demonstrated by extensive experiments. Furthermore, the feasibility and effectiveness of the FM-based HRC system are validated through a part assembly case involving a satellite component model.
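The abstract's LLM-based task reasoning rests on prompt learning: a few in-context examples map free-form assembly instructions to a fixed robot action vocabulary, so undefined tasks can be decomposed without retraining. Below is a minimal sketch of that idea, assuming an OpenAI-style chat API; the model name, action vocabulary, and few-shot example are illustrative assumptions, not the paper's actual prompt.

```python
# Minimal sketch of prompt-learning-based task reasoning for HRC.
# Assumption: an OpenAI-style chat-completions API; the few-shot example,
# action vocabulary, and model name are illustrative, not the paper's prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = """You decompose assembly instructions into robot actions.
Allowed actions: pick(object), place(object, location), hold(object), handover(object).

Instruction: attach the solar panel to the satellite body
Actions: pick(solar_panel); place(solar_panel, satellite_body)

Instruction: {instruction}
Actions:"""

def reason_task(instruction: str) -> list[str]:
    """Map a possibly undefined task instruction to a robot action sequence."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[{"role": "user", "content": FEW_SHOT.format(instruction=instruction)}],
        temperature=0,
    )
    # Parse the semicolon-separated action list returned by the model.
    return [a.strip() for a in resp.choices[0].message.content.split(";") if a.strip()]

print(reason_task("mount the antenna bracket onto the top panel"))
```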
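Similarly, the VFM-based perception method integrates off-the-shelf vision foundation models (for example, a text-promptable detector combined with a box-promptable segmenter) so that new parts are perceived zero-shot, without training. A minimal sketch of such a composition follows; the `detector` and `segmenter` callables are hypothetical stand-ins, not the specific VFMs used in the paper.

```python
# Minimal sketch of training-free scene perception by composing VFMs.
# Assumption: `detector` is a text-promptable open-vocabulary detector and
# `segmenter` a box-promptable segmenter; the dummies below are stand-ins.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SceneObject:
    label: str
    box: tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels
    mask: object                            # pixel mask used for grasp planning

def perceive_scene(image, part_names: list[str],
                   detector: Callable, segmenter: Callable) -> list[SceneObject]:
    """Detect named parts zero-shot, then refine each box into a pixel mask."""
    objects = []
    for label, box in detector(image, part_names):  # no retraining for new parts
        objects.append(SceneObject(label, box, segmenter(image, box)))
    return objects

# Dummy usage with stand-in callables (replace with real VFM wrappers).
fake_detector = lambda img, names: [(names[0], (10.0, 10.0, 80.0, 60.0))]
fake_segmenter = lambda img, box: "mask-placeholder"
print(perceive_scene("frame.png", ["solar_panel"], fake_detector, fake_segmenter))
```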
| Main Authors: | Yuchen Ji, Zequn Zhang, Dunbing Tang, Yi Zheng, Changchun Liu, Zhen Zhao, Xinghui Li |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2024-10-01 |
| Series: | Scientific Reports |
| Subjects: | Human–robot collaboration; Foundation models; Large language models; Vision foundation models; Intelligent manufacture |
| Online Access: | https://doi.org/10.1038/s41598-024-75715-4 |
| _version_ | 1850204493613891584 |
|---|---|
| author | Yuchen Ji; Zequn Zhang; Dunbing Tang; Yi Zheng; Changchun Liu; Zhen Zhao; Xinghui Li |
| author_facet | Yuchen Ji; Zequn Zhang; Dunbing Tang; Yi Zheng; Changchun Liu; Zhen Zhao; Xinghui Li |
| author_sort | Yuchen Ji |
| collection | DOAJ |
| description | Abstract Human–robot collaboration (HRC) is a novel manufacturing paradigm designed to fully leverage the advantages of humans and robots, accomplishing customized manufacturing tasks efficiently and flexibly. However, existing HRC systems lack transfer and generalization capability in environment perception and task reasoning. These limitations manifest in two ways: (1) current methods rely on specialized models for scene perception and require retraining when facing unseen objects; (2) current methods only address predefined tasks and cannot support reasoning over undefined tasks. To overcome these limitations, this paper proposes a novel HRC approach based on Foundation Models (FMs), including Large Language Models (LLMs) and Vision Foundation Models (VFMs). Specifically, an LLM-based task reasoning method is introduced, using prompt learning to transfer LLMs into the domain of HRC tasks and support undefined task reasoning. A VFM-based scene semantic perception method is proposed, integrating various VFMs to achieve scene perception without training. Finally, an FM-based HRC system comprising perception, reasoning, and execution modules is developed for more flexible and generalized HRC. The superior performance of FMs in perception and reasoning is demonstrated by extensive experiments. Furthermore, the feasibility and effectiveness of the FM-based HRC system are validated through a part assembly case involving a satellite component model. |
| format | Article |
| id | doaj-art-0b346d482d044dd28fcb7a2a4cc84fe3 |
| institution | OA Journals |
| issn | 2045-2322 |
| language | English |
| publishDate | 2024-10-01 |
| publisher | Nature Portfolio |
| record_format | Article |
| series | Scientific Reports |
| spelling | doaj-art-0b346d482d044dd28fcb7a2a4cc84fe3; 2025-08-20T02:11:17Z; eng; Nature Portfolio; Scientific Reports; 2045-2322; 2024-10-01; vol. 14, no. 1, pp. 1–21; 10.1038/s41598-024-75715-4; Foundation models assist in human–robot collaboration assembly; Yuchen Ji, Zequn Zhang, Dunbing Tang, Changchun Liu, Zhen Zhao (College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics); Yi Zheng (College of Biophotonics, South China Normal University); Xinghui Li (Shenzhen International Graduate School, Tsinghua University); https://doi.org/10.1038/s41598-024-75715-4; Human–robot collaboration; Foundation models; Large language models; Vision foundation models; Intelligent manufacture |
| spellingShingle | Yuchen Ji; Zequn Zhang; Dunbing Tang; Yi Zheng; Changchun Liu; Zhen Zhao; Xinghui Li; Foundation models assist in human–robot collaboration assembly; Scientific Reports; Human–robot collaboration; Foundation models; Large language models; Vision foundation models; Intelligent manufacture |
| title | Foundation models assist in human–robot collaboration assembly |
| title_full | Foundation models assist in human–robot collaboration assembly |
| title_fullStr | Foundation models assist in human–robot collaboration assembly |
| title_full_unstemmed | Foundation models assist in human–robot collaboration assembly |
| title_short | Foundation models assist in human–robot collaboration assembly |
| title_sort | foundation models assist in human robot collaboration assembly |
| topic | Human–robot collaboration; Foundation models; Large language models; Vision foundation models; Intelligent manufacture |
| url | https://doi.org/10.1038/s41598-024-75715-4 |
| work_keys_str_mv | AT yuchenji foundationmodelsassistinhumanrobotcollaborationassembly AT zequnzhang foundationmodelsassistinhumanrobotcollaborationassembly AT dunbingtang foundationmodelsassistinhumanrobotcollaborationassembly AT yizheng foundationmodelsassistinhumanrobotcollaborationassembly AT changchunliu foundationmodelsassistinhumanrobotcollaborationassembly AT zhenzhao foundationmodelsassistinhumanrobotcollaborationassembly AT xinghuili foundationmodelsassistinhumanrobotcollaborationassembly |