Intention Recognition of Space Noncooperative Targets Using Large Language Models
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | American Association for the Advancement of Science (AAAS), 2025-01-01 |
| Series: | Space: Science & Technology |
| Online Access: | https://spj.science.org/doi/10.34133/space.0271 |
| Summary: | This study proposes a novel method for intention recognition of space noncooperative targets using large language models (LLMs). Traditional methods rely on motion data to assess orbital motion intentions but cannot infer operation and task intentions from multi-source information like images. LLMs, with their logical reasoning capabilities, can address this limitation. The intentions are categorized into 3 types and 23 subtypes based on multi-source information and their characteristics: motion intentions (e.g., “hovering”, “flyby”, and “rendezvous”), operation intentions (e.g., “docking”, “refueling”, and “repair”), and task intentions (e.g., “detection”, “surveillance”, and “attack”). The proposed method constructs LLMs for spacecraft intention recognition, involving prompt classification, template design, and test sample generation. The use of prompt tuning V2 (P-tuning V2) and low-rank adaptation (LoRA) fine-tuning enhances the models’ performance. A dataset of 50,688 nominal samples and 8,448 perturbed samples was created through computer simulation based on expert knowledge, focusing on intention recognition of approaching targets in space station on-orbit operation and surveillance scenarios. The models were tested under 3 prompt conditions: basic, instruction, and chain-of-thought (CoT). The performance of 6 models (ChatGLM2-6B and ChatGLM3-6B base and fine-tuned models) was analyzed. Notably, the LoRA fine-tuned ChatGLM3-6B model on instruction prompts achieved 99.9% accuracy, with improved robustness compared to the base model. This work presents a pioneering application of LLMs for spacecraft intention recognition, offering valuable insights for future research and applications. |
| ISSN: | 2692-7659 |
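The summary describes classifying a noncooperative target's behavior into motion, operation, and task intentions by feeding multi-source observations to an LLM under instruction-style prompts. A minimal sketch of how such an instruction prompt could be assembled is shown below; the subtype lists follow the examples quoted in the abstract, but the template wording and the `build_instruction_prompt` helper are illustrative assumptions, not the paper's actual templates.

```python
# Hypothetical instruction-prompt builder for spacecraft intention
# recognition. The three categories and their example subtypes come from
# the abstract; the prompt phrasing is an assumption for illustration.

INTENTION_TAXONOMY = {
    "motion":    ["hovering", "flyby", "rendezvous"],
    "operation": ["docking", "refueling", "repair"],
    "task":      ["detection", "surveillance", "attack"],
}

def build_instruction_prompt(observation: str) -> str:
    """Assemble an instruction-style prompt that lists the allowed labels
    and appends the multi-source observation to classify."""
    lines = [
        "You are an expert in on-orbit situational awareness.",
        "Classify the noncooperative target's intention.",
        "Choose one subtype from each category below:",
    ]
    for category, subtypes in INTENTION_TAXONOMY.items():
        lines.append(f"- {category} intentions: {', '.join(subtypes)}")
    lines.append(f"Observation: {observation}")
    lines.append("Answer in the form 'category: subtype'.")
    return "\n".join(lines)

# Example usage with a made-up observation string:
prompt = build_instruction_prompt(
    "Target closes to 200 m and station-keeps relative to the station.")
print(prompt)
```

The same builder could feed either a base or a LoRA fine-tuned model; only the prompt text changes between the basic, instruction, and chain-of-thought conditions the study compares.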