From Prompts to Motors: Man-in-the-Middle Attacks on LLM-Enabled Vacuum Robots
The integration of large language models (LLMs) into robotic platforms is transforming human–robot interaction by enabling more natural communication and adaptive task execution. However, this advancement also introduces new security vulnerabilities, particularly in networked environments...
Saved in:
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: |
IEEE
2025-01-01
|
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11108294/ |
| Tags: |
Add Tag
No Tags, Be the first to tag this record!
|
| Summary: | The integration of large language models (LLMs) into robotic platforms is transforming human–robot interaction by enabling more natural communication and adaptive task execution. However, this advancement also introduces new security vulnerabilities, particularly in networked environments. In this study, we present a systematic analysis of man-in-the-middle (MITM) attacks targeting an LLM-enabled vacuum robot. Our research follows a three-phase development process: 1) command-line simulation of LLM–robot interactions, 2) tabletop setup, and 3) implementation of a physical robot using a commercial vacuum platform enhanced with a Raspberry Pi–hosted ChatGPT application programming interface (API) and you only look once (YOLO, v8) object detection. We define a gray-box threat model in which an attacker can intercept, inject, and manipulate JavaScript object notation (JSON)-formatted messages exchanged between the robot and the LLM. We evaluate four attack scenarios, two based on prompt injection and two on output manipulation, across three LLM configurations (ChatGPT-4, ChatGPT-4o mini, and ChatGPT-3.5 Turbo). While prior work on LLM security assumes secure communication channels and overlooks network-level threats, our experimental results demonstrate that a remote attacker can bypass safety protocols, override motor commands, and deliver deceptive feedback to users, ultimately leading to unsafe robot behavior. These findings reveal a critical and underexplored attack surface in LLM-integrated robotic systems and highlight the urgent need for secure-by-design communication architectures. |
|---|---|
| ISSN: | 2169-3536 |