Securing LLM Workloads With NIST AI RMF in the Internet of Robotic Things
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10965643/ |
| Summary: | The Internet of Robotic Things (IoRT) is revolutionizing industries by enabling autonomous, AI-driven robotic systems to perform complex and collaborative tasks, such as precision agriculture, disaster response, and logistics and shipping operations. However, integrating AI into IoRT introduces significant challenges, including security vulnerabilities, adversarial attacks, data integrity risks, and operational disruptions in dynamic and high-stakes environments. This paper addresses these challenges by integrating and enhancing the NIST AI Risk Management Framework (AI RMF) for IoRT systems, providing a structured approach to identify, assess, and mitigate risks specific to IoRT ecosystems. We introduce a novel Large Language Model (LLM)-based approach for translating natural language commands into secure and precise robotic operations, enabling seamless collaboration and enhancing safety and reliability in mission-critical scenarios. Using a flood recovery scenario in precision agriculture, we demonstrate the practical application of these solutions, where swarm robots equipped with AI inference engines collaborate to navigate hazards, locate individuals, assess infrastructure damage, and mitigate risks. A comprehensive threat analysis is presented, mapping identified vulnerabilities to the NIST AI RMF, and tailored security controls are proposed to mitigate these threats effectively. We propose critical enhancements to the framework, including advanced quantitative risk assessment methods, subsystem governance strategies for interconnected IoRT networks, and robust auditing mechanisms to address unique IoRT-specific challenges. This work establishes a robust foundation for aligning AI governance frameworks with the complex and dynamic demands of IoRT systems. By addressing security, operational, and ethical considerations, it fosters secure, efficient, and trustworthy deployment across diverse applications, paving the way for sustainable and impactful IoRT innovations. |
| ISSN: | 2169-3536 |