An Explainable LSTM-Based Intrusion Detection System Optimized by Firefly Algorithm for IoT Networks

Bibliographic Details
Main Authors: Taiwo Blessing Ogunseyi, Gogulakrishan Thiyagarajan
Format: Article
Language: English
Published: MDPI AG 2025-04-01
Series: Sensors
Subjects:
Online Access: https://www.mdpi.com/1424-8220/25/7/2288
Description
Summary: As more IoT devices become connected to the Internet, the attack surface for cybercrime expands, leading to significant security concerns for these devices. Existing intrusion detection systems (IDSs) designed to address these concerns often suffer from high rates of false positives and missed threats due to redundant and irrelevant information in their inputs. Furthermore, recent IDSs that utilize artificial intelligence are often presented as black boxes, offering no explanation of their internal operations. In this study, we develop a solution to these challenges by presenting a deep learning-based model that adapts to new attacks by selecting only the relevant information as inputs and providing transparent internal operations for easy understanding and adoption by cybersecurity personnel. Specifically, we employ a hybrid approach using statistical methods and a metaheuristic algorithm for feature selection to identify the most relevant features and limit the overall feature set while building an LSTM-based model for intrusion detection. To this end, we utilize two publicly available datasets, NF-BoT-IoT-v2 and IoTID20, for training and testing. The results demonstrate an accuracy of 98.42% and 89.54% on the NF-BoT-IoT-v2 and IoTID20 datasets, respectively. The performance of the proposed model is compared with that of other machine learning models and existing state-of-the-art models, demonstrating superior accuracy. To explain the proposed model's predictions and increase trust in its outcomes, we apply two explainable artificial intelligence (XAI) tools, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), providing valuable insights into the model's behavior.
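The summary names a metaheuristic (per the title, the firefly algorithm) as the feature-selection step that trims redundant inputs before the LSTM. As a rough, illustrative sketch only — the paper's exact binary encoding, parameter values, and fitness function are not given in this record, so everything below (the Hamming-distance attractiveness, the mutation rate, the toy fitness) is an assumption — a binary firefly search over feature masks might look like:

```python
import random
import math

def firefly_feature_selection(num_features, fitness, n_fireflies=10,
                              n_iters=30, beta0=1.0, gamma=1.0, alpha=0.25,
                              seed=42):
    """Binary firefly algorithm: each firefly is a 0/1 mask over features.
    Brighter (higher-fitness) fireflies attract dimmer ones; attractiveness
    decays with (normalized) Hamming distance. Illustrative sketch only."""
    rng = random.Random(seed)
    # Initialize a population of random feature masks.
    pop = [[rng.randint(0, 1) for _ in range(num_features)]
           for _ in range(n_fireflies)]
    light = [fitness(f) for f in pop]
    for _ in range(n_iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:  # firefly i moves toward brighter j
                    r = sum(a != b for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r * r / num_features ** 2)
                    for k in range(num_features):
                        # Probabilistically copy the bit from the brighter
                        # firefly, plus a small random flip for exploration.
                        if rng.random() < beta:
                            pop[i][k] = pop[j][k]
                        if rng.random() < alpha / num_features:
                            pop[i][k] ^= 1
                    light[i] = fitness(pop[i])
    best = max(range(n_fireflies), key=lambda i: light[i])
    return pop[best], light[best]

# Toy fitness standing in for a real wrapper objective (e.g. validation
# accuracy minus a penalty on feature count): reward a known "relevant"
# subset, penalize extra selected features.
relevant = {0, 2, 5, 7}
def toy_fitness(mask):
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & relevant) - 0.3 * len(chosen - relevant)

mask, score = firefly_feature_selection(10, toy_fitness)
```

In a wrapper-style setup the fitness would typically train a lightweight classifier on the masked features and score it on held-out data; the selected mask then fixes the input dimension of the downstream LSTM.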
ISSN: 1424-8220