Explainable and cognitive attention evoked learning framework for mitigating the large-scale real time cyber attacks

Bibliographic Details
Main Authors: Ragipani Sowmya, Bhagavan Konduri
Format: Article
Language: English
Published: Springer 2025-07-01
Series: Discover Computing
Subjects:
Online Access: https://doi.org/10.1007/s10791-025-09672-5
Description
Summary: Cyber-attacks have seen explosive growth in today's technologically driven world, significantly impacting critical sectors such as healthcare, finance, and industrial automation systems. With these vulnerabilities increasing daily, addressing them has become a major challenge for researchers. Adding to this complexity, zero-day attacks are penetrating deeply into people's lives and pose a significant threat to mitigation efforts. Modern Intrusion Detection Systems (MIDS), which leverage the strengths of Artificial Intelligence (AI) and Deep Learning (DL), have emerged as promising solutions for detecting various types of attacks. To ensure a highly secure environment, this paper proposes an Explainable Deep Learning Framework (X-DLF) for the classification of diverse cyber-attacks. The proposed system integrates an Attention-Evoked Gated Recurrent and Long Short-Term Memory network (AEGR-LSTM) to mitigate threats posed by malicious activities. The X-DLF not only detects intrusions but also provides interpretability, offering insights into the rationale behind each classification decision. Extensive experiments were conducted on a variety of benchmark datasets, and performance metrics such as specificity, recall, accuracy, F1-score, and precision were computed and compared against those of existing learning models. Furthermore, the flexibility and scalability of the proposed approach were validated using Local Interpretable Model-Agnostic Explanations (LIME), which helped identify the most significant features influencing the model's decisions. Finally, the effectiveness of the proposed X-DLF was demonstrated by comparison with existing deep learning approaches, achieving superior results with 97.5% accuracy, 97.3% precision, 97.1% recall, and 97.5% F1-score.
ISSN: 2948-2992
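The attention mechanism referenced in the abstract can be illustrated with a minimal sketch; this is not the authors' AEGR-LSTM implementation, merely a generic attention-pooling step as commonly applied over recurrent hidden states. A learned scoring vector (here `w`, an assumed name) assigns a relevance score to each time step of a traffic sequence, a softmax turns the scores into weights, and the weighted sum becomes the fixed-size summary passed to the classifier; the weights themselves are what an explainability layer such as LIME can surface as feature importance.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden_states, w):
    # hidden_states: (T, d) outputs of a recurrent layer (GRU/LSTM),
    # one row per time step; w: (d,) learned attention scoring vector.
    scores = hidden_states @ w           # (T,) relevance score per step
    alphas = softmax(scores)             # (T,) attention weights, sum to 1
    context = alphas @ hidden_states     # (d,) weighted sequence summary
    return context, alphas

# Toy sequence of 5 time steps with 8-dimensional hidden states.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
w = rng.normal(size=8)
context, alphas = attention_pool(H, w)
```

The `alphas` vector is the interpretable part: the time steps with the largest weights are the ones the model attended to when producing `context`, which a downstream dense layer would map to attack classes.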