FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection

Insider threats cause greater losses than external attacks, prompting organizations to invest in detection systems. However, several challenges remain: 1) Security and privacy concerns prevent data sharing, making it difficult to train robust models and identify new attacks. 2) The diversity and unique...


Bibliographic Details
Main Authors: Zhi Qiang Wang, Haopeng Wang, Abdulmotaleb El Saddik
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10721229/
author Zhi Qiang Wang
Haopeng Wang
Abdulmotaleb El Saddik
author_facet Zhi Qiang Wang
Haopeng Wang
Abdulmotaleb El Saddik
author_sort Zhi Qiang Wang
collection DOAJ
description Insider threats cause greater losses than external attacks, prompting organizations to invest in detection systems. However, several challenges remain: 1) Security and privacy concerns prevent data sharing, making it difficult to train robust models and identify new attacks. 2) The diversity and uniqueness of organizations require localized models, as a universal solution may be less effective. 3) High resource costs, delays, and data security concerns complicate building effective detection systems. This paper introduces FedITD, a flexible, hierarchical, and federated framework with local real-time detection systems, combining Large Language Models (LLMs), Federated Learning (FL), Parameter-Efficient Tuning (PETuning), and Transfer Learning (TL) for insider threat detection. FedITD uses FL to protect privacy while indirectly integrating client information, and employs PETuning methods (Adapter, BitFit, LoRA) with LLMs (BERT, RoBERTa, XLNet, DistilBERT) to reduce resource use and time delay. FedITD customizes client models and optimizes performance via transfer learning without central data transfer, further enhancing the detection of new attacks. FedITD outperforms other federated learning methods, and its performance is very close to that of the best centrally trained method. Extensive experimental results show FedITD's superior performance, adaptability to varied data, and reduced resource costs, achieving an optimal balance in detection capabilities across source data, unlabeled local data, and global data. Alternative PETuning implementations are also explored in this paper.
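The core idea the abstract describes — clients fine-tune only small parameter-efficient adapters on a frozen pre-trained model, and the federated server aggregates just those adapter weights — can be sketched minimally as below. This is an illustrative toy (all class and function names are hypothetical), not the paper's actual implementation: it uses a single LoRA-style linear layer in place of a full LLM and plain FedAvg over the adapter matrices.

```python
# Minimal sketch: LoRA-style adapters under federated averaging.
# The large frozen weight W and the raw client data never leave the
# client; only the small low-rank matrices A and B are exchanged.
import numpy as np

class LoRALinear:
    """A frozen dense layer plus a trainable low-rank update B @ A."""
    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))  # frozen pre-trained weight
        self.A = np.zeros((rank, d_in))          # trainable adapter (shared)
        self.B = np.zeros((d_out, rank))         # trainable adapter (shared)

    def forward(self, x):
        # Effective weight is W + B @ A; only B @ A changed by training.
        return (self.W + self.B @ self.A) @ x

def fedavg_adapters(clients):
    """Server step: average only the adapter parameters across clients."""
    A = np.mean([c.A for c in clients], axis=0)
    B = np.mean([c.B for c in clients], axis=0)
    for c in clients:
        c.A, c.B = A.copy(), B.copy()

# Toy round: two clients share the same frozen base (same seed) but
# make different local adapter updates before aggregation.
clients = [LoRALinear(8, 8, seed=1), LoRALinear(8, 8, seed=1)]
clients[0].A += 1.0  # stand-in for a local gradient step
clients[1].A -= 1.0
fedavg_adapters(clients)
```

With rank 4 on an 8x8 layer, each client transmits 64 adapter values per round instead of the 64-element base weight plus optimizer state for every parameter; at LLM scale this gap is what makes the approach attractive for resource-constrained federated clients.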
format Article
id doaj-art-306a7d3a0c8e45db94214f8d87309b6d
institution OA Journals
issn 2169-3536
language English
publishDate 2024-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-306a7d3a0c8e45db94214f8d87309b6d
Indexed: 2025-08-20T02:12:49Z
Language: English
Publisher: IEEE
Series: IEEE Access (ISSN 2169-3536)
Published: 2024-01-01, vol. 12, pp. 160396-160417
DOI: 10.1109/ACCESS.2024.3482988 (IEEE Xplore document 10721229)
Title: FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection
Authors:
- Zhi Qiang Wang (https://orcid.org/0009-0007-6547-6076), Multimedia Communications Research Laboratory (MCRLab), School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada
- Haopeng Wang (https://orcid.org/0000-0002-2876-5625), Multimedia Communications Research Laboratory (MCRLab), School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada
- Abdulmotaleb El Saddik (https://orcid.org/0000-0002-7690-8547), Multimedia Communications Research Laboratory (MCRLab), School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON, Canada
Abstract: Insider threats cause greater losses than external attacks, prompting organizations to invest in detection systems. However, several challenges remain: 1) Security and privacy concerns prevent data sharing, making it difficult to train robust models and identify new attacks. 2) The diversity and uniqueness of organizations require localized models, as a universal solution may be less effective. 3) High resource costs, delays, and data security concerns complicate building effective detection systems. This paper introduces FedITD, a flexible, hierarchical, and federated framework with local real-time detection systems, combining Large Language Models (LLMs), Federated Learning (FL), Parameter-Efficient Tuning (PETuning), and Transfer Learning (TL) for insider threat detection. FedITD uses FL to protect privacy while indirectly integrating client information, and employs PETuning methods (Adapter, BitFit, LoRA) with LLMs (BERT, RoBERTa, XLNet, DistilBERT) to reduce resource use and time delay. FedITD customizes client models and optimizes performance via transfer learning without central data transfer, further enhancing the detection of new attacks. FedITD outperforms other federated learning methods, and its performance is very close to that of the best centrally trained method. Extensive experimental results show FedITD's superior performance, adaptability to varied data, and reduced resource costs, achieving an optimal balance in detection capabilities across source data, unlabeled local data, and global data. Alternative PETuning implementations are also explored in this paper.
Online Access: https://ieeexplore.ieee.org/document/10721229/
Subjects: Cybersecurity; insider threat; deep learning; transformer; BERT; RoBERTa
spellingShingle Zhi Qiang Wang
Haopeng Wang
Abdulmotaleb El Saddik
FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection
IEEE Access
Cybersecurity
insider threat
deep learning
transformer
BERT
RoBERTa
title FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection
title_full FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection
title_fullStr FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection
title_full_unstemmed FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection
title_short FedITD: A Federated Parameter-Efficient Tuning With Pre-Trained Large Language Models and Transfer Learning Framework for Insider Threat Detection
title_sort feditd a federated parameter efficient tuning with pre trained large language models and transfer learning framework for insider threat detection
topic Cybersecurity
insider threat
deep learning
transformer
BERT
RoBERTa
url https://ieeexplore.ieee.org/document/10721229/
work_keys_str_mv AT zhiqiangwang feditdafederatedparameterefficienttuningwithpretrainedlargelanguagemodelsandtransferlearningframeworkforinsiderthreatdetection
AT haopengwang feditdafederatedparameterefficienttuningwithpretrainedlargelanguagemodelsandtransferlearningframeworkforinsiderthreatdetection
AT abdulmotalebelsaddik feditdafederatedparameterefficienttuningwithpretrainedlargelanguagemodelsandtransferlearningframeworkforinsiderthreatdetection