Trustworthy AI: Securing Sensitive Data in Large Language Models

Bibliographic Details
Main Authors: Georgios Feretzakis, Vassilios S. Verykios
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: AI
Online Access: https://www.mdpi.com/2673-2688/5/4/134
Description: Large language models (LLMs) have transformed Natural Language Processing (NLP) by enabling robust text generation and understanding. However, their deployment in sensitive domains like healthcare, finance, and legal services raises critical concerns about privacy and data security. This paper proposes a comprehensive framework for embedding trust mechanisms into LLMs to dynamically control the disclosure of sensitive information. The framework integrates three core components: User Trust Profiling, Information Sensitivity Detection, and Adaptive Output Control. By leveraging techniques such as Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), Named Entity Recognition (NER), contextual analysis, and privacy-preserving methods like differential privacy, the system ensures that sensitive information is disclosed appropriately based on the user’s trust level. By focusing on balancing data utility and privacy, the proposed solution offers a novel approach to securely deploying LLMs in high-risk environments. Future work will focus on testing this framework across various domains to evaluate its effectiveness in managing sensitive data while maintaining system efficiency.
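The three components the abstract names — User Trust Profiling, Information Sensitivity Detection, and Adaptive Output Control — can be illustrated with a minimal sketch. This is not the authors' implementation: the roles, trust levels, and regex patterns below are all hypothetical, and the regexes are a toy stand-in for the NER and contextual analysis the paper describes.

```python
import re

# --- User Trust Profiling: a toy RBAC mapping (hypothetical roles/levels) ---
ROLE_TRUST = {"clinician": 3, "researcher": 2, "public": 1}

# --- Information Sensitivity Detection: regex stand-in for NER ---
# Each pattern carries the minimum trust level required to see it unredacted.
SENSITIVE_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), 3, "[SSN]"),     # US SSN-like number
    (re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"), 2, "[EMAIL]"),  # email address
]

def adaptive_output_control(text: str, role: str) -> str:
    """Adaptive Output Control: redact any detected span whose required
    trust level exceeds the trust level of the requesting user's role."""
    level = ROLE_TRUST.get(role, 0)  # unknown roles get the lowest trust
    for pattern, required, placeholder in SENSITIVE_PATTERNS:
        if level < required:
            text = pattern.sub(placeholder, text)
    return text
```

For example, `adaptive_output_control("SSN 123-45-6789, mail a@b.com", "public")` redacts both spans, while the same call with `"clinician"` returns the text unchanged. A production system along the paper's lines would replace the regexes with an NER model, add ABAC attributes alongside the role lookup, and apply differential-privacy noise to aggregate outputs.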
ISSN: 2673-2688
Collection: DOAJ
DOI: 10.3390/ai5040134
Published in: AI, vol. 5, no. 4 (2024), pp. 2773-2800
Author affiliations: Georgios Feretzakis and Vassilios S. Verykios, School of Science and Technology, Hellenic Open University, 26131 Patras, Greece
Subjects: large language models; trust mechanisms; sensitive information; role-based access control; attribute-based access control; data privacy