Trustworthy AI: Securing Sensitive Data in Large Language Models
Large language models (LLMs) have transformed Natural Language Processing (NLP) by enabling robust text generation and understanding. However, their deployment in sensitive domains like healthcare, finance, and legal services raises critical concerns about privacy and data security. This paper propo...
| Main Authors: | Georgios Feretzakis, Vassilios S. Verykios |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2024-12-01 |
| Series: | AI |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2673-2688/5/4/134 |
Similar Items
- Enhancing Healthcare Security: A Unified RBAC and ABAC Risk-Aware Access Control Approach
  by: Hany F. Atlam, et al.
  Published: (2025-06-01)
- Design of Role Based Access Control for Triadic Concept Analysis
  by: WANG Jing yu, et al.
  Published: (2020-04-01)
- Formal Verification for Preventing Misconfigured Access Policies in Kubernetes Clusters
  by: Aditya Sissodiya, et al.
  Published: (2025-01-01)
- An Efficient Fine-Grained Access Control Scheme Based on Policy Protection in SGs
  by: Xiaoqing Guo, et al.
  Published: (2025-01-01)
- Jointly Achieving Smart Homes Security and Privacy through Bidirectional Trust
  by: Osman Abul, et al.
  Published: (2025-04-01)