Meticulous Thought Defender: Fine-Grained Chain-of-Thought (CoT) for Detecting Prompt Injection Attacks of Large Language Models
Large language models (LLMs) have exhibited exceptional capabilities across various natural language processing tasks; however, they remain susceptible to prompt injection attacks, which pose significant security challenges. Traditional detection methods often fail to effectively identify such attac...
| Main Authors: | Lijuan Shi, Yajing Kang, Jie Hu, Xinchi Li, Mingchuan Yang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11053836/ |
Similar Items
- Measuring and Improving the Efficiency of Python Code Generated by LLMs Using CoT Prompting and Fine-Tuning
  by: Ramya Jonnala, et al.
  Published: (2025-01-01)
- Prompt Engineering for Knowledge Creation: Using Chain-of-Thought to Support Students’ Improvable Ideas
  by: Alwyn Vwen Yen Lee, et al.
  Published: (2024-08-01)
- Analyzing Diagnostic Reasoning of Vision–Language Models via Zero-Shot Chain-of-Thought Prompting in Medical Visual Question Answering
  by: Fatema Tuj Johora Faria, et al.
  Published: (2025-07-01)
- From Prompts to Motors: Man-in-the-Middle Attacks on LLM-Enabled Vacuum Robots
  by: Asif Shaikh, et al.
  Published: (2025-01-01)
- Syntactic-Guided Chain of Thought for Iterative Implicit and Explicit Target Detection in Aspect-Based Sentiment Analysis
  by: Mohammad Radi, et al.
  Published: (2025-01-01)