GPT understands, too
Prompting a pretrained language model with natural language patterns has proved effective for natural language understanding (NLU). However, our preliminary study reveals that manual discrete prompts often lead to unstable performance; e.g., changing a single word in the prompt might result in a substantial performance drop.
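The instability described in the abstract can be illustrated with a small cloze-style probe. The sketch below is not from the paper; it assumes the Hugging Face `transformers` library and the `bert-base-cased` checkpoint, and the prompt wordings are hypothetical examples chosen only to show how a one-word change in a manual prompt can shift the model's prediction.

```python
# Minimal sketch of cloze-style discrete prompting for NLU, assuming the
# Hugging Face `transformers` library and the `bert-base-cased` checkpoint.
# The prompts below are illustrative examples, not taken from the paper.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Two manual prompts that differ by a single word ("born" vs. "raised");
# such a change can noticeably shift what the model predicts for the blank.
prompts = [
    "Barack Obama was born in [MASK].",
    "Barack Obama was raised in [MASK].",
]

for prompt in prompts:
    # Take the highest-scoring filler as the model's answer for this prompt.
    top = fill_mask(prompt)[0]
    print(f"{prompt!r} -> {top['token_str']} (score={top['score']:.3f})")
```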
| Main Authors: | Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, Jie Tang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | KeAi Communications Co. Ltd., 2024-01-01 |
| Series: | AI Open |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2666651023000141 |
Similar Items
- CPT: Colorful Prompt Tuning for pre-trained vision-language models
  by: Yuan Yao, et al.
  Published: (2024-01-01)
- Cue prompt adapting model for relation extraction
  by: Kai Wang, et al.
  Published: (2023-12-01)
- An Adapted Few-Shot Prompting Technique Using ChatGPT to Advance Low-Resource Languages Understanding
  by: Saedeh Tahery, et al.
  Published: (2025-01-01)
- Prompt Tuning Techniques for Chinese Idiom Recommendation
  by: Shun-Ming Wang, et al.
  Published: (2025-01-01)
- A Study on Text Classification in the Age of Large Language Models
  by: Paul Trust, et al.
  Published: (2024-11-01)