ZPVQA: Visual Question Answering of Images Based on Zero-Shot Prompt Learning
In recent years, zero-shot learning has become a common strategy for visual question answering (VQA), addressing the challenge of complex interactions between the visual and verbal modalities. Despite the significant progress of large language models (LLMs) in language tas...
| Main Authors: | Naihao Hu, Xiaodan Zhang, Qiyuan Zhang, Wei Huo, Shaojie You |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10925449/ |
Similar Items
- Few-shot cybersecurity event detection method by data augmentation with prompting question answering
  by: TANG Mengmeng, et al.
  Published: (2024-08-01)
- Analyzing Diagnostic Reasoning of Vision–Language Models via Zero-Shot Chain-of-Thought Prompting in Medical Visual Question Answering
  by: Fatema Tuj Johora Faria, et al.
  Published: (2025-07-01)
- Assessing the performance of zero-shot visual question answering in multimodal large language models for 12-lead ECG image interpretation
  by: Tomohisa Seki, et al.
  Published: (2025-02-01)
- Zero-Shot Prompting Strategies for Table Question Answering with a Low-Resource Language
  by: Marcelo Jannuzzi, et al.
  Published: (2024-10-01)
- Adapter With Textual Knowledge Graph for Zero-Shot Sketch-Based Image Retrieval
  by: Jie Zhang, et al.
  Published: (2025-01-01)