The CLIP-GPT Image Captioning Model Integrated with Global Semantics
Image captioning is the task of automatically generating natural-language descriptions for images. Cross-modal semantic consistency is the core issue of shared-subspace embedding when bridging pre-trained models from computer vision and natural language processing to construct image captio...
| Main Authors: | TAO Rui, REN Honge, CAO Haiyan |
|---|---|
| Format: | Article |
| Language: | Chinese (zho) |
| Published: | Harbin University of Science and Technology Publications, 2024-04-01 |
| Series: | Journal of Harbin University of Science and Technology |
| Online Access: | https://hlgxb.hrbust.edu.cn/#/digest?ArticleID=2307 |
Similar Items
- Image Captioning Based on Semantic Scenes
  by: Fengzhi Zhao, et al.
  Published: (2024-10-01)
- Enhanced CLIP-GPT Framework for Cross-Lingual Remote Sensing Image Captioning
  by: Rui Song, et al.
  Published: (2025-01-01)
- Semantic-Guided Selective Representation for Image Captioning
  by: Yinan Li, et al.
  Published: (2023-01-01)
- Automated Ultrasound Diagnosis via CLIP-GPT Synergy: A Multimodal Framework for Image Classification and Report Generation
  by: Li Yan, et al.
  Published: (2025-01-01)
- NuCap: A Numerically Aware Captioning Framework for Improved Numerical Reasoning
  by: Yuna Jeong, et al.
  Published: (2025-05-01)