ChatEarthNet: a global-scale image–text dataset empowering vision–language geo-foundation models

Bibliographic Details
Main Authors: Z. Yuan, Z. Xiong, L. Mou, X. X. Zhu
Format: Article
Language:English
Published: Copernicus Publications 2025-03-01
Series:Earth System Science Data
Online Access:https://essd.copernicus.org/articles/17/1245/2025/essd-17-1245-2025.pdf
author Z. Yuan
Z. Xiong
L. Mou
X. X. Zhu
X. X. Zhu
author_sort Z. Yuan
collection DOAJ
description <p>The rapid development of remote sensing technology has led to an exponential growth in satellite images, yet their inherent complexity often makes them difficult for non-expert users to understand. Natural language, as a carrier of human knowledge, can bridge the gap between common users and complicated satellite imagery. Additionally, when paired with visual data, natural language can be utilized to train large vision–language foundation models, significantly improving performance in various tasks. Despite these advancements, the remote sensing community still faces a challenge due to the lack of large-scale, high-quality vision–language datasets for satellite images. To address this challenge, we introduce a new image–text dataset, providing high-quality natural language descriptions for global-scale satellite data. Specifically, we utilize Sentinel-2 data for its global coverage as the foundational image source, employing semantic segmentation labels from the European Space Agency's WorldCover project to enrich the descriptions of land cover types. By conducting in-depth semantic analysis, we formulate detailed prompts to elicit rich descriptions from ChatGPT. We then apply a manual verification process, inspecting and correcting the generated captions to further enhance the dataset's quality. Finally, we offer the community ChatEarthNet, a large-scale image–text dataset characterized by global coverage, high quality, wide-ranging diversity, and detailed descriptions. ChatEarthNet consists of 163 488 image–text pairs with captions generated by ChatGPT-3.5 and an additional 10 000 image–text pairs with captions generated by ChatGPT-4V(ision). This dataset has significant potential for both training and evaluating vision–language geo-foundation models for remote sensing.
The code is publicly available at <a href="https://doi.org/10.5281/zenodo.11004358">https://doi.org/10.5281/zenodo.11004358</a> <span class="cit" id="xref_paren.1">(<a href="#bib1.bibx37">Yuan et al.</a>, <a href="#bib1.bibx37">2024</a><a href="#bib1.bibx37">b</a>)</span>, and the ChatEarthNet dataset is available at <a href="https://doi.org/10.5281/zenodo.11003436">https://doi.org/10.5281/zenodo.11003436</a> <span class="cit" id="xref_paren.2">(<a href="#bib1.bibx38">Yuan et al.</a>, <a href="#bib1.bibx38">2024</a><a href="#bib1.bibx38">c</a>)</span>.</p>
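The caption pipeline the abstract describes (deriving land-cover statistics from WorldCover segmentation labels and turning them into a ChatGPT prompt) can be sketched roughly as follows. The class codes are taken from the ESA WorldCover legend, but the function names, thresholds, and prompt wording here are illustrative assumptions, not the authors' actual code:

```python
from collections import Counter

# Subset of the ESA WorldCover class legend (the full map has 11 classes).
WORLDCOVER_CLASSES = {
    10: "tree cover", 20: "shrubland", 30: "grassland",
    40: "cropland", 50: "built-up", 80: "permanent water bodies",
}

def class_proportions(mask):
    """Fraction of pixels per land-cover class in a 2D label mask."""
    flat = [v for row in mask for v in row]
    counts = Counter(flat)
    total = len(flat)
    return {WORLDCOVER_CLASSES.get(code, f"class {code}"): n / total
            for code, n in counts.items()}

def build_prompt(proportions, min_share=0.05):
    """Turn class proportions into a captioning prompt (hypothetical wording)."""
    parts = [f"{name} covers about {share:.0%} of the image"
             for name, share in sorted(proportions.items(),
                                       key=lambda kv: -kv[1])
             if share >= min_share]  # drop classes below the reporting threshold
    return ("Describe a Sentinel-2 satellite image where "
            + "; ".join(parts) + ".")

# Toy 4x4 WorldCover label patch: mostly cropland, some trees,
# grassland, and built-up pixels.
mask = [[40, 40, 10, 10],
        [40, 40, 10, 50],
        [40, 40, 30, 50],
        [40, 40, 30, 30]]
props = class_proportions(mask)
prompt = build_prompt(props)
```

A prompt built this way would then be sent to the ChatGPT API, and the returned caption paired with the Sentinel-2 image; per the abstract, a manual inspection pass follows before a pair enters the dataset.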
format Article
id doaj-art-e070abec555a45cb935bd7944d42c9c8
institution DOAJ
issn 1866-3508
1866-3516
language English
publishDate 2025-03-01
publisher Copernicus Publications
record_format Article
series Earth System Science Data
spelling doaj-art-e070abec555a45cb935bd7944d42c9c8 (indexed 2025-08-20T02:41:23Z)
Copernicus Publications, Earth System Science Data, ISSN 1866-3508 / eISSN 1866-3516
2025-03-01, vol. 17, pp. 1245–1263, doi:10.5194/essd-17-1245-2025
ChatEarthNet: a global-scale image–text dataset empowering vision–language geo-foundation models
Z. Yuan, Z. Xiong, L. Mou, X. X. Zhu: Data Science in Earth Observation, Technical University of Munich, 80333 Munich, Germany
X. X. Zhu: Munich Center for Machine Learning, 80333 Munich, Germany
Abstract and data-availability statement as in the description field above.
https://essd.copernicus.org/articles/17/1245/2025/essd-17-1245-2025.pdf
title ChatEarthNet: a global-scale image–text dataset empowering vision–language geo-foundation models
url https://essd.copernicus.org/articles/17/1245/2025/essd-17-1245-2025.pdf