CerraData-4MM: A Multimodal Benchmark Dataset on Cerrado for Land Use and Land Cover Classification
| Main Authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11068119/ |
| Summary: | The *Cerrado* faces increasing environmental pressures, necessitating accurate land use and land cover mapping despite challenges such as class imbalance and visually similar categories. To address this, we present CerraData-4MM, a multimodal dataset combining Sentinel-1 synthetic aperture radar (SAR) and Sentinel-2 multispectral imagery at 10 m spatial resolution. The dataset includes two hierarchical classification levels with seven and 14 classes, respectively, focusing on the diverse *Bico do Papagaio* ecoregion. We benchmark two models trained on CerraData-4MM: a Vision Transformer (ViT)-based architecture and a convolutional U-Net. The ViT achieves superior performance in multimodal scenarios, with the highest macro F1-score of 57.60% and a mean Intersection over Union (mIoU) of 49.05% at the first hierarchical level. Both models struggle with minority classes, particularly at the second hierarchical level, where U-Net's performance drops to an F1-score of 18.16%. A weighted loss improves representation of underrepresented classes but reduces overall accuracy, underscoring the trade-off inherent in weighted training (see the sketch below). CerraData-4MM offers a challenging benchmark for advancing deep learning models that must handle class imbalance and multimodal data fusion. |
| ISSN: | 1939-1404, 2151-1535 |
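
The summary notes that weighting the loss improves minority-class representation at the cost of overall accuracy. As an illustration only, here is a minimal PyTorch sketch of class-weighted cross-entropy for a segmentation model; the class counts and the inverse-frequency weighting scheme are assumptions for this sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical per-class pixel counts for a 7-class level-1 labeling
# (illustrative numbers only; not taken from CerraData-4MM).
class_counts = torch.tensor([5e6, 3e6, 1e6, 8e5, 2e5, 5e4, 1e4])

# Inverse-frequency weights, normalized so they average to 1.
weights = class_counts.sum() / (len(class_counts) * class_counts)

# Weighted vs. unweighted cross-entropy over per-pixel logits.
weighted_ce = nn.CrossEntropyLoss(weight=weights)
plain_ce = nn.CrossEntropyLoss()

# Dummy batch: logits of shape (N, C, H, W), integer labels (N, H, W).
logits = torch.randn(2, 7, 64, 64)
labels = torch.randint(0, 7, (2, 64, 64))

# The weighted loss penalizes errors on rare classes more heavily,
# which tends to raise minority-class recall but can lower overall accuracy.
print(weighted_ce(logits, labels).item(), plain_ce(logits, labels).item())
```

This captures the trade-off the summary describes: scaling each class's loss term by its weight shifts the optimum toward rare classes, so per-class F1 on minority classes rises while aggregate pixel accuracy, dominated by majority classes, can fall.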