Automatic fused multimodal deep learning for plant identification



Bibliographic Details
Main Authors: Alfreds Lapkovskis, Natalia Nefedova, Ali Beikmohammadi
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-08-01
Series: Frontiers in Plant Science
Subjects: plant identification; plant phenotyping; multimodal learning; fusion automation; multimodal fusion; architecture search
Online Access: https://www.frontiersin.org/articles/10.3389/fpls.2025.1616020/full
Description:
Introduction: Plant classification is vital for ecological conservation and agricultural productivity, enhancing our understanding of plant growth dynamics and aiding species preservation. The advent of deep learning (DL) techniques has revolutionized this field by enabling autonomous feature extraction, significantly reducing dependence on manual expertise. However, conventional DL models often rely on a single data source, failing to comprehensively capture the biological diversity of plant species. Recent research has turned to multimodal learning to overcome this limitation by integrating multiple data types, which enriches the representation of plant characteristics. This shift introduces the challenge of determining the optimal point for modality fusion.
Methods: In this paper, we introduce a pioneering multimodal DL-based approach to plant classification with automatic modality fusion. Using multimodal fusion architecture search, our method integrates images of multiple plant organs (flowers, leaves, fruits, and stems) into a cohesive model. To address the lack of multimodal datasets, we contribute Multimodal-PlantCLEF, a restructured version of the PlantCLEF2015 dataset tailored for multimodal tasks.
Results: Our method achieves 82.61% accuracy on the 979 classes of Multimodal-PlantCLEF, outperforming late fusion by 10.33%. Through the incorporation of multimodal dropout, our approach demonstrates strong robustness to missing modalities. We validate our model against established benchmarks using standard performance metrics and McNemar's test, further underscoring its superiority.
Discussion: The proposed model surpasses state-of-the-art methods, highlighting the effectiveness of multimodality and of an optimal fusion strategy. Our findings open a promising direction for future plant classification research.
ISSN: 1664-462X