Evaluating the method reproducibility of deep learning models in biodiversity research

Artificial intelligence (AI) is revolutionizing biodiversity research by enabling advanced data analysis, species identification, and habitat monitoring, thereby enhancing conservation efforts. Ensuring reproducibility in AI-driven biodiversity research is crucial for fostering transparency, verifying results, and promoting the credibility of ecological findings. This study investigates the reproducibility of deep learning (DL) methods within biodiversity research. We design a three-stage methodology for evaluating the reproducibility of biodiversity-related publications that employ DL techniques. We define ten variables essential for method reproducibility, divided into four categories: resource requirements, methodological information, uncontrolled randomness, and statistical considerations. These categories subsequently serve as the basis for defining different levels of reproducibility. We manually extract the availability of these variables from a curated dataset comprising 100 publications identified using keywords provided by biodiversity experts. Our study shows that a dataset is shared in 50% of the publications; however, many publications lack comprehensive information on deep learning methods, including details regarding randomness.
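The scoring idea in the abstract (variables grouped into categories, with category completeness defining a reproducibility level) can be illustrated with a minimal sketch. The variable names and the level scheme below are assumptions made for illustration only; the article defines its own ten variables and levels.

# Illustrative sketch only: variable names and the level scheme are hypothetical,
# not those defined in the article. Groups the assumed variables under the four
# categories named in the abstract and counts fully documented categories.
CATEGORIES = {
    "resource requirements": ["dataset", "source_code"],
    "methodological information": ["model_architecture", "hyperparameters",
                                   "training_procedure", "software_environment"],
    "uncontrolled randomness": ["random_seed"],
    "statistical considerations": ["evaluation_metrics", "repeated_runs",
                                   "statistical_tests"],
}

def reproducibility_level(reported):
    """Count how many of the four categories a publication documents in full."""
    return sum(
        all(variable in reported for variable in variables)
        for variables in CATEGORIES.values()
    )

# Example: a publication that shares data, code, and method details,
# but reports neither seeds nor statistical information -> level 2.
print(reproducibility_level({"dataset", "source_code", "model_architecture",
                             "hyperparameters", "training_procedure",
                             "software_environment"}))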


Bibliographic Details
Main Authors: Waqas Ahmed, Vamsi Krishna Kommineni, Birgitta König-Ries, Jitendra Gaikwad, Luiz Gadelha, Sheeba Samuel
Format: Article
Language: English
Published: PeerJ Inc. 2025-02-01
Series: PeerJ Computer Science
Subjects: Reproducibility, Deep learning, Metadata, Biodiversity
Online Access: https://peerj.com/articles/cs-2618.pdf
author Waqas Ahmed
Vamsi Krishna Kommineni
Birgitta König-Ries
Jitendra Gaikwad
Luiz Gadelha
Sheeba Samuel
collection DOAJ
description Artificial intelligence (AI) is revolutionizing biodiversity research by enabling advanced data analysis, species identification, and habitat monitoring, thereby enhancing conservation efforts. Ensuring reproducibility in AI-driven biodiversity research is crucial for fostering transparency, verifying results, and promoting the credibility of ecological findings. This study investigates the reproducibility of deep learning (DL) methods within biodiversity research. We design a three-stage methodology for evaluating the reproducibility of biodiversity-related publications that employ DL techniques. We define ten variables essential for method reproducibility, divided into four categories: resource requirements, methodological information, uncontrolled randomness, and statistical considerations. These categories subsequently serve as the basis for defining different levels of reproducibility. We manually extract the availability of these variables from a curated dataset comprising 100 publications identified using keywords provided by biodiversity experts. Our study shows that a dataset is shared in 50% of the publications; however, many publications lack comprehensive information on deep learning methods, including details regarding randomness.
format Article
id doaj-art-37d4c9daf90d433fb5be94ac74565f41
institution Kabale University
issn 2376-5992
language English
publishDate 2025-02-01
publisher PeerJ Inc.
record_format Article
series PeerJ Computer Science
doi 10.7717/peerj-cs.2618
citation PeerJ Computer Science 11:e2618 (2025-02-01)
affiliation Heinz Nixdorf Chair of Distributed Information Systems, Friedrich-Schiller Universität Jena, Jena, Thuringia, Germany (all six authors)
title Evaluating the method reproducibility of deep learning models in biodiversity research
topic Reproducibility
Deep learning
Metadata
Biodiversity
url https://peerj.com/articles/cs-2618.pdf