An Improved Unmanned Aerial Vehicle Forest Fire Detection Model Based on YOLOv8

Bibliographic Details
Main Authors: Bensheng Yun, Xiaohan Xu, Jie Zeng, Zhenyu Lin, Jing He, Qiaoling Dai
Format: Article
Language: English
Published: MDPI AG 2025-03-01
Series: Fire
Online Access: https://www.mdpi.com/2571-6255/8/4/138
Summary: Forest fires are highly destructive to the Earth's ecosystem, so accurate and rapid forest fire monitoring is a top research priority. Balancing efficiency and cost-effectiveness, deep-learning-driven UAV remote sensing fire detection algorithms have become a favored research direction and are widely applied. During drone monitoring, however, fires often appear very small and are easily occluded by trees, which greatly limits the effective information an algorithm can extract. At the same time, given the payload and compute constraints of unmanned aerial vehicles, the detection model must also be lightweight. To address the small targets, occlusions, and image blur in UAV-captured wildfire images, this paper proposes an improved UAV forest fire detection model based on YOLOv8. First, we incorporate SPDConv modules into the YOLOv8 architecture, improving its performance on small objects and low-resolution images. Second, we introduce the C2f-PConv module, which improves computational efficiency by reducing redundant computation and memory access. Third, the model boosts classification precision by integrating a Mixed Local Channel Attention (MLCA) mechanism before the three detection heads. Finally, the W-IoU loss function is adopted, which adaptively reweights target boxes within the loss computation to better handle small-target detection. Experimental results show that our model's precision increased by 2.17%, recall by 5.5%, and mAP@0.5 by 1.9%. Meanwhile, the parameter count decreased by 43.8% to only 5.96M, and the model size and GFLOPs decreased by 43.3% and 36.7%, respectively.
Our model not only reduces the number of parameters and the computational complexity, but also achieves superior accuracy in UAV fire image recognition, offering a robust and reliable solution for UAV fire monitoring.
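The SPDConv module mentioned in the abstract builds on a space-to-depth rearrangement: instead of discarding pixels via strided convolution or pooling, every s×s spatial block is folded losslessly into the channel axis before a stride-1 convolution downsamples the feature map. A minimal NumPy sketch of that rearrangement is below; the function name and (H, W, C) tensor layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def space_to_depth(x, s=2):
    """Rearrange an (H, W, C) feature map into (H//s, W//s, C*s*s).

    Each s x s spatial block is folded into the channel axis, so a
    subsequent stride-1 convolution can reduce resolution without
    discarding the fine-grained information that small fire targets
    depend on. Layout is an illustrative assumption.
    """
    H, W, C = x.shape
    x = x.reshape(H // s, s, W // s, s, C)  # split H and W into s-blocks
    x = x.transpose(0, 2, 1, 3, 4)          # bring the two block dims together
    return x.reshape(H // s, W // s, C * s * s)

# A 4x4 single-channel map becomes 2x2 with 4 channels; all 16 values survive.
feat = np.arange(16, dtype=np.float32).reshape(4, 4, 1)
out = space_to_depth(feat)
print(out.shape)   # (2, 2, 4)
print(out[0, 0])   # top-left 2x2 block: [0. 1. 4. 5.]
```

In a detection backbone this rearrangement would be followed by a non-strided convolution (e.g. a stride-1 `Conv2d` in PyTorch), so spatial resolution drops while no pixel information is thrown away.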
ISSN: 2571-6255