Forest fires pose a severe and recurring threat to forest ecosystems, an invaluable natural resource. Traditional forest fire monitoring methods, such as ground patrols, watchtower observation, and satellite remote sensing, have played a role in fire prevention and control, but they exhibit significant limitations in timeliness, accuracy, and flexibility, particularly during the critical early stages of a fire’s development when intervention is most effective. Exploring an efficient, accurate, and flexible method for early forest fire monitoring and warning is therefore of paramount importance. The evolution of Unmanned Aerial Vehicle (UAV) technology has unveiled substantial potential for the early detection of forest fire smoke, and this article details a comprehensive monitoring and warning methodology centered on UAV aerial survey.

Effective forest fire monitoring and early warning systems must combine an exceptionally high detection rate with an extremely low false alarm rate. A critical challenge for existing systems, including some modern computer vision approaches, is their vulnerability to environmental variables. In particular, variations in lighting conditions and the persistent obstruction caused by mountain clouds and mist can severely degrade image quality. These factors blur critical feature information within captured images, such as the subtle color and texture signatures of early smoke, and thereby substantially reduce the accuracy of warning results. The proposed method addresses these challenges directly by leveraging UAV technology and advanced image processing to enhance image clarity prior to analysis.
1. Methodology for Early Forest Fire Smoke Monitoring and Warning
1.1 Acquisition of Smoke Images via UAV Drone Aerial Survey
UAV aerial survey technology represents an advanced form of aerial photogrammetry, distinguished by its high precision, efficiency, low operational cost, and remarkable flexibility. By equipping UAVs with high-precision sensors such as high-resolution cameras and infrared thermal imagers, comprehensive monitoring of forested areas is achievable even under complex terrain and adverse weather conditions. For early smoke detection, the design of the UAV’s flight path is crucial, as it directly impacts monitoring efficiency and accuracy. A rationally planned flight path ensures the UAV can swiftly and accurately capture the nascent signs of fire smoke.
The technical process involves mounting survey imaging systems and the relevant software engines onto the UAV platform. The UAV then navigates along pre-set flight paths, continuously capturing extensive image data during its mission. These geotagged images provide precise positioning information, enabling accurate mapping of a region’s features onto a coordinate system. This capability allows the UAV to rapidly reach fire sites or potential risk areas, acquire real-time imagery and data, and transmit this information wirelessly to a ground control center, buying invaluable time for the firefighting response.
The configuration and flight parameters of the UAV system are critical for successful data acquisition. A representative setup for such a mission is detailed in the table below:
| Parameter Name | Value |
|---|---|
| UAV Platform | Hexacopter (e.g., MX-600) |
| Flight Altitude | 150 m |
| Flight Speed | 12 m/s |
| Camera Resolution | 5472 × 3648 pixels |
| Number of Flight Lines | 20 |
| Line Spacing | 80 m |
| Forward Overlap | 70% |
| Side Overlap | 40% |
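As a rough illustration of how these parameters interact, the spacing between exposures needed to reach the target forward overlap, and a sanity check on the side overlap, can be computed directly. The 200 m × 140 m ground footprint used below is a hypothetical value: the real footprint depends on the camera’s focal length and sensor size, which the table does not specify.

```python
def exposure_interval(footprint_len_m, forward_overlap, speed_mps):
    """Seconds between photos so consecutive frames share `forward_overlap`."""
    advance = footprint_len_m * (1.0 - forward_overlap)  # ground distance per shot
    return advance / speed_mps

def side_coverage_ok(footprint_width_m, line_spacing_m, side_overlap):
    """True if the chosen line spacing achieves the required side overlap."""
    return line_spacing_m <= footprint_width_m * (1.0 - side_overlap)

# Table values plus an assumed (hypothetical) 200 m x 140 m ground footprint.
interval = exposure_interval(200.0, 0.70, 12.0)  # 60 m advance at 12 m/s -> 5.0 s
ok = side_coverage_ok(140.0, 80.0, 0.40)         # 80 m <= 140 * 0.6 = 84 m -> True
```

At 12 m/s with 70% forward overlap, the camera would need to fire roughly every 5 s under this assumed footprint; the 80 m line spacing satisfies the 40% side overlap with a small margin.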
1.2 Image Pre-processing: Illumination Adjustment and Cloud/Fog Removal
Images captured by UAV drones are often degraded by variable lighting and atmospheric haze, leading to reduced contrast and loss of detail. To counteract this, a two-stage pre-processing pipeline is employed: adaptive illumination correction followed by haze removal.
1.2.1 Adaptive Illumination Correction
Traditional gamma correction applies a fixed value, which is suboptimal for images captured under varying natural light. Adaptive Gamma Correction (AGC) dynamically adjusts the gamma value based on the image’s intrinsic characteristics and ambient conditions. This technique more effectively resolves issues related to brightness and contrast caused by environmental changes. The correction coefficient \(\gamma\) and the output image \(A(x, y)\) are calculated as follows:
First, the average pixel intensity \(\bar{a}\) of the input image \(a(x, y)\) is computed. The adaptive gamma parameter \(\gamma\) is then determined based on this average:
$$
\gamma = \begin{cases}
\frac{1}{1 + \frac{1 - (b - \bar{a})}{255} \cos\left(\frac{\pi a(x,y)}{255}\right)}, & \frac{\bar{a}}{255} \leq 0.5 \\
\frac{1}{1 - b + \frac{1 - (b - \bar{a})}{255} \cos\left(\frac{\pi a(x,y)}{255}\right)}, & \frac{\bar{a}}{255} > 0.5
\end{cases}
$$
where \(b\) is a parameter that adjusts the range of the gamma function, typically set between 0 and 1. The corrected image \(A(x, y)\) is obtained by applying this adaptive gamma to the original image \(a(x, y)\):
$$
A(x, y) = 255 \left( \frac{a(x, y)}{255} \right)^\gamma
$$
This process significantly enhances overall brightness while preserving contrast, revealing details in darker regions without over-saturating bright areas.
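The piecewise correction above translates directly into NumPy; note that because \(a(x,y)\) appears inside the cosine, \(\gamma\) is itself a per-pixel map. The sketch below follows the formula as stated, with \(b = 0.5\) as an assumed default (the text only constrains it to lie between 0 and 1), and clips the result to the valid intensity range.

```python
import numpy as np

def adaptive_gamma(a, b=0.5):
    """Adaptive gamma correction per the piecewise formula in the text.
    `a` is a grayscale image as an array of values in [0, 255]; b is in (0, 1)."""
    a = a.astype(np.float64)
    mean = a.mean()
    # Per-pixel modulation term: ((1 - (b - mean)) / 255) * cos(pi * a / 255).
    mod = (1.0 - (b - mean)) / 255.0 * np.cos(np.pi * a / 255.0)
    if mean / 255.0 <= 0.5:          # dark image: gamma < 1 in dark regions
        gamma = 1.0 / (1.0 + mod)
    else:                            # bright image: compress highlights
        gamma = 1.0 / (1.0 - b + mod)
    # Clip, since extreme inputs can push the power curve out of range.
    return np.clip(255.0 * (a / 255.0) ** gamma, 0.0, 255.0)
```

For an underexposed image (mean intensity well below 128), the dark branch yields \(\gamma < 1\) for dark pixels, lifting shadows while leaving bright pixels nearly unchanged.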
1.2.2 Haze and Cloud Removal
Following illumination correction, the Dark Channel Prior (DCP) theory is applied to remove atmospheric haze and thin clouds. The DCP is based on the observation that in most non-sky patches within a haze-free outdoor image, at least one color channel has very low intensity at some pixels. For a haze-affected image \(A(x, y)\), the recovered haze-free image \(J(x, y)\) can be estimated using the atmospheric scattering model:
$$
J(x, y) = \frac{A(x, y) - B}{D} + B
$$
In this simplified representation, \(B\) represents the global atmospheric light (the haze color), and \(D\) is the medium transmission map describing the portion of light that reaches the camera without being scattered. In practice, \(B\) is estimated from the brightest pixels in the dark channel of \(A(x, y)\), and \(D\) is derived using the dark channel prior formula \(D(x) = 1 - \omega \min_{c \in \{R,G,B\}} \left( \min_{y \in \Omega(x)} \left( \frac{A^c(y)}{B^c} \right) \right)\), where \(\Omega(x)\) is a local patch around \(x\) and \(\omega\) is a constant (commonly about 0.95) that retains a trace of haze so the restored scene still looks natural. This step effectively removes veiling haze, restoring color saturation and contrast, which is vital for clear smoke visualization.
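A compact, deliberately unoptimized sketch of this dehazing pipeline in NumPy is shown below. The patch size, the top-0.1% rule for estimating \(B\), and the transmission floor `t0` are conventional choices from the dark-channel-prior literature, not values prescribed by this text.

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel min over the color channels, then a local min over a patch."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):          # naive loop; fine for small images
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Dark Channel Prior dehazing: J = (A - B) / max(D, t0) + B."""
    img = img.astype(np.float64)
    dc = dark_channel(img, patch)
    # Atmospheric light B: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dc.size * 0.001))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    B = np.maximum(img[idx].mean(axis=0), 1e-6)
    # Transmission D from the dark channel of the B-normalized image.
    D = 1.0 - omega * dark_channel(img / B, patch)
    J = (img - B) / np.maximum(D, t0)[..., None] + B
    return np.clip(J, 0.0, 255.0)
```

The `t0` floor prevents division blow-ups in densely hazed regions, where the estimated transmission approaches zero.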
1.3 Feature Extraction from Smoke Images
To enable automatic detection, distinctive features must be extracted from the pre-processed images. Smoke exhibits characteristic color and texture patterns which differentiate it from natural backgrounds like clouds, mist, or tree canopies.
1.3.1 Color Feature Extraction
Smoke typically appears as grayish-white or light gray, distinct from common forest colors. Color moments provide a robust statistical description of color distribution. For each color channel \(i\) (e.g., in RGB or other suitable color spaces), the first three moments—mean, standard deviation, and skewness—are calculated. For an image with \(N\) pixels, where \(d_{ij}\) is the value of the \(i\)-th color component at the \(j\)-th pixel, the moments are:
$$
\begin{aligned}
g1_i &= \frac{1}{N} \sum_{j=1}^{N} d_{ij} \quad &\text{(Mean)} \\
g2_i &= \left( \frac{1}{N} \sum_{j=1}^{N} (d_{ij} - g1_i)^2 \right)^{\frac{1}{2}} \quad &\text{(Standard Deviation)} \\
g3_i &= \left( \frac{1}{N} \sum_{j=1}^{N} (d_{ij} - g1_i)^3 \right)^{\frac{1}{3}} \quad &\text{(Skewness)}
\end{aligned}
$$
These moments form a compact color feature vector insensitive to small changes in viewpoint and resolution.
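The three moments translate directly into NumPy. One subtlety: the third central moment can be negative, so the skewness term needs a signed cube root rather than a plain fractional power. A minimal sketch:

```python
import numpy as np

def color_moments(img):
    """First three color moments (mean, std, skewness) per channel.
    `img` is an H x W x C array; returns a flat feature vector of length 3*C."""
    pix = img.reshape(-1, img.shape[2]).astype(np.float64)
    g1 = pix.mean(axis=0)                            # mean per channel
    g2 = np.sqrt(((pix - g1) ** 2).mean(axis=0))     # standard deviation
    m3 = ((pix - g1) ** 3).mean(axis=0)              # third central moment
    g3 = np.sign(m3) * np.abs(m3) ** (1.0 / 3.0)     # signed cube root
    return np.concatenate([g1, g2, g3])
```

For an RGB image this yields a 9-dimensional vector, which is what makes the descriptor so compact compared with a full color histogram.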
1.3.2 Texture Feature Extraction
Smoke possesses a fuzzy, semi-transparent, and dynamically changing texture. The Local Binary Pattern (LBP) is an effective texture operator that is invariant to monotonic gray-scale changes. For a central pixel at \((x_0, y_0)\) with intensity \(I_o\) and its \(S\) surrounding neighbor pixels with intensities \(I_s\) (\(s = 1, \dots, S\)), the LBP code is computed by thresholding the neighbors:
$$
f(I_s – I_o) = \begin{cases}
1, & I_s - I_o \geq 0 \\
0, & I_s - I_o < 0
\end{cases}
$$
The decimal LBP value for the central pixel is then calculated by summing the thresholded values weighted by powers of two:
$$
E(x_0, y_0) = \sum_{s=1}^{S} f(I_s - I_o) \cdot 2^{s-1}
$$
A histogram of these LBP codes across an image region serves as a powerful texture descriptor for smoke, capturing its unique local patterns.
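A minimal NumPy implementation of the common 8-neighbour case (\(S = 8\)) is sketched below. The clockwise neighbour ordering is one arbitrary but fixed convention, and border pixels are simply excluded rather than padded.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP over a 2-D grayscale array (borders excluded).
    Neighbour s (1-based) contributes 2^(s-1) when I_s >= I_o."""
    c = gray[1:-1, 1:-1]
    # Fixed clockwise neighbour order starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for s, (di, dj) in enumerate(offsets):
        neighbour = gray[1 + di:gray.shape[0] - 1 + di,
                         1 + dj:gray.shape[1] - 1 + dj]
        code |= (neighbour >= c).astype(np.uint8) << s
    return code

def lbp_histogram(gray):
    """Normalized 256-bin histogram of LBP codes: the texture descriptor."""
    hist = np.bincount(lbp_image(gray).ravel(), minlength=256)
    return hist.astype(np.float64) / hist.sum()
```

On a perfectly flat region every neighbour ties with the centre, so all codes are 255; textured smoke regions instead spread mass across many bins, which is exactly what the histogram descriptor captures.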
1.4 Smoke Recognition and Early Warning Trigger
With features extracted, a machine learning classifier is trained to distinguish smoke from non-smoke images. A Bayesian network classifier is employed for this task due to its probabilistic foundation and ability to handle uncertainty.
Let \(X = \{ g1_i, g2_i, g3_i, \text{LBP\_hist} \}\) represent the combined feature vector extracted from an image. Let \(v_k\) denote the class label, where \(k \in \{\text{smoke}, \text{non-smoke}\}\). The goal of the Bayesian classifier is to assign the image to the class with the highest posterior probability \(P(v_k | X)\). Using Bayes’ theorem:
$$
P(v_k | X) = \frac{P(X | v_k) \cdot P(v_k)}{P(X)}
$$
Where:
- \(P(v_k)\) is the prior probability of class \(v_k\), estimated from the training dataset frequencies. If the training set contains \(n\) smoke samples and \(m\) non-smoke samples, then \(P(\text{smoke}) = \frac{n}{n+m}\).
- \(P(X | v_k)\) is the likelihood, the probability of observing feature vector \(X\) given class \(v_k\). This is learned from the training data, often modeled using Gaussian distributions or kernel density estimation for continuous features.
- \(P(X)\) is the evidence, a normalizing constant which is the same for all classes.
For classification, the evidence can be ignored, and the decision rule becomes:
$$
\text{Predicted Class} = \arg \max_{v_k} \left[ P(X | v_k) \cdot P(v_k) \right]
$$
The classifier is trained on a large, labeled dataset containing both smoke and non-smoke images captured by the UAV under various conditions. Once trained, for each new image or video frame streamed from the UAV, the system extracts its feature vector \(X\), computes the posterior probabilities for both classes using the learned likelihoods and priors, and assigns the image to the class with the higher score. A “smoke” classification triggers the early warning mechanism, which immediately alerts the ground control center with details including the timestamp, the geolocation from the UAV’s GPS, and the confidence level of the detection.
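Under the common naive-Bayes simplification, with each feature modelled as an independent class-conditional Gaussian, the training step and decision rule above reduce to a few lines. This is one plausible realisation; the text leaves the exact likelihood model open (Gaussian or kernel density estimation).

```python
import numpy as np

class GaussianBayes:
    """Minimal Gaussian naive-Bayes classifier for the smoke feature vectors.
    P(X|v_k) is modelled as a product of per-feature Gaussians."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.prior, self.mu, self.var = {}, {}, {}
        for k in self.classes:
            Xk = X[y == k]
            self.prior[k] = len(Xk) / len(X)     # P(v_k) = n_k / (n + m)
            self.mu[k] = Xk.mean(axis=0)
            self.var[k] = Xk.var(axis=0) + 1e-6  # variance floor for stability
        return self

    def predict(self, X):
        scores = []
        for k in self.classes:
            # log P(X|v_k) + log P(v_k); the evidence P(X) is dropped.
            ll = -0.5 * (np.log(2 * np.pi * self.var[k])
                         + (X - self.mu[k]) ** 2 / self.var[k]).sum(axis=1)
            scores.append(ll + np.log(self.prior[k]))
        return self.classes[np.argmax(scores, axis=0)]
```

Working in log space avoids numerical underflow when many features are multiplied together, and dropping the evidence term \(P(X)\) leaves the arg-max decision unchanged.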
2. Analysis and Performance Evaluation
The proposed method’s efficacy was evaluated through practical testing and comparative analysis. The UAV system, configured as previously described, was deployed over forested test areas. The pre-processing stages successfully enhanced image clarity, making smoke plumes more distinct against the background. Feature extraction yielded quantifiable descriptors; for example, the color moments of a typical smoke region show the characteristic grayish-white distribution of early smoke, while its LBP histogram exhibits a spread characteristic of fuzzy textures.
The most critical evaluation metrics for an early warning system are the False Alarm Rate (FAR) and the Missed Alarm Rate (MAR). A low FAR is essential to avoid wasting resources and causing alarm fatigue, while a low MAR is critical for ensuring actual fires are not overlooked. The performance of the proposed UAV-based Bayesian method was compared against two other contemporary approaches: a method based on an improved YOLOv5s deep learning model, and a method combining traditional image processing with knowledge graph reasoning.
The comparative results are summarized below:
| Performance Metric | Proposed UAV-based Bayesian Method | Improved YOLOv5s Method | Knowledge Graph + Image Processing Method |
|---|---|---|---|
| False Alarm Rate (FAR) | 1.2% | 3.8% | 5.1% |
| Missed Alarm Rate (MAR) | 0.8% | 2.5% | 4.3% |
The results demonstrate the superiority of the proposed method: its FAR and MAR are significantly lower than those of the two comparison methods. This indicates that the combination of UAV-acquired imagery, robust pre-processing for illumination and haze, carefully engineered color and texture features, and a probabilistic Bayesian classifier yields a system with higher discriminative accuracy. It is more adept at correctly identifying true smoke signals while rejecting similar-looking non-smoke phenomena, a crucial capability for reliable early warning. The mobility and aerial perspective of the UAV allow it to capture optimal imagery, while the tailored image processing and classification pipeline ensures precise analysis of that imagery.
3. Conclusion
The early forest fire smoke monitoring and warning method based on UAV aerial survey represents a significant advancement in proactive forest protection. By harnessing the high mobility, flexibility, and broad field of view of UAVs, the system enables rapid and comprehensive surveillance of forested areas and can capture the initial, often subtle, signatures of smoke. The integration of adaptive image pre-processing techniques specifically designed to counteract environmental degradations such as variable light and atmospheric haze ensures that the input data is of high quality. Subsequent extraction of discriminative color and texture features, followed by classification with a robust Bayesian network, results in a system with high accuracy, few false alarms, and few missed detections.
In practical application, this methodology offers substantial benefits. It enhances the efficiency and reliability of forest fire monitoring, providing decision-makers with timely and trustworthy early warnings. This capability is instrumental in facilitating rapid firefighting response and evacuation procedures, thereby helping to minimize ecological damage, economic loss, and risk to human life. The system underscores the practical value of integrating UAV technology with intelligent image analysis for safeguarding forest resources and supporting sustainable forest management practices.
