Real-Time Forest Fire Monitoring System for China UAV Drones Using Adaptive Wavelet Analysis

In recent years, forest fires have posed a significant threat to ecosystems, biodiversity, and human safety globally. The complexity of forest environments, characterized by dense vegetation, rugged terrain, and rapidly changing weather conditions, makes it challenging for ground firefighters to assess flame spread direction and speed accurately. This often leads to delayed responses, reduced efficiency in firefighting operations, and heightened risks to personnel. Traditional monitoring methods, such as satellite imagery or ground-based sensors, suffer from limitations like low temporal resolution, high latency, or limited coverage. To address these issues, unmanned aerial vehicles (UAVs), or drones, have emerged as a transformative tool for real-time forest fire surveillance. In China, the integration of UAV technology into disaster management has gained momentum, driven by advancements in robotics, sensors, and edge computing. China UAV drones offer advantages such as flexibility, cost-effectiveness, and the ability to access remote areas, making them ideal for fire monitoring missions. However, existing UAV-based systems often rely on basic image processing techniques that lack real-time analysis of fire dynamics, such as flame spread trends and intensity levels. This gap underscores the need for intelligent edge systems capable of processing data onboard to provide immediate insights.

This article presents a novel real-time forest fire monitoring system designed for China UAV drones, leveraging an adaptive wavelet analysis algorithm to extract dynamic flame edges and assess fire behavior. The system is implemented on a field-programmable gate array (FPGA) platform, enabling high-speed video acquisition, processing, and wireless transmission without relying on cloud resources. By combining adaptive wavelet transforms with stereo vision for spatial localization, the system outputs critical information such as fire location, spread direction, velocity, and intensity level. These data are transmitted via LoRa communication to command centers and firefighters, facilitating timely decision-making. The focus on China UAV drones highlights the growing role of domestic technology in addressing environmental challenges, with applications spanning forestry, agriculture, and public safety. Below, we detail the algorithmic foundations, system architecture, and experimental validation, emphasizing the integration of adaptive wavelet methods for enhanced edge detection in fire imagery.

The core of our system lies in the adaptive wavelet-based fire monitoring algorithm, which addresses the limitations of conventional edge detection techniques. Forest fire images contain both color and dynamic information; while color cues help in fire detection, dynamic edges are crucial for analyzing flame movement. Traditional operators like Sobel, Prewitt, or Canny are sensitive to noise and may produce discontinuous edges, especially in smoky or cluttered environments. Wavelet transforms, by contrast, offer multi-resolution analysis, allowing for noise suppression and detail preservation across scales. We employ a lifting scheme for wavelet decomposition due to its computational efficiency, making it suitable for real-time implementation on resource-constrained platforms like FPGAs. The 5/3 lifting wavelet is chosen for its balance between complexity and performance. Given an input image, we first convert it to the YCbCr color space to separate luminance (Y) from chrominance (Cb, Cr), as fire regions often exhibit distinct intensity and color characteristics. The luminance component is then processed using the lifting scheme, as described below.
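As a concrete illustration, the color-space step can be sketched in Python. The BT.601 full-range coefficients used below are an assumption, since the article does not specify which YCbCr variant the cameras or FPGA pipeline use; function names are for illustration only:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel (0-255) to YCbCr.

    Assumes full-range BT.601 coefficients; the article does not state
    which YCbCr conversion the system implements.
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def luminance_plane(rgb_image):
    """Extract the Y channel from a nested-list RGB image,
    as the wavelet stage only processes luminance."""
    return [[rgb_to_ycbcr(*px)[0] for px in row] for row in rgb_image]
```

In the real system this conversion runs on the FPGA per pixel; the sketch above only shows the arithmetic.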

Let $x(n)$ represent the luminance signal of a row or column in the image. The 5/3 lifting wavelet transform involves two steps: prediction and update. For even and odd indexed samples, the high-frequency coefficients (detail) $c(n)$ and low-frequency coefficients (approximation) $d(n)$ are computed as:

$$c(2n+1) = x(2n+1) - \left\lfloor \frac{x(2n) + x(2n+2)}{2} \right\rfloor$$

$$d(2n) = x(2n) + \left\lfloor \frac{c(2n-1) + c(2n+1)}{4} \right\rfloor$$
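The predict/update equations above can be sketched directly in Python; integer `//` matches the floor in the formulas. The clamped border extension and the minimum signal length of two samples are assumptions the article leaves open:

```python
def lift_53(x):
    """One level of the article's 5/3 lifting transform on a 1-D signal
    (length >= 2 assumed). Returns (approximation d, detail c).

    Border handling (clamping indices to the nearest valid sample) is an
    assumption; the article does not specify a signal extension scheme.
    """
    n = len(x)
    clamp = lambda i: min(max(i, 0), n - 1)
    # Predict step: detail coefficients c at odd indices,
    # c(2n+1) = x(2n+1) - floor((x(2n) + x(2n+2)) / 2)
    c = [x[i] - (x[clamp(i - 1)] + x[clamp(i + 1)]) // 2
         for i in range(1, n, 2)]

    def cval(k):
        # Detail coefficient at odd index k, clamped to the valid range.
        hi = n - 1 if n % 2 == 0 else n - 2
        return c[min(max(k, 1), hi) // 2]

    # Update step: approximation coefficients d at even indices,
    # d(2n) = x(2n) + floor((c(2n-1) + c(2n+1)) / 4)
    d = [x[i] + (cval(i - 1) + cval(i + 1)) // 4 for i in range(0, n, 2)]
    return d, c
```

On a 2-D image the same routine would be applied to each row and then each column, and repeated per decomposition level.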

These equations are applied iteratively over multiple decomposition levels to obtain wavelet coefficients at different scales. However, a fixed threshold for coefficient selection can lead to either over-smoothing or noise retention. To adapt to varying image conditions—such as contrast changes due to smoke or illumination—we propose an adaptive thresholding mechanism. For each sub-band (horizontal, vertical, and diagonal) at decomposition level $i$, we divide the coefficients into blocks and compute three local features: the decomposition level $i$, the local contrast $\lambda_{ij}$, and the median of absolute high-frequency coefficients $N_{ij}$. Here, $j = 1, 2, 3$ corresponds to the three directional sub-bands. The local contrast is defined as $\lambda_{ij} = \sigma_{ij} / \mu_{ij}$, where $\mu_{ij}$ and $\sigma_{ij}$ are the mean and standard deviation of coefficients in block $j$ at level $i$, respectively. The adaptive threshold $T_{ij}$ is then given by:

$$T_{ij} = \frac{\lambda_{ij} \cdot N_{ij}}{2^{i-1}}$$

This formulation ensures that thresholds are higher for noisy, low-contrast regions (suppressing noise) and lower for high-contrast edges (preserving details). Coefficients below $T_{ij}$ are set to zero, while others are retained for edge reconstruction. After inverse wavelet transform, we obtain a refined edge map highlighting flame boundaries. This adaptive approach is particularly effective for China UAV drones operating in diverse forest environments, where lighting and smoke conditions can vary rapidly.
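A minimal sketch of the per-block shrinkage, applying the article's threshold formula verbatim. The block size, the guard against a zero block mean, and the choice to keep coefficients whose magnitude equals $T_{ij}$ exactly are assumptions not fixed by the article:

```python
import statistics

def adaptive_threshold(block, level):
    """Adaptive threshold T_ij for one block of sub-band coefficients at
    decomposition level `level`, per T_ij = lambda_ij * N_ij / 2**(i-1).

    The zero-mean guard is an assumption; the article uses the raw block
    mean mu_ij in lambda_ij = sigma_ij / mu_ij without qualification.
    """
    mu = statistics.fmean(block)
    sigma = statistics.pstdev(block)
    lam = sigma / mu if mu else 0.0                    # local contrast
    n_med = statistics.median(abs(w) for w in block)   # median |coefficient|
    return lam * n_med / 2 ** (level - 1)

def shrink(block, level):
    """Zero out coefficients below the block's adaptive threshold."""
    t = adaptive_threshold(block, level)
    return [w if abs(w) >= t else 0 for w in block]
```

Note that the $2^{i-1}$ denominator lowers the threshold at deeper levels, where coefficients represent coarser, more reliable structure.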

To assess fire behavior, we analyze dynamic edge information across consecutive frames. The system captures video at a fixed frame rate using stereo cameras mounted on the China UAV drone. For spatial localization, we employ binocular vision to compute the 3D coordinates of flame points. Let $P$ be a point in world coordinates $(x, y, z)$, and let its projections on the left and right image planes be $(u_L, v_L)$ and $(u_R, v_R)$, respectively. The projection matrices $M_L$ and $M_R$ for the left and right cameras are obtained through calibration. The relationship is expressed as:

$$Z^c_L \begin{bmatrix} u_L \\ v_L \\ 1 \end{bmatrix} = M_L \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \quad Z^c_R \begin{bmatrix} u_R \\ v_R \\ 1 \end{bmatrix} = M_R \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}$$

Expanding these equations yields a linear system that can be solved for $(x, y, z)$. For fire monitoring, we select key points such as the flame tip or centroid from the edge map and compute their 3D positions. By tracking these points over time, we estimate the flame spread velocity $V$ and direction. Suppose at frame $n_1$, a flame edge point has coordinates $(x_1, y_1, z_1)$, and at frame $n_2$, it moves to $(x_2, y_2, z_2)$. The horizontal spread velocity is calculated as:

$$V = \frac{\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}}{\Delta t}$$

where $\Delta t$ is the time interval between frames. Based on $V$, we categorize fire intensity into three levels: low ($V < 0.15 \, \text{m/s}$), medium ($0.15 \leq V < 0.5 \, \text{m/s}$), and high ($V \geq 0.5 \, \text{m/s}$). This classification helps in prioritizing responses; for instance, high-intensity fires warrant immediate alerts to firefighters in the path. The spread direction is determined by analyzing vector fields of edge movements across small regions (e.g., $m \times n$ blocks). If multiple blocks indicate movement towards a specific compass direction, the system flags it as the primary spread trend. This information, combined with GPS data from the China UAV drone, provides actionable insights for ground teams.
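The velocity and intensity-level steps can be sketched as follows. The function names are illustrative; the decision to ignore the vertical ($z$) component follows the horizontal-speed formula above, and the thresholds are the three levels just defined:

```python
import math

# Intensity thresholds in m/s, as defined in the text.
LOW_MAX, MED_MAX = 0.15, 0.5

def spread_velocity(p1, p2, dt):
    """Horizontal spread speed between two tracked 3-D flame points
    (x, y, z), ignoring the vertical component per the article's formula."""
    (x1, y1, _), (x2, y2, _) = p1, p2
    return math.hypot(x2 - x1, y2 - y1) / dt

def intensity_level(v):
    """Map a spread speed to the article's three intensity levels."""
    if v < LOW_MAX:
        return "low"
    if v < MED_MAX:
        return "medium"
    return "high"
```

For example, a point moving from (10.61, 5.95) to (10.78, 6.10) over 0.5 s gives $V = \sqrt{0.17^2 + 0.15^2}/0.5 \approx 0.45$ m/s, which this mapping classifies as medium intensity.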

The hardware implementation of our system centers on an FPGA platform, chosen for its parallelism, low latency, and power efficiency—essential for UAV applications. The block diagram includes modules for video capture, image preprocessing, wavelet processing, fire analysis, and wireless communication. A Xilinx FPGA serves as the main controller, interfacing with SDI industrial cameras for video input. These cameras are configured to stream RGB video at 1080p resolution, which is converted to YCbCr format onboard. The luminance channel (Y) is fed into the adaptive wavelet module, implemented using hardware description language (HDL) to exploit pipelining and parallel processing. Coefficients are stored in external DDR memory for multi-level decomposition. The thresholding logic dynamically adjusts based on real-time statistics, ensuring robust edge extraction even under varying conditions.

For wireless data transmission, we integrate a LoRa module operating in the 433 MHz band, offering long-range communication up to 6 km—suited for vast forest areas. The FPGA packages fire data into frames according to a custom protocol, as summarized in Table 1. Two frame types are defined: Type I for uplink (drone to ground) and Type II for downlink (ground to drone). Each frame includes a header, data payload, and error code. The payload in Type I frames contains fields for longitude, latitude, battery voltage, fire intensity level, and spread direction. For example, fire intensity is encoded as a 4-byte value (0x01 to 0x08), while spread direction uses 8 bytes to indicate trends (e.g., 0x01 for north, 0x00 for no spread). This compact format minimizes latency and power consumption, critical for China UAV drones with limited onboard energy.

Table 1. LoRa data frame structure

| Data Type | Header | Data Payload | Error Code |
| --- | --- | --- | --- |
| I (Uplink) | 0x55, 0xAA, 0x00, Drone ID | Longitude, Latitude, Battery, Fire Level, Spread Direction | Fault Indicator |
| II (Downlink) | 0x55, 0xAA, 0x01, Drone ID | Control Command | Emergency Code |
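A hypothetical sketch of Type I frame packing. Only the header bytes, the 4-byte fire level, and the 8-byte direction field come from the article; the remaining widths (1-byte drone ID, 4-byte little-endian floats for longitude, latitude, and battery, 1-byte fault indicator) and the name `pack_uplink` are assumptions, as the article does not publish the full field layout:

```python
import struct

def pack_uplink(drone_id, lon, lat, batt_v, fire_level, direction, fault=0):
    """Pack one Type I (uplink) frame.

    Assumed layout: 4-byte header (0x55, 0xAA, 0x00, drone ID), three
    little-endian floats, a 4-byte fire level (0x01-0x08), an 8-byte
    spread-direction field (e.g. 0x01 = northward, 0x00 = no spread),
    and a 1-byte fault indicator.
    """
    header = bytes([0x55, 0xAA, 0x00, drone_id])
    payload = struct.pack("<fffI", lon, lat, batt_v, fire_level)
    payload += direction.to_bytes(8, "little")
    return header + payload + bytes([fault])
```

Under these assumptions a frame is 29 bytes, small enough to transmit once per second well within LoRa's airtime limits.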

The ground station, typically a laptop or mobile device, receives these frames via a LoRa receiver and decodes them for visualization on a map interface. Alerts are generated if fire intensity is high or if firefighters are in the spread path. The system also supports drone control through Type II frames, enabling commands like return-to-home or emergency landing—features vital for safety in hostile fire environments. The integration of FPGA and LoRa ensures that the China UAV drone operates autonomously at the edge, reducing reliance on continuous connectivity and enabling real-time monitoring even in remote regions.

To validate our system, we conducted experiments using simulated forest fire scenarios in controlled indoor settings. A quadcopter China UAV drone equipped with stereo cameras was flown over a mock forest setup with propane burners simulating flames. Video sequences were captured at 30 fps, and frames were processed offline on the FPGA emulator to verify real-time performance. We extracted three consecutive frames from a sample video, as shown in Figure 7(b), and applied the adaptive wavelet algorithm. The resulting edge maps, depicted in Figure 8(a), demonstrate clear flame contours with minimal noise. Compared to traditional Canny edge detection, our method produced more continuous edges, especially in smoky regions, thanks to the adaptive thresholding.

For spatial localization, we selected four key points: the flame tip, leftmost and rightmost edges, and centroid. Using calibrated camera parameters, we computed their 3D coordinates via the binocular vision model. Table 2 lists the coordinates for one frame, illustrating the precision achieved. The estimated error was within 0.2 meters, sufficient for fire spread assessment.

Table 2. 3D coordinates of key flame points (one frame)

| Point | X (m) | Y (m) | Z (m) |
| --- | --- | --- | --- |
| Tip | 10.25 | 5.67 | 2.31 |
| Left Edge | 9.89 | 5.12 | 2.28 |
| Right Edge | 10.61 | 5.95 | 2.33 |
| Centroid | 10.15 | 5.58 | 2.30 |

By tracking these points across frames, we calculated spread velocity and direction. For instance, the right edge moved from (10.61, 5.95) to (10.78, 6.10) over 0.5 seconds, yielding $V = \sqrt{0.17^2 + 0.15^2}/0.5 \approx 0.45 \, \text{m/s}$ (medium intensity) and a direction of approximately 30° northeast. The system successfully identified this trend and flagged hypothetical firefighters in that sector, sending alerts via LoRa. The entire processing pipeline, from video capture to data transmission, operated at less than 50 ms per frame, meeting real-time requirements for China UAV drone applications.

In terms of resource utilization on the FPGA, the adaptive wavelet module consumed 15% of logic slices and 20% of block RAM, leaving ample room for other functions. The LoRa communication added minimal overhead, with data frames transmitted every second. Field tests in a forest reserve further confirmed the system’s robustness against environmental factors like wind and uneven lighting. The China UAV drone maintained stable flight while streaming fire data to a ground station 3 km away, demonstrating the practicality of our approach for large-scale monitoring.

In conclusion, this article presents a comprehensive real-time forest fire monitoring system tailored for China UAV drones, integrating adaptive wavelet analysis for dynamic flame edge extraction and stereo vision for spatial localization. The algorithm’s ability to adjust thresholds based on local image features enhances edge detection accuracy in challenging conditions, while the FPGA implementation ensures low-latency processing suitable for edge computing. By providing real-time data on fire location, spread velocity, direction, and intensity level, the system empowers firefighters and command centers to make informed decisions, potentially reducing response times and mitigating risks. The use of LoRa for communication extends operational range in remote forests, aligning with the growing adoption of China UAV drones in environmental monitoring. Future work may explore deep learning enhancements for fire detection and predictive modeling of spread patterns. Nonetheless, this system represents a significant step towards intelligent, autonomous fire surveillance, contributing to forest conservation efforts in China and beyond.
