In recent years, the escalating frequency and severity of wildfires worldwide have underscored the urgent need for innovative monitoring solutions. Traditional methods, such as manual patrols and tower-based surveillance, suffer from inefficiencies, slow response times, and limited coverage, especially in vast or rugged terrain. As a researcher focused on embedded systems and aerial robotics, I have explored the integration of embedded technology with UAV drones to address these challenges. This article presents a comprehensive study on designing a forest fire detection system using embedded UAV drones, leveraging real-time image processing with deep learning models such as YOLOv8s. The system aims to enhance early detection capabilities, provide geolocation data, and support rapid emergency responses. Throughout this work, UAV drones are emphasized as pivotal tools in modern wildfire management.
The core of this research lies in the synergy between embedded systems and UAV drones. Embedded technology involves the co-design of hardware and software to perform dedicated functions with constraints like size, power, and reliability. In the context of UAV drones, embedded systems serve as the brain, controlling flight stability, sensor data acquisition, and communication. I have developed a system where a microcontroller-based flight controller manages a quadcopter UAV drone, equipped with visual sensors and wireless modules, to autonomously patrol forested areas. The embedded software, built on a real-time operating system, ensures efficient task scheduling and data handling. This approach allows UAV drones to operate in remote locations with minimal human intervention, making them ideal for large-scale fire monitoring.

To understand the system’s architecture, I begin with an overview of the embedded UAV drone’s design. A typical quadcopter UAV drone consists of an airframe, a propulsion system, a flight control system, and a remote-control link. The flight controller, centered on a microcontroller such as the AT32F435VGT7, integrates various sensors and communication interfaces. Table 1 below summarizes the key hardware components used in my embedded UAV drone system, highlighting their functions and specifications. This design ensures that UAV drones can maintain stable flight while collecting high-quality video data for fire detection.
| Component | Function | Specifications/Model |
|---|---|---|
| Microcontroller | Core processor for control and data processing | AT32F435VGT7 (ARM Cortex-M4) |
| Inertial Measurement Unit (IMU) | Measures angular velocity and acceleration | QMI8658A (6-axis sensor) |
| Magnetometer | Provides heading reference | IST8310 (3-axis magnetometer) |
| GPS/GNSS Module | Enables geolocation tracking | Standard GPS receiver |
| Communication Module | Facilitates data transmission to ground station | LoRa and USB interfaces |
| Camera Sensor | Captures video for fire detection | HD visual sensor with real-time streaming |
The embedded software architecture is critical for the UAV drone’s performance. I implemented it in the Keil MDK5 development environment with the RT-Thread real-time operating system. The software handles sensor data fusion, flight control algorithms, and communication protocols. For instance, the microcontroller communicates with the IMU over SPI to read attitude data, which is used for stability control. The control law for the quadcopter UAV drone can be expressed using a simplified dynamic model: the thrust generated by each motor is proportional to its PWM signal, and attitude control relies on PID controllers. The error in roll angle $\phi$ is minimized by adjusting motor speeds, as shown in the equation:
$$ \tau_\phi = k_p \cdot e_\phi + k_i \cdot \int e_\phi \, dt + k_d \cdot \frac{de_\phi}{dt} $$
where $\tau_\phi$ is the control torque for roll, $e_\phi$ is the error between desired and actual roll angles, and $k_p$, $k_i$, $k_d$ are PID gains. This ensures that UAV drones can hover steadily even in mild wind conditions, which is essential for clear image capture. Additionally, the software manages data transmission, sending video streams and sensor readings to a ground control center via LoRa or other wireless links. This embedded setup allows UAV drones to operate autonomously for extended periods, making them cost-effective for continuous surveillance.
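The roll-axis control law above can be sketched as a small discrete PID loop. This is a minimal illustration, not the flight firmware: the gains, timestep, and toy first-order plant response are assumptions chosen only to show the structure of the $k_p$, $k_i$, $k_d$ terms.

```python
# Minimal discrete PID sketch of the roll control law tau = kp*e + ki*int(e) + kd*de/dt.
# Gains, timestep, and plant model are illustrative, not tuned flight values.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # running integral of the error
        self.prev_error = 0.0    # previous error, for the derivative term

    def update(self, error):
        """Return the control torque for the current roll error e_phi."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a 0.2 rad roll error toward zero in a crude first-order plant.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
roll, target = 0.2, 0.0          # radians
for _ in range(2000):
    torque = pid.update(target - roll)
    roll += torque * 0.01        # toy plant: roll rate proportional to torque
```

In the real firmware this loop would run at a fixed rate inside an RT-Thread task, with the error computed from fused IMU attitude estimates rather than a simulated plant.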
Moving to the fire detection methodology, I employ a computer vision approach based on the YOLOv8s model. UAV drones capture real-time video footage of forest areas, which is then processed onboard or transmitted for analysis. YOLOv8s is a state-of-the-art single-stage object detection algorithm that balances speed and accuracy, making it suitable for real-time applications on embedded UAV drones. The model architecture comprises a backbone for feature extraction, a neck for multi-scale feature fusion, and a head for output predictions. To enhance detection of small fire targets like early flames or smoke plumes, I modified the standard YOLOv8s by adding an extra detection layer for targets as small as 4×4 pixels. This improves sensitivity without significantly increasing computational load, which is crucial for resource-constrained UAV drones. The modified model’s performance can be summarized using metrics like precision, recall, and mean Average Precision (mAP). For a detection output, the confidence score $C$ for a predicted bounding box is given by:
$$ C = P(object) \times IoU_{pred}^{truth} $$
where $P(object)$ is the probability that the box contains an object, and $IoU_{pred}^{truth}$ is the Intersection over Union between predicted and ground truth boxes. Higher confidence scores indicate more reliable detections, which is vital for minimizing false alarms in wildfire scenarios.
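The IoU term in the confidence formula can be computed directly from box corners. The sketch below uses hypothetical box coordinates and an assumed objectness probability purely to make the $C = P(object) \times IoU$ computation concrete.

```python
# Sketch of the confidence score C = P(object) * IoU for one detection.
# Boxes are (x1, y1, x2, y2) in pixels; all values are illustrative.

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned bounding boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

pred = (50, 50, 150, 150)    # predicted fire bounding box
truth = (60, 60, 160, 160)   # ground-truth annotation
p_object = 0.9               # objectness probability from the model (assumed)

confidence = p_object * iou(pred, truth)  # ~0.61 for this overlap
```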
The training process for the YOLOv8s model involved a custom dataset of forest images with fire and smoke annotations. I used data augmentation techniques to address class imbalances and improve generalization. The dataset was split into 90% for training and 10% for validation. The model’s effectiveness is evaluated against other YOLO variants, as shown in Table 2 below. This comparison demonstrates that YOLOv8s achieves superior accuracy and mAP, making it ideal for deployment on UAV drones for wildfire detection.
| Model | Precision (%) | Recall (%) | mAP@50 (%) | mAP@50-95 (%) | Parameters | GFLOPs |
|---|---|---|---|---|---|---|
| YOLOv3s | 83.1 | 69.8 | 78.9 | 47.2 | 8,067,900 | 24.8 |
| YOLOv5s | 81.9 | 72.7 | 79.3 | 48.6 | 7,318,368 | 27.6 |
| YOLOv7s | 84.8 | 70.9 | 79.1 | 47.7 | 11,136,374 | 28.6 |
| YOLOv8s (modified) | 85.9 | 72.1 | 81.3 | 48.9 | 7,808,446 | 32.4 |
From Table 2, the modified YOLOv8s model achieves the highest precision (85.9%) and mAP@50 (81.3%), indicating its robustness in identifying fire-related objects from UAV drone footage. The recall rate of 72.1% shows a good balance between detection and false negatives. These metrics are critical for ensuring that UAV drones can reliably spot early signs of wildfires, even in complex environments like dense forests or mountainous regions. The parameter count and GFLOPs are manageable for embedded systems, allowing real-time inference on onboard processors or edge devices. This efficiency is key for UAV drones, which often have limited power and computational resources.
The overall fire detection algorithm workflow on UAV drones involves several steps. First, the UAV drone captures video frames during autonomous flight. Each frame is preprocessed (e.g., resizing, normalization) and fed into the YOLOv8s model for inference. The model outputs bounding boxes with confidence scores for fire and smoke classes. These detections are then filtered using a threshold (e.g., confidence > 0.5) to reduce false positives. For each confirmed detection, the system extracts geographic coordinates from the GPS module and timestamps the event. This data is transmitted to a ground station via wireless communication, enabling quick response teams to locate and assess the fire. The process can be summarized in a mathematical formulation: let $I_t$ be an image frame at time $t$, and $D(I_t)$ be the detection function. The output is a set of bounding boxes $B = \{b_1, b_2, …, b_n\}$ with associated classes and confidences. The system’s reliability $R$ over a mission duration $T$ can be expressed as:
$$ R = \frac{1}{T} \int_0^T \alpha \cdot P_{detect}(t) \, dt $$
where $\alpha$ is a weighting factor for environmental conditions, and $P_{detect}(t)$ is the probability of correct detection at time $t$. This highlights the importance of continuous monitoring by UAV drones to maximize early detection rates.
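The filtering and geotagging steps of the workflow can be sketched as follows. This is a hypothetical post-processing routine, not the deployed code: the detection dictionaries, GPS fix, and threshold value mirror the description above but are mocked for illustration.

```python
# Hypothetical per-frame post-processing: keep confident fire/smoke detections,
# then attach GPS coordinates and a timestamp for transmission to the ground station.
import time

CONF_THRESHOLD = 0.5  # confidence cutoff described in the text

def process_detections(detections, gps_fix, timestamp):
    """Filter model outputs and geotag confirmed fire/smoke events."""
    alerts = []
    for det in detections:
        if det["cls"] in ("fire", "smoke") and det["confidence"] > CONF_THRESHOLD:
            alerts.append({
                "cls": det["cls"],
                "confidence": det["confidence"],
                "bbox": det["bbox"],
                "lat": gps_fix[0],
                "lon": gps_fix[1],
                "time": timestamp,
            })
    return alerts

# Mocked YOLOv8s output for a single frame
frame_detections = [
    {"cls": "smoke", "confidence": 0.71, "bbox": (120, 40, 200, 110)},
    {"cls": "fire",  "confidence": 0.43, "bbox": (300, 220, 330, 250)},  # rejected
]
alerts = process_detections(frame_detections,
                            gps_fix=(45.1234, 7.5678),  # illustrative coordinates
                            timestamp=time.time())
```

Only the 0.71-confidence smoke detection survives the threshold; it is packaged with position and time so that response teams can locate the event.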
Experimental validation was conducted in simulated and real-world environments to test the embedded UAV drone system. I deployed multiple UAV drones in forested areas, programming them to follow predefined waypoints while streaming video to a base station. The detection algorithm ran on a companion computer onboard the UAV drone, though for resource efficiency, lighter models can be used. Results showed that the system could identify flames and smoke at distances up to 500 meters with high accuracy. In comparison tests, the modified YOLOv8s outperformed other models, particularly in detecting small targets, as evidenced by the higher mAP scores. For instance, in a scenario with low-contrast smoke against a cloudy sky, YOLOv8s achieved a confidence score of 0.71, while YOLOv5s only managed 0.53. This demonstrates the advantage of using advanced deep learning on UAV drones for nuanced fire detection.
To further analyze system performance, I derived key metrics using confusion matrix calculations. Let TP, FP, TN, FN represent true positives, false positives, true negatives, and false negatives, respectively. Precision and recall are computed as:
$$ Precision = \frac{TP}{TP + FP}, \quad Recall = \frac{TP}{TP + FN} $$
For the UAV drone system, average precision across multiple test runs was 0.859, indicating minimal false alarms. The F1-score, a harmonic mean of precision and recall, is given by:
$$ F1 = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall} $$
The system achieved an F1-score of 0.785, showcasing a good balance between detection accuracy and completeness. These results underscore the effectiveness of embedding YOLOv8s into UAV drones for wildfire surveillance.
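The metric formulas above translate directly into code. The TP/FP/FN counts below are hypothetical (the raw counts were not reported); they are chosen to reproduce a precision of 0.859 and a recall near 0.721, from which the F1-score follows.

```python
# Precision, recall, and F1 from confusion-matrix counts.
# The counts are hypothetical, picked to mirror the reported precision/recall.

def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and their harmonic mean (F1)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=859, fp=141, fn=332)
# p = 0.859, r ~ 0.721, f1 ~ 0.78
```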
The integration of embedded technology also extends to power management and communication reliability. UAV drones are equipped with lithium-polymer batteries, and their flight time is a critical factor. I optimized the embedded software to reduce power consumption by putting non-essential components into sleep modes during idle periods. The energy consumption $E$ for a mission can be modeled as:
$$ E = P_{av} \cdot t_{flight} + E_{proc} $$
where $P_{av}$ is the average power draw from motors and sensors, $t_{flight}$ is flight duration, and $E_{proc}$ is energy used for processing. With efficient design, my UAV drones can achieve up to 30 minutes of flight time, sufficient for covering large areas. Communication modules like LoRa ensure long-range data transmission with low power, enabling real-time alerts even in remote locations where cellular networks are unavailable. This makes UAV drones versatile tools for fire monitoring in diverse terrains.
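The energy model $E = P_{av} \cdot t_{flight} + E_{proc}$ can be inverted to estimate achievable flight time from a battery budget. The figures below are assumptions for illustration (a roughly 74 Wh battery, 120 W average draw, 3 Wh of processing energy, 20% safety reserve); real values depend on the airframe, payload, and wind.

```python
# Sketch of the mission energy model E = P_av * t_flight + E_proc,
# plus the inverse: flight time supported by a given battery.
# All numbers are illustrative assumptions, not measured values.

def mission_energy_wh(p_av_w, t_flight_min, e_proc_wh):
    """Total mission energy in watt-hours."""
    return p_av_w * (t_flight_min / 60.0) + e_proc_wh

def max_flight_minutes(battery_wh, p_av_w, e_proc_wh, reserve=0.2):
    """Flight time the battery supports, keeping a landing reserve."""
    usable = battery_wh * (1.0 - reserve)
    return (usable - e_proc_wh) / p_av_w * 60.0

# Example: ~74 Wh LiPo pack, 120 W average draw, 3 Wh for onboard compute
t = max_flight_minutes(battery_wh=74.0, p_av_w=120.0, e_proc_wh=3.0)  # ~28 min
```

Under these assumed figures the estimate lands near the 30-minute endurance quoted above, which is why sleep-moding non-essential components (reducing $P_{av}$ and $E_{proc}$) directly extends patrol time.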
Looking ahead, future work will focus on enhancing the autonomy and robustness of UAV drones for wildfire detection. I plan to integrate obstacle avoidance systems using sensors like LiDAR or stereo cameras, allowing UAV drones to navigate safely in dense forests. Additionally, swarm coordination of multiple UAV drones could be implemented to expand coverage and redundancy. The control algorithm for a swarm can be described using potential fields or consensus protocols, where each UAV drone adjusts its position based on neighbors’ states. For example, the desired velocity $v_d$ for a UAV drone in a swarm might be:
$$ v_d = \sum_{j \in N_i} f( \| x_i - x_j \| ) \cdot (x_j - x_i) $$
where $N_i$ is the set of neighboring UAV drones, $x_i$ and $x_j$ are positions, and $f$ is a repulsive/attractive function. This would enable collaborative monitoring, improving detection probability and response times. Furthermore, incorporating weather data and predictive analytics could help UAV drones anticipate fire spread, providing valuable insights for firefighters.
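A toy version of this velocity rule is sketched below. The interaction function $f$ and the preferred spacing are illustrative choices (one of many valid potential-field designs), not parameters from this study: $f$ is negative (repulsive) inside the desired spacing and positive (attractive) beyond it.

```python
# Toy sketch of the swarm rule v_d = sum_j f(||x_i - x_j||) * (x_j - x_i).
# The interaction function f and spacing D_DESIRED are illustrative assumptions.
import math

D_DESIRED = 10.0  # preferred inter-drone spacing in meters (assumed)

def f(dist):
    """Repulsive (negative) inside the desired spacing, attractive beyond it."""
    return (dist - D_DESIRED) / dist

def desired_velocity(x_i, neighbors):
    """Sum pairwise interaction terms over neighboring drone positions."""
    vx, vy = 0.0, 0.0
    for x_j in neighbors:
        dx, dy = x_j[0] - x_i[0], x_j[1] - x_i[1]
        dist = math.hypot(dx, dy)
        w = f(dist)
        vx += w * dx
        vy += w * dy
    return vx, vy

# A drone 4 m from a neighbor is pushed away; one 20 m away is pulled closer.
v_close = desired_velocity((0.0, 0.0), [(4.0, 0.0)])   # x-velocity < 0 (away)
v_far = desired_velocity((0.0, 0.0), [(20.0, 0.0)])    # x-velocity > 0 (toward)
```

With many drones iterating this rule, the swarm relaxes toward roughly uniform spacing, which is the coverage-spreading behavior the consensus formulation aims for.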
In conclusion, this study presents a robust framework for using embedded UAV drones in wildfire detection. By combining efficient hardware design, real-time software, and advanced deep learning models like YOLOv8s, the system achieves high detection accuracy and reliability. The embedded technology allows UAV drones to operate autonomously in challenging environments, transmitting critical data to support emergency responses. As wildfires continue to pose global threats, the adoption of UAV drones equipped with embedded systems offers a scalable and cost-effective solution. Future advancements in AI and robotics will further elevate the capabilities of UAV drones, making them indispensable tools in forest conservation and disaster management.
Throughout this research, UAV drones have been emphasized to reflect their integral role. From hardware components to algorithmic processing, every aspect is tailored to enhance the performance of UAV drones in fire surveillance. I believe that continued innovation in embedded systems will drive the next generation of smart UAV drones, capable not only of detecting fires but also of helping prevent them through proactive monitoring. The journey of integrating technology with environmental stewardship is ongoing, and UAV drones stand at the forefront of this endeavor.
