Research on Multi-UAV Drone Target Collaborative Search Control under Dynamic Bayesian Network

In the field of modern aerial robotics, the collaborative search and control of multiple unmanned aerial vehicles (UAV drones) have garnered significant attention due to their potential in applications such as surveillance, disaster response, and environmental monitoring. As a researcher deeply involved in this domain, I have explored the integration of dynamic Bayesian networks to enhance the efficiency and reliability of multi-UAV drone target search operations. This article delves into the methodologies, experimental analyses, and outcomes of our approach, emphasizing the use of formulas and tables to summarize key aspects. The core objective is to address challenges like target loss in sparse-feature environments and collision risks during collaborative searches by leveraging probabilistic modeling and adaptive control strategies.

The increasing deployment of UAV drones in complex scenarios necessitates advanced coordination mechanisms. Traditional single-UAV drone systems often struggle with coverage limitations and environmental uncertainties, leading to inefficiencies in target detection and tracking. By employing a fleet of UAV drones, we can distribute tasks, optimize resource utilization, and improve search accuracy through shared information and synchronized actions. However, this introduces complexities in task allocation, path planning, and real-time control, especially in dynamic settings where targets move and environments change. Our research focuses on a dynamic Bayesian network-based framework to mitigate these issues, enabling robust target localization and collaborative search control for multiple UAV drones. The integration of high-resolution imaging, motion compensation algorithms, and probabilistic inference forms the backbone of our methodology, as detailed in the following sections.

To begin with, the localization of targets using UAV drones is critical for successful search missions. We equip each UAV drone with high-resolution image sensors to capture target images in real-time. The initial image acquisition can be represented as:

$$I(u,v) = \kappa_c(u,v) f,$$

where $\kappa_c$ denotes the image capture coefficient, $f$ is the focal length, and $(u,v)$ represents the pixel coordinates of the target. Due to the motion of both the UAV drone and the target, images often suffer from motion blur, which we compensate for using the Lucy-Richardson algorithm. This iterative deconvolution technique estimates the original sharp image by modeling the blur as a convolution with a known kernel. The compensation process is quantified as:

$$I_{k+1}(u,v) = I_k(u,v) \cdot \left[ M^T \otimes \frac{I(u,v)}{M \otimes I_k(u,v)} \right],$$

where $M$ is the blur kernel, $M^T$ is its flipped (adjoint) kernel, $I_k(u,v)$ is the estimated image at iteration $k$, $\otimes$ denotes convolution, and the division and the outer multiplication are element-wise. After compensation, we apply frequency-domain filtering and histogram equalization to enhance image quality, resulting in a processed image $I'(u,v)$. This preprocessing ensures that subsequent feature extraction is accurate, which is vital for the UAV drone’s target detection capabilities.
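As an illustration, the Richardson-Lucy update can be sketched in a few lines of NumPy/SciPy. The flat initial guess, kernel, and iteration count below are illustrative assumptions, not the parameters used in our experiments:

```python
import numpy as np
from scipy.signal import convolve2d

def richardson_lucy(blurred, kernel, iterations=30, eps=1e-12):
    """Iteratively estimate the sharp image I from a blurred observation."""
    estimate = np.full_like(blurred, 0.5)   # flat initial guess I_0 (assumption)
    kernel_t = kernel[::-1, ::-1]           # flipped kernel plays the role of M^T
    for _ in range(iterations):
        reblurred = convolve2d(estimate, kernel, mode="same")    # M * I_k
        ratio = blurred / (reblurred + eps)                      # I / (M * I_k)
        estimate *= convolve2d(ratio, kernel_t, mode="same")     # times M^T * ratio
    return estimate
```

With a noise-free observation and a normalized kernel, the estimate progressively re-concentrates blurred energy back onto the original structure.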

Next, we construct a dynamic Bayesian network to model the temporal evolution of target features. The network incorporates hidden state variables $W$ and observation variables $Y_q = I'(u,v)$ at each time step, with learning parameters defined by initial state probabilities $P(w_0)$, state transition probabilities $P(w_q \mid w_{q-1})$, and observation probabilities $P(Y_q \mid w_q)$. The joint probability over time steps $q = 0$ to $Q-1$ is given by:

$$P(Y_{0:Q-1}, W_{0:Q-1}) = P(w_0) \prod_{q=1}^{Q-1} P(w_q \mid w_{q-1}) \prod_{q=0}^{Q-1} P(Y_q \mid w_q).$$
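To make the factorization concrete, the sketch below evaluates this kind of joint probability for a small discrete network; the two-state transition and observation matrices in the example are hypothetical placeholders:

```python
import numpy as np

def joint_probability(pi, A, B, states, obs):
    """Joint probability of a hidden-state path and its observations:
    P(w_0) * prod_q P(w_q | w_{q-1}) * prod_q P(Y_q | w_q)."""
    p = pi[states[0]] * B[states[0], obs[0]]
    for q in range(1, len(states)):
        p *= A[states[q - 1], states[q]] * B[states[q], obs[q]]
    return p
```

In practice the hidden path is not known, so inference sums (or maximizes) this quantity over paths, e.g. with the forward or Viterbi recursion.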

From this network, we extract shape and movement features of targets. The shape feature $\tau_B$ combines compactness and aspect ratio:

$$\tau_B = \frac{4\pi S_m}{C_m^2} \times \frac{x_{\text{max}} - x_{\text{min}}}{y_{\text{max}} - y_{\text{min}}},$$

where $S_m$ is the target area, $C_m$ is the perimeter, and $(x_{\text{min}}, x_{\text{max}}, y_{\text{min}}, y_{\text{max}})$ are the bounding box coordinates. The overall image feature $\tau_I$ is derived by adding shape features and prior information $P(I)$. Movement features $\tau_{\text{move}}$ are obtained by comparing features between consecutive images:

$$\tau_{\text{move}} = \tau_I - \tau_{I-1}.$$

Feature matching is then performed to localize targets. The matching score $s$ is computed as:

$$s = \frac{\tau_I \cdot \tau_B}{\|\tau_I\| \cdot \|\tau_B\|},$$

and if $s$ exceeds a threshold $s_0$, the target is considered present. This approach allows each UAV drone to accurately identify and track multiple targets, even in feature-sparse environments, by continuously updating the dynamic Bayesian network with new observations. The robustness of this method is key for collaborative operations involving multiple UAV drones.
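The shape descriptor and the cosine matching score can be sketched as follows; the use of the standard isoperimetric compactness $4\pi S_m / C_m^2$ and the example values are assumptions for illustration:

```python
import numpy as np

def shape_feature(area, perimeter, bbox):
    """tau_B-style descriptor: isoperimetric compactness (4*pi*S / C^2)
    scaled by the bounding-box aspect ratio."""
    x_min, x_max, y_min, y_max = bbox
    compactness = 4.0 * np.pi * area / perimeter ** 2
    aspect = (x_max - x_min) / (y_max - y_min)
    return compactness * aspect

def match_score(tau_i, tau_b):
    """Cosine similarity s between two feature vectors; a target is declared
    present when s exceeds the threshold s_0."""
    return float(np.dot(tau_i, tau_b) / (np.linalg.norm(tau_i) * np.linalg.norm(tau_b)))
```

For a unit circle (area $\pi$, perimeter $2\pi$, square bounding box) the descriptor evaluates to 1, the maximum compactness.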

Once targets are localized, we proceed to multi-UAV drone collaborative search path planning. We construct a grid map based on target positions and environmental image data. The grid is indexed by coordinates, with each cell assigned a unique identifier $h$:

$$h = x + (y-1) \times n_x,$$

where $n_x$ and $n_y$ are the numbers of grid cells horizontally and vertically, and each cell measures $l_x \times l_y$. The grid state $z_h(t)$ is updated over time $t$ to reflect environmental changes:

$$z_h(t) = (1-\mu) \cdot z_h(t-1) + \mu \cdot s \cdot z_{\text{UAV}}(t),$$

with $\mu$ as the receptive field coverage and $z_{\text{UAV}}(t)$ as the UAV drone’s sensed features. This dynamic mapping enables adaptive path planning for the fleet of UAV drones.
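A minimal sketch of the grid bookkeeping, assuming a row-major cell identifier and an exponential-smoothing state update weighted by the coverage $\mu$ and matching score $s$:

```python
def cell_id(x, y, n_x):
    """Row-major identifier for 1-indexed cell (x, y) on a grid n_x cells wide."""
    return x + (y - 1) * n_x

def update_cell(z_prev, z_sensed, mu, s=1.0):
    """Blend the previous cell state with freshly sensed features,
    weighted by receptive-field coverage mu and matching score s."""
    return (1.0 - mu) * z_prev + mu * s * z_sensed
```

A small $\mu$ keeps the map stable under noisy observations, while a large $\mu$ tracks fast environmental change.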

Target allocation among UAV drones is optimized using priority calculations. The probability $P(i,j)$ that target $j$ is within the search radius $R_{\text{UAV}}$ of UAV drone $i$ is:

$$P(i,j) = \begin{cases}
1, & \text{if } \sqrt{(x_{\text{center}}(i) - x_{\text{target}}(j))^2 + (y_{\text{center}}(i) - y_{\text{target}}(j))^2} \leq R_{\text{UAV}} \\
0, & \text{otherwise}
\end{cases},$$

where $(x_{\text{center}}(i), y_{\text{center}}(i))$ is the center of UAV drone $i$’s field of view, and $(x_{\text{target}}(j), y_{\text{target}}(j))$ is target $j$’s location. The search priority $\chi$ for each target considers urgency $\delta(i)$, value $\psi(i)$, distance $d(i)$, and mobility $E(i)$:

$$\chi = \omega_1 \cdot \delta(i) + \omega_2 \cdot \psi(i) + \omega_3 \cdot d(i) + \omega_4 \cdot E(i), \quad \text{for } P(i,j)=1,$$

with weight factors $\omega_1, \omega_2, \omega_3, \omega_4$. The allocation probability $P_{ij}$ is then computed as:

$$P_{ij} = \frac{z_h P_{i1}}{\sum_{k=1}^N P_{k1}} \cdot \frac{\chi}{C_{ij}^t},$$

where $P_{i1}$ is the target update probability, and $C_{ij}^t$ is the time cost for the UAV drone to reach the target. This prioritization ensures efficient task distribution, minimizing overlaps and maximizing coverage for the multi-UAV drone system.
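The range test and the weighted priority can be sketched as below; the weight values are hypothetical placeholders, not the tuned $\omega_1,\dots,\omega_4$ from our experiments:

```python
import math

def in_range(center, target, radius):
    """P(i, j): 1 if target j lies within the search radius of UAV drone i."""
    return 1 if math.dist(center, target) <= radius else 0

def priority(urgency, value, distance, mobility, weights=(0.4, 0.3, 0.2, 0.1)):
    """chi: weighted sum of urgency, value, distance, and mobility terms
    (weights here are illustrative assumptions)."""
    w1, w2, w3, w4 = weights
    return w1 * urgency + w2 * value + w3 * distance + w4 * mobility
```

Only targets with `in_range(...) == 1` receive a priority score, mirroring the $P(i,j)=1$ condition above.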

Path planning uses allocated targets as key waypoints. The initial search route $L_0$ from the UAV drone’s starting position $(x_{\text{UAV-0}}, y_{\text{UAV-0}})$ to target $j$ is:

$$L_0(x) = \frac{(y_{\text{target}}(j) - y_{\text{UAV-0}})(x - x_{\text{UAV-0}})}{x_{\text{target}}(j) - x_{\text{UAV-0}}} + y_{\text{UAV-0}},$$

which is the straight line from the starting position to target $j$.

For multiple targets, segments are connected sequentially based on proximity, forming a continuous path. This collaborative route planning reduces redundancy and enhances the search efficiency of the UAV drone fleet.
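The sequential connection by proximity amounts to a greedy nearest-neighbour ordering, which can be sketched as:

```python
import math

def order_waypoints(start, targets):
    """Greedily order allocated targets into one route: from the current
    position, always visit the nearest remaining target next."""
    route, pos, remaining = [], start, list(targets)
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(pos, t))
        route.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return route
```

Greedy ordering is not globally optimal, but it is cheap enough to re-run online as the grid map and allocations change.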

The final step involves multi-UAV drone target collaborative search control implementation. We design controllers to adjust flight trajectories and avoid collisions. The distance between any two UAV drone path nodes is calculated as:

$$d(l(i), l(j)) = \sqrt{(l_x(i) - l_x(j))^2 + (l_y(i) - l_y(j))^2},$$

and if $d(l(i), l(j))$ falls to zero (or below a safety margin), indicating a collision risk, the altitude of one UAV drone is adjusted by $\Delta h$ from the initial height $h_0$:

$$z_{\text{adjust}} = h_0 + \Delta h.$$
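A minimal deconfliction sketch, assuming a configurable safety margin `d_safe` (zero by default, matching the coincident-node case in the text):

```python
import math

def deconflict_altitude(node_i, node_j, h0, dh, d_safe=0.0):
    """If two planned path nodes coincide (within d_safe), raise one
    UAV drone's altitude by dh; otherwise keep the initial height h0."""
    d = math.dist(node_i, node_j)
    return h0 + dh if d <= d_safe else h0
```

In a fleet, applying the offset to only one of the two conflicting UAV drones (e.g. the one with the lower ID) keeps the adjustment deterministic.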

Flight attitude control, such as yaw angle adjustment, is governed by:

$$K_\theta = g_\theta \left( \arctan\frac{y(t+1) - y(t)}{x(t+1) - x(t)} - \theta(t) \right),$$

where $g_\theta$ is the control gain, and $\theta(t)$ is the current yaw angle. Similarly, roll and pitch angles are controlled to ensure stable flight. The overall motion update for a UAV drone is:

$$\begin{aligned}
x_m(t+1) &= \tau_{\text{move}} \cdot x_m(t) \\
y_m(t+1) &= \tau_{\text{move}} \cdot y_m(t) \\
z_m(t+1) &= \tau_{\text{move}} \cdot z_{\text{adjust}}(t)
\end{aligned},$$

with $\tau_{\text{move}} = K_\theta + W_\theta + R_\theta$ combining the yaw, roll, and pitch control terms. These controllers enable real-time trajectory adjustments, allowing the UAV drones to collaboratively execute search tasks while avoiding obstacles and maintaining formation. The integration of ultrasonic sensors further aids in obstacle detection, enhancing safety during collaborative missions.
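The proportional yaw correction can be sketched as below; using `atan2` (rather than a raw slope) handles vertical headings and quadrant signs, and the gain value is an illustrative assumption:

```python
import math

def yaw_correction(p_now, p_next, theta_now, gain=0.5):
    """K_theta-style term: proportional correction that steers the current
    yaw angle toward the heading of the next waypoint."""
    desired = math.atan2(p_next[1] - p_now[1], p_next[0] - p_now[0])
    return gain * (desired - theta_now)
```

Roll and pitch corrections follow the same proportional pattern with their own gains.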

To validate our approach, we conducted extensive experiments using EVO MAX and MDCV3 UAV drones in an open area with simulated communication and obstacle settings. Multiple search tasks were defined, as summarized in Table 1.

Table 1: Multi-UAV Drone Target Search Tasks

| Task ID | Target Type | Total Targets | UAV Drone 1 Targets | UAV Drone 2 Targets | UAV Drone 3 Targets | UAV Drone 4 Targets | UAV Drone 5 Targets |
|---|---|---|---|---|---|---|---|
| 1 | Cattle | 34 | 5 | 0 | 11 | 17 | 1 |
| 2 | Sheep | 200 | 15 | 29 | 38 | 42 | 76 |
| 3 | Vehicles | 80 | 22 | 15 | 17 | 21 | 5 |
| 4 | Horses | 150 | 31 | 37 | 44 | 21 | 17 |
| 5 | E-bikes | 120 | 15 | 14 | 18 | 22 | 51 |
| 6 | Pedestrians | 100 | 16 | 24 | 20 | 6 | 34 |
| 7 | Hot-air Balloons | 30 | 5 | 7 | 9 | 4 | 5 |
| 8 | Buildings | 20 | 3 | 5 | 4 | 3 | 5 |

We compared our dynamic Bayesian network-based method with two traditional approaches: a 3D swarm UAV drone parallel multi-target search coordination control method and a GNSS-denied target surveillance multi-UAV drone localization and control method. Performance metrics included target omission rate $\eta$, collision risk coefficient $F$, and collaborative search control success rate $P$, defined as:

$$\eta = \left(1 - \frac{n_{\text{search}}}{n_m}\right) \times 100\%,$$

$$F = \frac{1}{d \kappa_{\text{eff}} + 1} \cdot \frac{\Delta \upsilon}{\upsilon_{\text{max}}},$$

$$P = \frac{p_i}{p_j} \times 100\%,$$

where $n_{\text{search}}$ is the number of targets found, $n_m$ is the total targets, $d$ is inter-UAV drone distance, $\kappa_{\text{eff}}$ is a risk factor, $\Delta \upsilon$ is relative speed, $\upsilon_{\text{max}}$ is maximum speed, $p_i$ is successful task count, and $p_j$ is total tasks.
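The first two metrics translate directly into code; the example values below are illustrative, not measurements from our trials:

```python
def omission_rate(n_found, n_total):
    """eta: percentage of targets that were missed during the search."""
    return (1.0 - n_found / n_total) * 100.0

def collision_risk(d, kappa_eff, dv, v_max):
    """F: risk coefficient that grows with relative speed and shrinks
    with inter-UAV separation d."""
    return (1.0 / (d * kappa_eff + 1.0)) * (dv / v_max)
```

Note that `collision_risk` is bounded by the speed ratio: widely separated, slow-closing UAV drones score near zero.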

The experimental results, as shown in Table 2, demonstrate the superiority of our method in target search performance.

Table 2: Target Search Performance Comparison (Targets Found)

| Task ID | 3D Swarm Method | GNSS-Denied Method | Our Dynamic Bayesian Method |
|---|---|---|---|
| 1 | 30 | 32 | 34 |
| 2 | 193 | 196 | 199 |
| 3 | 73 | 78 | 80 |
| 4 | 142 | 144 | 150 |
| 5 | 111 | 115 | 120 |
| 6 | 94 | 97 | 99 |
| 7 | 20 | 21 | 29 |
| 8 | 14 | 16 | 18 |

The average target omission rates were 8.1% for the 3D swarm method, 3.8% for the GNSS-denied method, and only 0.4% for our dynamic Bayesian network-based approach. This highlights the accuracy of our UAV drone target localization. Regarding collision risk, our method maintained a low coefficient between 0.05 and 0.08, compared to 0.23–0.63 for the 3D swarm method and 0.18–0.56 for the GNSS-denied method, indicating enhanced safety for multi-UAV drone operations. The collaborative search control success rate averaged 58% for our method, significantly higher than the alternatives, as illustrated in Figure 10 (simulated data). These outcomes validate the effectiveness of our framework in improving search coverage, reducing conflicts, and achieving reliable target tracking with a fleet of UAV drones.

In conclusion, our research on multi-UAV drone target collaborative search control under a dynamic Bayesian network offers a robust solution for complex search missions. By integrating probabilistic modeling, adaptive path planning, and real-time control, we address key challenges such as target loss in sparse environments and collision risks. The experimental results confirm that our method achieves low omission rates, minimal collision risks, and high success rates, making it suitable for applications like disaster response, environmental monitoring, and urban surveillance. Future work could extend this framework to incorporate more advanced machine learning techniques or larger-scale UAV drone fleets, further enhancing collaborative capabilities. Overall, this study contributes to the advancement of autonomous UAV drone systems, paving the way for smarter and more efficient aerial operations.
