The rapid advancement of unmanned aerial vehicle (UAV) technology has seen deployment proliferate across diverse sectors, including precision agriculture, environmental monitoring, and disaster response. As a prominent category, quadrotor unmanned aerial vehicles (QUAVs) are favored for their structural simplicity and operational agility. However, the inherent limitations of a single UAV, such as constrained payload, endurance, and task complexity, become apparent in large-scale or multifaceted missions. To overcome these barriers, the coordination of multiple QUAVs into cohesive formations has emerged as a critical research frontier, enabling enhanced efficiency, robustness, and capability through collaborative operation.

Traditional formation control methodologies for multi-UAV systems, such as finite-time, fixed-time, and predefined-time control, often face trade-offs between convergence speed, control effort, and parameter-tuning complexity. For instance, while finite-time control guarantees convergence within a bounded time, this duration depends on initial conditions and can be prolonged. Fixed-time control decouples convergence time from initial states but ties it intricately to controller parameters, making precise tuning challenging. Predefined-time control allows direct specification of the settling time but may demand excessively large initial control inputs to meet aggressive time constraints, straining the vehicle's actuators. Furthermore, these approaches typically involve continuous or high-frequency updates of control signals, consuming precious computational and communication resources onboard each vehicle.
This paper addresses these challenges by proposing a novel Event-Triggered Two-Level Tandem Game-Theoretic (ET-TLTGT) control algorithm for the formation control of multi-UAV systems. The core idea is to decompose the global formation task into a series of localized strategic interactions modeled as non-cooperative games. The solution seeks control strategies that drive the system to a Nash equilibrium, where no individual QUAV can unilaterally improve its own performance metric (e.g., its position and velocity errors relative to the desired formation and a virtual leader). Crucially, an event-triggered mechanism regulates strategy updates so that they occur only when necessary, significantly conserving the limited onboard resources of each vehicle.
The remainder of this paper is structured as follows. First, the dynamic model for a generic quadrotor UAV is established. Subsequently, the detailed architecture of the ET-TLTGT control algorithm is presented, encompassing the design of the two-level game framework, the derivation of Nash equilibrium strategies, the adaptive parameter tuning mechanism, and the integration of the event-triggered condition. Finally, the effectiveness and feasibility of the proposed approach are validated through comparative numerical simulations and real-world platform tests.
1. Dynamic Modeling of a Quadrotor UAV
We consider a fleet of n quadrotor UAVs. The kinematics and dynamics of the i-th QUAV (i = 1, 2, …, n) are described as follows. The translational kinematics is given by:
$$ \dot{\mathbf{P}}_i = \mathbf{V}_i $$
where $\mathbf{P}_i = [p_{i,x}, p_{i,y}, p_{i,z}]^T \in \mathbb{R}^3$ and $\mathbf{V}_i = [v_{i,x}, v_{i,y}, v_{i,z}]^T \in \mathbb{R}^3$ denote the position and velocity vectors in the inertial frame, respectively.
The translational dynamics, under standard assumptions (rigid body, symmetric structure, negligible aerodynamic moments), is expressed as:
$$
\begin{aligned}
\ddot{p}_{i,x} &= -\frac{k_{i,x}}{m_i} \dot{p}_{i,x} + \frac{T_i}{m_i}(c\phi_i s\theta_i c\psi_i + s\phi_i s\psi_i) \\
\ddot{p}_{i,y} &= -\frac{k_{i,y}}{m_i} \dot{p}_{i,y} + \frac{T_i}{m_i}(c\phi_i s\theta_i s\psi_i - s\phi_i c\psi_i) \\
\ddot{p}_{i,z} &= -\frac{k_{i,z}}{m_i} \dot{p}_{i,z} + \frac{T_i}{m_i}c\phi_i c\theta_i - g
\end{aligned}
$$
Here, $m_i$ is the mass; $T_i$ is the total thrust; $k_i = [k_{i,x}, k_{i,y}, k_{i,z}]^T$ represents the aerodynamic damping coefficients; $g$ is the gravitational acceleration; $\phi_i$, $\theta_i$, and $\psi_i$ are the roll, pitch, and yaw angles, respectively; and $c(\cdot)$ and $s(\cdot)$ denote $\cos(\cdot)$ and $\sin(\cdot)$.
Let $\mathbf{S}_i = [s_{i,x}, s_{i,y}, s_{i,z}]^T$ represent the virtual control input, defined as the projection of the total thrust vector onto the inertial frame axes:
$$
\mathbf{S}_i =
\begin{bmatrix}
T_i(c\phi_i s\theta_i c\psi_i + s\phi_i s\psi_i) \\
T_i(c\phi_i s\theta_i s\psi_i - s\phi_i c\psi_i) \\
T_i c\phi_i c\theta_i
\end{bmatrix}
$$
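As a quick sanity check on this mapping, the projection can be coded directly from the vector above (the function name and the hover example are illustrative, not from the paper):

```python
import math

def thrust_projection(T, phi, theta, psi):
    """Project total thrust T onto the inertial axes (the virtual control S_i)."""
    c, s = math.cos, math.sin
    sx = T * (c(phi) * s(theta) * c(psi) + s(phi) * s(psi))
    sy = T * (c(phi) * s(theta) * s(psi) - s(phi) * c(psi))
    sz = T * c(phi) * c(theta)
    return sx, sy, sz
```

At hover (zero roll and pitch) the entire thrust maps onto the vertical axis, as expected: `thrust_projection(11.0, 0.0, 0.0, 0.0)` returns `(0.0, 0.0, 11.0)`.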
Under the assumption that a constant strategy $\mathbf{S}_i$ is applied over a short time interval $t_s$, the solution to the dynamics yields explicit expressions for velocity and position at time $t_s$:
$$
\mathbf{V}_i(t_s) = \boldsymbol{\kappa}_i (\mathbf{S}_i - m_i g \mathbf{e}_3) + \boldsymbol{\lambda}_i \mathbf{V}_i(t_0) \tag{1}
$$
$$
\mathbf{P}_i(t_s) = \mathbf{P}_i(t_0) + \boldsymbol{\mu}_i (\mathbf{S}_i - m_i g \mathbf{e}_3) - \boldsymbol{\gamma}_i \mathbf{V}_i(t_0) \tag{2}
$$
where $\mathbf{e}_3 = [0,0,1]^T$, $\mathbf{V}_i(t_0)$ and $\mathbf{P}_i(t_0)$ are the initial states, and $\boldsymbol{\kappa}_i$, $\boldsymbol{\lambda}_i$, $\boldsymbol{\mu}_i$, $\boldsymbol{\gamma}_i$ are $3 \times 3$ diagonal matrices with elements for $j \in \{x, y, z\}$:
$$
\kappa_{i,j} = \frac{1 - e^{-\frac{k_{i,j}}{m_i}t_s}}{k_{i,j}}, \quad
\lambda_{i,j} = e^{-\frac{k_{i,j}}{m_i}t_s}
$$
$$
\mu_{i,j} = \frac{t_s}{k_{i,j}} + \frac{m_i}{k_{i,j}^2}(e^{-\frac{k_{i,j}}{m_i}t_s} - 1), \quad
\gamma_{i,j} = \frac{m_i}{k_{i,j}}(e^{-\frac{k_{i,j}}{m_i}t_s} - 1)
$$
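The coefficient formulas and the resulting one-axis predictor can be sketched as follows (helper names are mine; on the z axis the caller passes $m_i g$ as `g_term`, on x and y it passes 0):

```python
import math

def lag_coeffs(k, m, ts):
    """Per-axis diagonal entries kappa, lambda, mu, gamma of Eqs. (1)-(2)."""
    decay = math.exp(-k / m * ts)          # e^{-(k/m) t_s}
    kappa = (1.0 - decay) / k
    lam = decay
    mu = ts / k + m / k**2 * (decay - 1.0)
    gamma = m / k * (decay - 1.0)          # negative; enters Eq. (2) as -gamma
    return kappa, lam, mu, gamma

def predict_axis(p0, v0, s, k, m, ts, g_term=0.0):
    """Predicted (position, velocity) on one axis under a constant input s."""
    kappa, lam, mu, gamma = lag_coeffs(k, m, ts)
    v = kappa * (s - g_term) + lam * v0    # Eq. (1)
    p = p0 + mu * (s - g_term) - gamma * v0  # Eq. (2)
    return p, v
```

With the simulation values ($m_i = 1.121$ kg, $k = 0.01$), a unit net force over one second produces a velocity close to 1 m/s and a displacement close to 0.5 m, matching the near-undamped limit.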
These equations form the predictive model that links a chosen control strategy $\mathbf{S}_i$ to the future state of the UAV, which is fundamental for the game-theoretic optimization.
2. Event-Triggered Two-Level Tandem Game-Theoretic Control
The proposed ET-TLTGT control framework decomposes the complex formation task into a hierarchy of strategic decision-making layers. The first layer involves pairwise games between communicating neighbors to generate intermediate guidance strategies. The second layer involves an internal game within each individual UAV to fuse these strategies into a single, optimal control command. An event-triggering mechanism oversees the entire process to minimize unnecessary computations and communications.
2.1 First-Level Game: Inter-QUAV Strategy Generation
Consider two QUAVs, i and j, that can communicate with each other. The first-level game consists of two parallel sub-games played between this pair.
Sub-Game 1: Position Strategy Game. The objective is to find strategies $\mathbf{S}^p_i$ and $\mathbf{S}^p_j$ that minimize the predicted formation and tracking errors after time $t_s$. We define the cost functions for QUAV i and j, respectively:
$$
\begin{aligned}
J^p_{ij}(\mathbf{S}^p_i, \mathbf{S}^p_j) &= \alpha \| \mathbf{P}_i(t_s) - \mathbf{P}_j(t_s) - \mathbf{P}^d_i + \mathbf{P}^d_j \|^2 \\
&\quad + \beta \| \mathbf{P}_i(t_s) - \mathbf{P}(t_s) - \mathbf{P}^d_i \|^2 \\
J^p_{ji}(\mathbf{S}^p_j, \mathbf{S}^p_i) &= \alpha \| \mathbf{P}_j(t_s) - \mathbf{P}_i(t_s) - \mathbf{P}^d_j + \mathbf{P}^d_i \|^2 \\
&\quad + \beta \| \mathbf{P}_j(t_s) - \mathbf{P}(t_s) - \mathbf{P}^d_j \|^2
\end{aligned}
$$
where $\mathbf{P}^d_i$ and $\mathbf{P}^d_j$ are the desired relative positions in the formation, $\mathbf{P}(t_s)$ is the virtual leader’s predicted position, and $\alpha, \beta \in (0,1)$ with $\alpha + \beta = 1$ are weighting factors balancing formation cohesion and leader tracking.
Substituting the predictive model from Eq. (2), the cost function $J^p_{ij}$ can be decomposed. Taking the x-axis component as an example:
$$
\begin{aligned}
j^p_{ij,x}(s^p_{i,x}, s^p_{j,x}) = & \alpha \big( p_{i,x}(t_0) + \mu_{i,x} s^p_{i,x} - \gamma_{i,x} v_{i,x}(t_0) - p_{j,x}(t_0) \\
& - \mu_{j,x} s^p_{j,x} + \gamma_{j,x} v_{j,x}(t_0) - p^d_{i,x} + p^d_{j,x} \big)^2 \\
+ & \beta \big( p_{i,x}(t_0) + \mu_{i,x} s^p_{i,x} - \gamma_{i,x} v_{i,x}(t_0) - p_x(t_0) \\
& - v_x(t_0)t_s - \frac{1}{2}a_x(t_0)t_s^2 - p^d_{i,x} \big)^2
\end{aligned}
$$
The Nash equilibrium solution $\mathbf{s}^p_x = [s^p_{i,x}, s^p_{j,x}]^T$ for this sub-game satisfies $\partial j^p_{ij,x} / \partial s^p_{i,x} = 0$ and $\partial j^p_{ji,x} / \partial s^p_{j,x} = 0$. This leads to a linear system:
$$
\mathbf{H}^p_x \mathbf{s}^p_x = \mathbf{C}^p_x
$$
where
$$
\mathbf{H}^p_x = \begin{bmatrix}
2\mu_{i,x}^2 & -2\alpha \mu_{i,x}\mu_{j,x} \\
-2\alpha \mu_{i,x}\mu_{j,x} & 2\mu_{j,x}^2
\end{bmatrix}, \quad
\mathbf{C}^p_x = \begin{bmatrix} c^p_{i,x} \\ c^p_{j,x} \end{bmatrix}
$$
Since $|\mathbf{H}^p_x| = 4\beta(2\alpha+\beta)\mu_{i,x}^2\mu_{j,x}^2 > 0$, a unique Nash equilibrium exists. Solving this for all axes yields the optimal position strategies $\mathbf{S}^{p*}_i$ and $\mathbf{S}^{p*}_j$.
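Since each sub-game reduces to a 2×2 linear system, the equilibrium can be obtained in closed form via Cramer's rule. The sketch below builds $\mathbf{H}^p_x$ as printed above and checks the determinant identity numerically; the right-hand side $\mathbf{C}^p_x$ is passed in as data, since its entries collect the constant, state-dependent terms (function names are illustrative):

```python
def solve_nash_2x2(h11, h12, h21, h22, c1, c2):
    """Unique Nash equilibrium of the 2x2 linear game H s = C (Cramer's rule)."""
    det = h11 * h22 - h12 * h21
    if det <= 0.0:
        raise ValueError("positive determinant required for a unique equilibrium")
    s1 = (c1 * h22 - c2 * h12) / det
    s2 = (h11 * c2 - h21 * c1) / det
    return s1, s2

def position_game_matrix(mu_i, mu_j, alpha):
    """Entries of H^p_x; det = 4*beta*(2*alpha+beta)*mu_i^2*mu_j^2 with beta = 1-alpha."""
    off = -2.0 * alpha * mu_i * mu_j
    return 2.0 * mu_i**2, off, off, 2.0 * mu_j**2
```

The same machinery solves the velocity sub-game with $\kappa$ in place of $\mu$, which is why the two levels share one generic solver in this sketch.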
Sub-Game 2: Velocity Strategy Game. Similarly, to align velocities within the formation and with the leader, we define cost functions:
$$
\begin{aligned}
J^v_{ij}(\mathbf{S}^v_i, \mathbf{S}^v_j) &= \alpha \| \mathbf{V}_i(t_s) - \mathbf{V}_j(t_s) \|^2 + \beta \| \mathbf{V}_i(t_s) - \mathbf{V}(t_s) \|^2 \\
J^v_{ji}(\mathbf{S}^v_j, \mathbf{S}^v_i) &= \alpha \| \mathbf{V}_j(t_s) - \mathbf{V}_i(t_s) \|^2 + \beta \| \mathbf{V}_j(t_s) - \mathbf{V}(t_s) \|^2
\end{aligned}
$$
Using the predictive model from Eq. (1) and following a similar derivation, the Nash equilibrium for the x-axis velocity strategy is given by:
$$
\mathbf{H}^v_x \mathbf{s}^v_x = \mathbf{C}^v_x
$$
where $\mathbf{s}^v_x = [s^v_{i,x}, s^v_{j,x}]^T$ and
$$
\mathbf{H}^v_x = \begin{bmatrix}
2\kappa_{i,x}^2 & -2\alpha \kappa_{i,x}\kappa_{j,x} \\
-2\alpha \kappa_{i,x}\kappa_{j,x} & 2\kappa_{j,x}^2
\end{bmatrix}, \quad
\mathbf{C}^v_x = \begin{bmatrix} c^v_{i,x} \\ c^v_{j,x} \end{bmatrix}
$$
Again, $|\mathbf{H}^v_x| = 4\beta(2\alpha+\beta)\kappa_{i,x}^2\kappa_{j,x}^2 > 0$ guarantees a unique solution, providing the optimal velocity strategies $\mathbf{S}^{v*}_i$ and $\mathbf{S}^{v*}_j$.
Each UAV i engages in this first-level game with all neighbors in its communication set $\mathcal{Q}_i$, resulting in two sets of strategy pairs: $\mathcal{S}^p_i = \{\mathbf{S}^p_{i,1}, …, \mathbf{S}^p_{i,n_i}\}$ and $\mathcal{S}^v_i = \{\mathbf{S}^v_{i,1}, …, \mathbf{S}^v_{i,n_i}\}$.
2.2 Second-Level Game: Intra-QUAV Gain Optimization
The second level is an internal game played within each individual UAV. The players in this game are two synthesized control inputs derived from the first-level strategies. First, the strategies in $\mathcal{S}^p_i$ and $\mathcal{S}^v_i$ are fused using a weighted average based on real-time error feedback. Let $e^p_{ik} = \|\mathbf{P}_i - \mathbf{P}_k - \mathbf{P}^d_i + \mathbf{P}^d_k\|$ and $e^v_{ik} = \|\mathbf{V}_i - \mathbf{V}_k\|$ for neighbor $k$. The weight for the strategy from neighbor $k$ is proportional to the inverse of the corresponding error, normalized so that the weights sum to 1.
The aggregated strategies for the position and velocity components are:
$$
\bar{\mathbf{S}}^p_i = \sum_{k \in \mathcal{Q}_i} \text{diag}(\mathbf{w}^p_{ik}) \mathbf{S}^p_{i,k}, \quad
\bar{\mathbf{S}}^v_i = \sum_{k \in \mathcal{Q}_i} \text{diag}(\mathbf{w}^v_{ik}) \mathbf{S}^v_{i,k}
$$
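A minimal sketch of the inverse-error weighting, with a small `eps` added as my own guard against division by zero (not part of the paper's formulation):

```python
def inverse_error_weights(errors, eps=1e-6):
    """Normalized weights proportional to 1/error; eps avoids division by zero."""
    inv = [1.0 / (e + eps) for e in errors]
    total = sum(inv)
    return [w / total for w in inv]

def aggregate(strategies, errors):
    """Weighted average of neighbor strategies on one axis (the bar-S terms)."""
    w = inverse_error_weights(errors)
    return sum(wk * sk for wk, sk in zip(w, strategies))
```

Neighbors with smaller errors dominate the average, so a well-aligned neighbor's suggestion carries more weight than one the UAV is still far from.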
These aggregated strategies, $\bar{\mathbf{S}}^p_i$ and $\bar{\mathbf{S}}^v_i$, become the two “players” in the second-level game. The decision variable for each player is a gain vector that scales its contribution to the final control command. The final control input for QUAV i is proposed as:
$$
\mathbf{S}_i = \text{diag}(\mathbf{w}^p_i) \bar{\mathbf{S}}^p_i + \text{diag}(\mathbf{w}^v_i) \bar{\mathbf{S}}^v_i \tag{3}
$$
where $\mathbf{w}^p_i = [w^p_{i,x}, w^p_{i,y}, w^p_{i,z}]^T$ and $\mathbf{w}^v_i = [w^v_{i,x}, w^v_{i,y}, w^v_{i,z}]^T$ are the gain vectors to be optimized. Player 1 ($\bar{\mathbf{S}}^p_i$) aims to minimize the future position tracking error while preferring its gain to remain near 1 (i.e., not deviating drastically from the aggregated suggestion). Player 2 ($\bar{\mathbf{S}}^v_i$) has the analogous objective for velocity error.
Their respective cost functions for the x-axis components are defined as:
$$
\begin{aligned}
J^v_{i,x}(w^v_{i,x}, w^p_{i,x}) &= \big( V_{i,x}(t_s) - V_x(t_s) \big)^2 + \big( w^v_{i,x} \bar{s}^v_{i,x} - \bar{s}^v_{i,x} \big)^2 \\
J^p_{i,x}(w^p_{i,x}, w^v_{i,x}) &= \big( P_{i,x}(t_s) - P_x(t_s) - P^d_{i,x} \big)^2 + \big( w^p_{i,x} \bar{s}^p_{i,x} - \bar{s}^p_{i,x} \big)^2
\end{aligned}
$$
Substituting the predictive models (Eqs. (1) and (2)) and the control law (Eq. (3)), and setting the partial derivatives to zero ($\partial J^v_{i,x}/\partial w^v_{i,x}=0$, $\partial J^p_{i,x}/\partial w^p_{i,x}=0$), we obtain the Nash equilibrium condition for the gain pair $\mathbf{w}_x = [w^v_{i,x}, w^p_{i,x}]^T$:
$$
\mathbf{H}^w_x \mathbf{w}_x = \mathbf{C}^w_x
$$
where
$$
\mathbf{H}^w_x = \begin{bmatrix}
2[(\kappa_{i,x}\bar{s}^v_{i,x})^2 + (\bar{s}^v_{i,x})^2] & 2\kappa_{i,x}\mu_{i,x}\bar{s}^p_{i,x}\bar{s}^v_{i,x} \\
2\kappa_{i,x}\mu_{i,x}\bar{s}^p_{i,x}\bar{s}^v_{i,x} & 2[(\mu_{i,x}\bar{s}^p_{i,x})^2 + (\bar{s}^p_{i,x})^2]
\end{bmatrix}
$$
The determinant is $|\mathbf{H}^w_x| = 4[(\kappa_{i,x}\bar{s}^v_{i,x})^2 + (\bar{s}^v_{i,x})^2][(\mu_{i,x}\bar{s}^p_{i,x})^2 + (\bar{s}^p_{i,x})^2] - 4(\kappa_{i,x}\mu_{i,x}\bar{s}^p_{i,x}\bar{s}^v_{i,x})^2 > 0$ whenever the aggregated strategies are nonzero, ensuring a unique Nash equilibrium for the optimal gains. Solving this for all axes yields $\mathbf{w}^{p*}_i$ and $\mathbf{w}^{v*}_i$, which are then used in Eq. (3) to compute the final, optimal control strategy $\mathbf{S}^*_i$ for UAV i.
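The gain matrix and its determinant can be checked numerically as follows; expanding the expression above, the determinant simplifies to $4(\kappa_{i,x}^2+\mu_{i,x}^2+1)\,(\bar{s}^p_{i,x})^2(\bar{s}^v_{i,x})^2$, which the sketch verifies (function names are illustrative):

```python
def gain_game_matrix(kappa, mu, sp_bar, sv_bar):
    """Entries of H^w_x as printed in the second-level game."""
    h11 = 2.0 * ((kappa * sv_bar) ** 2 + sv_bar ** 2)
    h22 = 2.0 * ((mu * sp_bar) ** 2 + sp_bar ** 2)
    h12 = 2.0 * kappa * mu * sp_bar * sv_bar
    return h11, h12, h12, h22

def gain_game_det(kappa, mu, sp_bar, sv_bar):
    """Determinant of H^w_x; positive whenever sp_bar and sv_bar are nonzero."""
    h11, h12, h21, h22 = gain_game_matrix(kappa, mu, sp_bar, sv_bar)
    return h11 * h22 - h12 * h21
```

Because the cross terms cancel, positivity depends only on the aggregated strategies being nonzero, not on the values of $\kappa$ and $\mu$.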
2.3 Adaptive Tuning of the Prediction Horizon
The prediction horizon $t_s$ is a critical parameter influencing the aggressiveness and stability of the control. A small $t_s$ leads to short-sighted, reactive control, while a large $t_s$ may cause overshoot. We propose an adaptive law to tune $t_s$ online based on the overall formation error. Let $e^p_{\max}$ be the maximum position error among all communicating QUAV pairs at the current time, and $e^p_0$ be its initial value. The adaptive law is:
$$
t_s = \exp\left( \frac{e^p_{\max} – e^p_0}{e^p_0} \right)
$$
This formulation ensures $t_s \approx 1$ when errors are near their initial levels. As formation errors decrease ($e^p_{\max} < e^p_0$), $t_s$ shrinks below 1, shortening the prediction horizon for fine-grained regulation near the desired formation. If errors increase, $t_s$ grows, lengthening the horizon so that larger corrective maneuvers can be planned. This adaptive mechanism enhances the robustness and performance of the UAV formation across different operational phases.
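In code, the adaptive law is a one-liner (sketch):

```python
import math

def adaptive_horizon(e_max, e0):
    """Adaptive prediction horizon t_s = exp((e_max - e0) / e0)."""
    return math.exp((e_max - e0) / e0)
```

For example, with the error at its initial level the horizon is exactly 1 s; halving the error shrinks it toward 0.6 s, and doubling the error stretches it toward 2.7 s.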
2.4 Event-Triggered Mechanism
To conserve computational and communication resources, the entire two-level game-theoretic optimization is not executed at every control time step. Instead, an event-triggering condition governs when a QUAV needs to recompute its strategy.
For a QUAV i and its neighbor j, we define a combined error metric: $e_{ij} = e^p_{ij} + e^v_{ij}$. Let $\mathcal{E}$ be a predefined error threshold. The event is triggered for the pair (i, j) if and only if:
$$
e_{ij} \geq \mathcal{E} \quad \text{AND} \quad \dot{e}_{ij} > 0
$$
The first condition indicates that the error has exceeded an acceptable level. The second condition, $\dot{e}_{ij} > 0$, ensures triggering only when the error is deteriorating; if the error is large but decreasing (meaning the current strategy is effectively correcting it), no re-computation is needed. When the event is triggered for any neighbor pair involving QUAV i, QUAV i initiates a new two-level game-theoretic optimization cycle to update its control strategy $\mathbf{S}_i$. Otherwise, it holds the previous optimal strategy. This mechanism dramatically reduces the frequency of computationally intensive game solutions while maintaining formation performance, a crucial advantage for resource-constrained UAV platforms.
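A sketch of the trigger test, approximating $\dot{e}_{ij}$ with a backward difference over the sampling period (the discretization is my assumption; the paper states the condition in continuous time):

```python
def should_trigger(e_ij, e_ij_prev, threshold, dt):
    """Trigger when the combined error exceeds the threshold AND is growing.

    e_ij      -- current combined error e^p_ij + e^v_ij
    e_ij_prev -- combined error one sampling period ago
    threshold -- the predefined bound (epsilon in the text)
    dt        -- sampling period used for the backward-difference derivative
    """
    e_dot = (e_ij - e_ij_prev) / dt
    return e_ij >= threshold and e_dot > 0.0
```

With the simulation threshold of 0.05, an error of 0.06 that grew since the last sample triggers a re-optimization, while the same error on a decreasing trend does not.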
3. Simulation, Comparative Analysis, and Experimental Validation
To validate the proposed ET-TLTGT control algorithm, we conducted extensive numerical simulations and real-platform tests on a fleet of quadrotor UAVs.
3.1 Simulation Setup and Parameters
A system of six homogeneous QUAVs was simulated. Their initial states and desired formation offsets relative to a virtual leader are summarized in Table 1. The communication topology was undirected, as shown in Figure 1. The virtual leader followed a 3D Lissajous-like trajectory: $\mathbf{P}(t) = [10\sin(0.1t), 10\cos(0.1t)-10, 0.5t]^T$ m. Key controller parameters are listed in Table 2.
Table 1. Initial states and desired formation offsets of the six QUAVs.

| QUAV ID | Initial Position [m] | Initial Velocity [m/s] | Desired Offset $\mathbf{P}^d_i$ [m] |
|---|---|---|---|
| 1 | [1, 2, 0] | [0, 0, 0] | [1, √3, 0] |
| 2 | [4, 3, 0] | [0, 0, 0] | [2, 0, 0] |
| 3 | [4, -1, 0] | [0, 0, 0] | [1, -√3, 0] |
| 4 | [2, -3, 0] | [0, 0, 0] | [-1, -√3, 0] |
| 5 | [-5, -2, 0] | [0, 0, 0] | [-2, 0, 0] |
| 6 | [-2, 5, 0] | [0, 0, 0] | [-1, √3, 0] |
Table 2. Key controller parameters.

| Parameter | Symbol | Value |
|---|---|---|
| Formation Weight | $\alpha$ | 0.8 |
| Tracking Weight | $\beta$ | 0.2 |
| Event Threshold | $\mathcal{E}$ | 0.05 |
| QUAV Mass | $m_i$ | 1.121 kg |
| Damping Coefficient | $k_{i,j}$ | 0.01 N·s/m |
3.2 Simulation Results and Performance
The 3D trajectories over 40 seconds show that all six UAVs successfully converge to the desired hexagonal formation around the virtual leader's path within approximately 5 seconds and maintain it precisely thereafter. The control inputs for all QUAVs are smooth and bounded. The position and velocity tracking errors for each QUAV converge to near-zero values rapidly, demonstrating the effectiveness of the game-theoretic optimizer in driving the system to the Nash equilibrium (corresponding to perfect formation).
A key performance metric is the event-triggered update rate. The average frequency at which each QUAV recalculated its control strategy (i.e., executed the two-level game) is presented in Table 3. The rates are significantly below 100%, confirming that the event-triggering mechanism successfully reduces computational load without compromising performance.
Table 3. Event-triggered strategy update frequencies in the six-QUAV simulation.

| QUAV ID | Update Frequency |
|---|---|
| 1 | 36.18% |
| 2 | 33.58% |
| 3 | 34.49% |
| 4 | 32.78% |
| 5 | 34.25% |
| 6 | 33.88% |
3.3 Comparative Analysis with Finite-Time Control
To objectively evaluate the advantages of our method, a comparative simulation was conducted against a well-established finite-time formation control (FTFC) algorithm. A 3-QUAV system with a circular leader trajectory was used for both controllers under identical initial conditions and communication topology.
Table 4 summarizes the key performance indicators. The ET-TLTGT controller converges roughly 34.6% faster than the FTFC method. Furthermore, the steady-state formation error (RMSE after convergence) is reduced by over 63%. This demonstrates the superior optimization capability of seeking a Nash equilibrium, which balances all agents' objectives simultaneously, yielding faster and more precise collective behavior. The event-triggered update frequencies for this scenario, shown in Table 5, remained below 44%, underscoring the resource efficiency of the proposed method for UAV swarms.
Table 4. Comparative performance of ET-TLTGT and FTFC.

| Performance Metric | ET-TLTGT (Proposed) | Finite-Time Control (FTFC) | Improvement |
|---|---|---|---|
| Convergence Time [s] | 6.77 | 10.35 | 34.59% faster |
| Steady-State Position Error RMSE [m] | 0.0191 | 0.0521 | 63.34% lower |
Table 5. Update frequencies in the three-QUAV comparative scenario.

| QUAV ID | Update Frequency |
|---|---|
| 1 | 43.49% |
| 2 | 42.86% |
| 3 | 42.11% |
3.4 Real-Platform Experimental Validation
To further substantiate the practical feasibility of the algorithm for real-world multi-UAV applications, an experimental test was conducted using a platform of three identical quadrotor UAVs operating in a controlled indoor environment with a motion capture system. The communication topology and controller parameters (α, β, $\mathcal{E}$) were kept consistent with the simulations. The virtual leader followed the trajectory: $\mathbf{P}(t) = [\sin(0.1t), \cos(0.1t), 0.05t]^T$ m.
The experimental sequence successfully demonstrated the complete workflow: from initial hover, through the formation convergence phase, to stable formation tracking. The physical QUAVs accurately formed the desired triangular pattern and tracked the moving leader. The recorded position and velocity errors converged to and remained within small bounds, mirroring the simulation results. Crucially, the real-world strategy update frequencies were even lower than in simulation (see Table 6), highlighting the algorithm's effectiveness in minimizing onboard computation during stable tracking, a vital feature for extending the flight time of battery-powered UAVs.
Table 6. Update frequencies in the real-platform experiment.

| QUAV ID | Update Frequency |
|---|---|
| 1 | 24.37% |
| 2 | 23.42% |
| 3 | 22.22% |
4. Conclusion
This paper has presented a novel Event-Triggered Two-Level Tandem Game-Theoretic (ET-TLTGT) control algorithm to address the formation control challenge for multi-UAV systems. The proposed method innovatively decomposes the global formation task into a structured sequence of non-cooperative games. In the first level, communicating QUAV pairs engage in parallel games to derive optimal position and velocity correction strategies. In the second level, each QUAV internally plays a fusion game to determine the optimal weighting gains for combining these strategies into a single, Nash equilibrium control command. An adaptive law tunes the prediction horizon online, and an event-triggering mechanism judiciously activates the game-solving process only when necessary.
The comprehensive simulation studies and real-platform experiments demonstrate the algorithm's salient advantages: 1) Rapid and Precise Convergence: The game-theoretic pursuit of Nash equilibrium enables faster formation assembly and higher steady-state accuracy compared to conventional finite-time control. 2) Resource Efficiency: The embedded event-triggering mechanism reduces the frequency of control strategy updates by 56% to 78% across the tested scenarios, significantly saving the computational and communication resources of each UAV. 3) Practical Feasibility: The successful implementation on a real quadrotor testbed confirms the algorithm's readiness for real-world applications.
Future work will focus on extending this framework to handle more dynamic scenarios, such as formation control in the presence of obstacles, adversarial agents, or under switching communication topologies, further advancing the autonomy and resilience of collaborative multi-UAV systems.
