The breathtaking spectacle of a drone light show, where hundreds or even thousands of unmanned aerial vehicles (UAVs) move in perfect, luminous harmony against the night sky, represents one of the most captivating and technically demanding applications of multi-agent robotics. Ensuring the precise, safe, and reliable execution of these intricate aerial ballets requires advanced control systems capable of managing complex formations while withstanding the inevitable uncertainties of real-world operation. A critical challenge in maintaining such a flawless performance is the potential for actuator failures within individual drones—issues like motor efficiency loss, propeller damage, or complete rotor seizure. These faults, if not properly managed, can lead to erratic movements, disrupt the entire formation’s geometry, and potentially cause catastrophic collisions, ruining the artistic vision and posing safety risks.

Traditional distributed formation control strategies, where each drone coordinates directly with its neighbors, are susceptible to the propagation of such faults through the communication network. An error or failure in one unit can couple with others, leading to a cascading degradation of the entire swarm’s performance, a scenario utterly unacceptable for a high-stakes drone light show. This paper addresses this fundamental limitation by proposing a novel, robust control framework specifically designed to ensure the resilience and precision of quadrotor UAV formations in the presence of actuator faults and external disturbances, directly contributing to the reliability of next-generation drone light shows.
At the core of our proposed solution is a two-layer hierarchical control architecture designed to decouple the formation coordination logic from each drone's fault-prone physical actuation. The upper layer, termed the Virtual Formation Coordination Layer, operates in a purely informational space. It consists of virtual agents that calculate the desired formation trajectory for each physical drone using a distributed consensus protocol combined with a sliding mode algorithm to guarantee finite-time convergence of the formation shape. Crucially, this layer only communicates the final, consistent reference trajectories to the lower layer, completely isolating itself from the physical realities of motor faults or sensor noise.
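As a concrete illustration, one Euler step of the virtual layer might look as follows. This is a minimal sketch under stated assumptions: the communication graph, the leader-pinning term, and the gains `k1`, `k2` and exponent `alpha` are illustrative choices standing in for the paper's exact consensus-plus-sliding-mode protocol.

```python
import numpy as np

def virtual_layer_step(p, offsets, leader_pos, leader_vel, A, dt,
                       k1=2.0, k2=0.5, alpha=0.5):
    """One Euler step of the virtual coordination layer (sketch).

    p          : (N, 2) virtual-agent positions
    offsets    : (N, 2) desired offsets from the virtual leader
    leader_pos : (2,)  virtual leader position
    leader_vel : (2,)  virtual leader velocity (feedforward)
    A          : (N, N) adjacency matrix of the communication graph

    The sign(.)|.|^alpha term is the sliding-mode-like component that
    accelerates convergence of the formation shape; all gains here are
    assumed values, not the paper's.
    """
    e = (p - offsets) - leader_pos            # formation-shape error
    N = p.shape[0]
    dp = np.tile(leader_vel, (N, 1)).astype(float)
    for i in range(N):
        for j in range(N):
            if A[i, j]:
                rel = e[i] - e[j]             # disagreement with neighbour j
                dp[i] -= k1 * rel + k2 * np.sign(rel) * np.abs(rel) ** alpha
        # leader pinning: every virtual agent also tracks the leader
        dp[i] -= k1 * e[i] + k2 * np.sign(e[i]) * np.abs(e[i]) ** alpha
    return p + dt * dp
```

Iterating this step drives each virtual agent toward `leader_pos + offsets[i]`; only the resulting reference trajectories are streamed to the physical drones, so no physical-layer fault can enter this computation.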
The lower layer, the Actual Physical Execution Layer, comprises the real quadrotor drones tasked with executing the drone light show. Each drone independently tracks its assigned trajectory from the virtual layer. To achieve this with high precision and guaranteed transient performance despite faults, we design a sophisticated dual-strategy controller. Recognizing the under-actuated nature of quadrotors—where the desired roll ($\phi$) and pitch ($\theta$) angles are not directly commanded but computed from desired translational motions—we treat the control channels differently. For the channels with known initial state references (position $x$, $y$, $z$, and yaw $\psi$), we develop a Prescribed Performance Backstepping Sliding-Mode Fault-Tolerant Controller (PPBSMFC). This controller uses an exponential performance function to enforce strict, user-defined bounds on the tracking error (e.g., “the position error must stay within 0.1 meters at all times”), employs backstepping for systematic design, and integrates a terminal sliding mode control with a variable-rate reaching law for robustness and fast convergence.
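The exponential performance envelope and the associated error transformation can be illustrated as follows. The log-ratio transform below is a standard prescribed-performance choice used here for concreteness (the paper's exact transform may differ), and the default values $\rho_0 = 0.6$, $\rho_\infty = 0.1$, $\kappa = 3.0$ mirror the $x$-channel settings listed later.

```python
import numpy as np

def rho(t, rho0=0.6, rho_inf=0.1, kappa=3.0):
    """Exponential performance envelope: decays from rho0 to rho_inf
    at rate kappa, i.e. rho(t) = (rho0 - rho_inf) exp(-kappa t) + rho_inf."""
    return (rho0 - rho_inf) * np.exp(-kappa * t) + rho_inf

def transformed_error(e, t, **kw):
    """Map a constrained tracking error |e| < rho(t) to an unconstrained
    variable. The transform blows up as the error approaches the envelope,
    so keeping the transformed variable bounded keeps e inside the funnel.
    (Log-ratio form is a common PPC choice, shown for illustration.)"""
    z = e / rho(t, **kw)
    assert np.all(np.abs(z) < 1), "error escaped the performance envelope"
    return 0.5 * np.log((1 + z) / (1 - z))
```

Enforcing a bound like "the position error must stay within 0.1 m at all times" then amounts to choosing $\rho_\infty = 0.1$ and controlling the transformed variable.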
For the channels with derived, initially unknown references ($\phi$, $\theta$), we design an Initial Condition-Independent Prescribed Performance Backstepping Fault-Tolerant Controller (ICPPBFC). This controller utilizes a transformed error via a hyperbolic tangent function and a novel performance function that does not require knowledge of the initial error, effectively constraining the attitude tracking performance from the very beginning of the maneuver, a vital feature for the rapid, coordinated movements in a drone light show.
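One way to realise such an initial-condition-independent constraint, sketched below, is to squash the raw error through $\tanh(\cdot)$: its range $(-1, 1)$ is contained in any envelope that starts at 1, regardless of $e(0)$. This is an illustrative construction under assumed parameters, not necessarily the paper's exact performance function.

```python
import numpy as np

def rho_ic(t, rho_inf=0.2, kappa=3.0):
    """Envelope starting at 1, the supremum of |tanh|: ANY finite initial
    error satisfies |tanh(e(0))| < rho_ic(0), so no knowledge of e(0)
    is needed (parameter values are assumed)."""
    return (1.0 - rho_inf) * np.exp(-kappa * t) + rho_inf

def ic_free_transformed_error(e, t, rho_inf=0.2, kappa=3.0):
    """Initial-condition-independent transform (illustrative): squash the
    error with tanh, then apply a log-ratio map inside the shrinking
    envelope, as in standard prescribed-performance designs."""
    z = np.tanh(e) / rho_ic(t, rho_inf, kappa)
    assert np.all(np.abs(z) < 1), "attitude constraint violated"
    return 0.5 * np.log((1 + z) / (1 - z))
```

Because the envelope contains the whole range of $\tanh$, even a large, unknown initial attitude error (e.g. $e(0) = 5$) starts strictly inside the constraint.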
To handle the core challenge of actuator faults (modeled as loss of effectiveness and bias faults) compounded with external wind gusts or turbulence—common during outdoor drone light shows—we incorporate an adaptive Radial Basis Function Neural Network (RBFNN) observer into each drone’s controller. This observer produces an online estimate of the “lumped disturbance,” which encapsulates the effects of both the unknown actuator fault and external interference. The estimate is then fed forward into the control law for real-time compensation, significantly enhancing the system’s fault-tolerant capability. The overall control law for, say, the $x$-position channel in the PPBSMFC takes the form:
$$
u_{xi} = m_i \left[ \frac{K_{xi}}{m_i}x_i - D_{di} + \dot{v}_{oxi} - f_{1xi} - c_{0i}f_{2xi}(\lambda_{xi}) - \sigma_{xi} \lambda_{xi} - \frac{r_{xi}}{m_i} \left( \eta_{xi} \Xi(s_{xi}) |s_{xi}|^{\Delta_{xi}} + \epsilon_{xi} \cdot g(s_{xi}) \cdot \text{sign}(s_{xi}) \right) \right]
$$
where $D_{di} = \hat{\mathbf{w}}_{xi}^T \mathbf{h}_{xi} + \hat{v}_{dxi} \tanh(s_{xi}/b)$ is the RBFNN estimate of the lumped disturbance, $\lambda_{xi}$ is the transformed error constrained by the performance function $\rho_{xi}(t)$, $s_{xi}$ is the terminal sliding surface, and $\Xi(s_{xi}), g(s_{xi})$ are terms from the variable-rate reaching law.
The adaptive laws for the RBFNN weights and robust term are designed using Lyapunov stability theory to ensure global boundedness:
$$
\begin{aligned}
\dot{\hat{\mathbf{w}}}_{xi} &= \eta_1 s_{xi} \mathbf{h}_{xi} \\
\dot{\hat{v}}_{dxi} &= \eta_2 s_{xi} \tanh(s_{xi}/b)
\end{aligned}
$$
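The estimate $D_{di} = \hat{\mathbf{w}}_{xi}^T \mathbf{h}_{xi} + \hat{v}_{dxi} \tanh(s_{xi}/b)$ and the two adaptive laws above can be sketched as an Euler-discretised observer for a single channel. The Gaussian basis centers, widths, and gain values below are illustrative assumptions; only the update equations themselves come from the text.

```python
import numpy as np

class RBFNNObserver:
    """Per-channel adaptive RBFNN lumped-disturbance observer (sketch).

    Estimate:      D_hat = w^T h(x) + v_hat * tanh(s / b)
    Adaptive laws (Euler-discretised):
        w_dot     = eta1 * s * h(x)
        v_hat_dot = eta2 * s * tanh(s / b)
    Centers, width, and gains are assumed values for illustration.
    """
    def __init__(self, centers, width=1.0, eta1=5.0, eta2=1.0, b=0.1):
        self.c = np.asarray(centers, float)   # (n_neurons,) RBF centers
        self.width, self.eta1, self.eta2, self.b = width, eta1, eta2, b
        self.w = np.zeros(len(self.c))        # RBF weight estimates
        self.v_hat = 0.0                      # robust-term gain estimate

    def basis(self, x):
        """Gaussian radial basis vector h(x)."""
        return np.exp(-((x - self.c) ** 2) / (2 * self.width ** 2))

    def estimate(self, x, s):
        """Current lumped-disturbance estimate D_hat."""
        return self.w @ self.basis(x) + self.v_hat * np.tanh(s / self.b)

    def update(self, x, s, dt):
        """One Euler step of the adaptive laws, driven by the sliding
        variable s (adaptation stops as s -> 0)."""
        h = self.basis(x)
        self.w += dt * self.eta1 * s * h
        self.v_hat += dt * self.eta2 * s * np.tanh(s / self.b)
```

With only a handful of neurons (e.g. 5 centers), both `estimate` and `update` are a few vector operations per control cycle, consistent with the $O(1)$ per-drone cost discussed below.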
Table 1: Quadrotor model parameters used in the simulations.

| Parameter | Symbol | Value | Unit |
|---|---|---|---|
| Mass | $m_i$ | 2.0 | kg |
| Gravity | $g$ | 9.8 | m/s² |
| Arm length | $l$ | 0.2 | m |
| Rotor inertia | $J_{\phi,\theta,\psi}$ | 0.01 | kg·m² |
| Drag coefficients | $K_{x,y,z,\phi,\theta,\psi}$ | 0.01 | N·s/m or N·m·s/rad |
| Body inertia ($x$, $y$) | $I_{xi}, I_{yi}$ | 1.25 | kg·m² |
| Body inertia ($z$) | $I_{zi}$ | 2.50 | kg·m² |
The hierarchical structure offers a key advantage: computational scalability essential for large-scale drone light shows. The virtual layer’s complexity is linear, $O(N)$, with the number of drones $N$. In the physical layer, each drone’s controller and observer run independently. The RBFNN observer, requiring only a small number of neurons (e.g., 5), has constant complexity $O(1)$ per drone. Therefore, the overall algorithm scales as $O(N)$, a significant improvement over traditional distributed approaches that can scale as $O(N^2)$ due to all-to-all or dense neighbor state calculations. This makes the framework suitable for coordinating the hundreds of drones involved in a commercial drone light show.
We validate the proposed framework through comprehensive numerical simulations in MATLAB/Simulink, modeling a formation of four quadrotor UAVs. The desired trajectory is a helical ascent, a common pattern in drone light shows: $[x_d, y_d, z_d, \psi_d] = [\sin(\pi t/5), \cos(\pi t/5), 0.2t, \pi/2]$. The formation maintains a square shape around a virtual leader. Two critical test scenarios are examined.
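The reference trajectory above and the per-drone formation references can be generated directly; the helical formula is the one stated in the text, while the half-side length of the square formation is an assumed value for illustration.

```python
import numpy as np

def helical_reference(t):
    """Desired leader trajectory from the simulation study:
    [x_d, y_d, z_d, psi_d] = [sin(pi t/5), cos(pi t/5), 0.2 t, pi/2]."""
    t = np.asarray(t, float)
    return np.stack([np.sin(np.pi * t / 5),
                     np.cos(np.pi * t / 5),
                     0.2 * t,
                     np.full_like(t, np.pi / 2)], axis=-1)

def square_formation_refs(t, half_side=1.0):
    """Per-drone references: four drones hold a square (half-side
    `half_side`, an assumed value) around the virtual leader in x-y."""
    lead = helical_reference(t)                      # leader state at time t
    offs = half_side * np.array([[1, 1], [-1, 1], [-1, -1], [1, -1]], float)
    refs = np.tile(lead, (4, 1))                     # one row per drone
    refs[:, :2] += offs                              # offset x-y only
    return refs
```

In the hierarchical scheme these references would be produced by the virtual layer and streamed to the physical drones as their individual tracking targets.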
Table 2: Controller and reaching-law parameters.

| Parameter Group | Symbol | Value |
|---|---|---|
| Performance function | $\rho_{0x}, \rho_{0y}, \rho_{0z}, \rho_{0\psi}$ | 0.6, 0.8, 1.2, 1.0 |
| | $\rho_{\infty x}, \rho_{\infty y}, \rho_{\infty z}, \rho_{\infty \psi}$ | 0.1, 0.08, 0.05, 0.05 |
| | $\kappa_{x}, \kappa_{y}, \kappa_{z}, \kappa_{\psi}$ | 3.0, 3.5, 3.0, 3.0 |
| | $\sigma_{x}, \sigma_{y}, \sigma_{z}, \sigma_{\psi}$ | 2.0, 2.0, 2.0, 10.0 |
| Sliding surface | $c_{0i}$ | 0.1 |
| | $c_{1i}$ | 3.0 |
| | $c_{2i}$ | 3.0 |
| Reaching law | $\eta_{x,y,z,\psi}$ | 2.0, 2.0, 2.0, 8.0 |
| | $\epsilon_{x,y,z,\psi}$ | 1.0 |
| | $\Delta_{x,y,z,\psi}$ | 0.01 |
| | $\Xi_{x,y,z,\psi}$ | 10.0 |
In the first, ideal scenario (no faults/disturbances), the proposed controller demonstrates superior convergence. The tracking errors for all drones rapidly converge to zero within approximately 1 second, remaining strictly within the predefined performance bounds (e.g., $|e_x(t)| < \rho_x(t)$). In contrast, several benchmark controllers—Robust Global Fast Terminal Sliding Mode Control (RGFTSMC), Adaptive Barrier Function Fast Terminal Sliding Mode Control (ABFTSMC), and Fuzzy Adaptive Sliding Mode Control (FASMC)—exhibit slower convergence (~2 seconds) and, in some cases, noticeable overshoot. This highlights the efficacy of the prescribed performance bound in shaping the transient response, a feature that ensures smooth and predictable movements vital for a visually pleasing drone light show.
The second, more rigorous scenario introduces both external disturbances, $d_i = 0.6\sin(t) + 0.5e^{-0.001t}\cos(t)$, and severe actuator faults. Specifically, at $t = 8\,\mathrm{s}$, UAV1 and UAV3 experience a 40% loss of effectiveness combined with a sinusoidal bias fault $f = 0.1\sin(0.5\pi t)$ in their vertical thrust actuators. At $t = 12\,\mathrm{s}$, additional faults occur in their roll, pitch, and yaw actuators. This tests the core fault-tolerant and disturbance rejection capabilities of the system.
The results are compelling. Under the proposed hierarchical control, the faulty drones (UAV1, UAV3) maintain their tracking performance with only minor, brief deviations, thanks to the rapid and accurate estimation of the lumped disturbance by the RBFNN observer and the robust control law. Most importantly, the non-faulty drones (UAV2, UAV4) continue their flight unaffected; their tracking errors show no degradation when the faults occur in their neighbors. This perfectly illustrates the fault-containment property of the hierarchical architecture: the physical-layer fault is compensated locally and does not propagate through the virtual coordination layer to corrupt the reference trajectories of other units. This is a critical safety feature for a drone light show, preventing a single point of failure from collapsing the entire formation.
The benchmark controllers, which use traditional distributed fault-tolerant strategies, show significant vulnerability. When a fault occurs in one drone, the tracking error of that drone spikes (e.g., $e_z$ exceeds 3.5m). More critically, due to error coupling in their consensus protocol, this fault-induced error propagates, causing the tracking error of the healthy neighboring drones to also increase substantially (e.g., $e_z$ for a healthy drone grows to 0.5m). This cascading effect could lead to visible formation distortion or even collisions in a real drone light show.
Table 3: Average RMSE of the formation tracking errors under ideal and faulty conditions.

| Condition & Axis | RGFTSMC | ABFTSMC | FASMC | Proposed Method |
|---|---|---|---|---|
| Ideal: RMSEx (m) | 0.1258 | 0.1304 | 0.1192 | 0.0294 |
| Ideal: RMSEy (m) | 0.2254 | 0.2900 | 0.2244 | 0.0454 |
| Ideal: RMSEz (m) | 0.1188 | 0.1247 | 0.0849 | 0.0757 |
| Ideal: RMSEψ (rad) | 0.0256 | 0.0369 | 0.0302 | 0.0229 |
| Faulty: RMSEx (m) | 0.1444 | 0.1608 | 0.1368 | 0.0296 |
| Faulty: RMSEy (m) | 0.2545 | 0.3211 | 0.2280 | 0.0454 |
| Faulty: RMSEz (m) | 0.2253 | 0.1953 | 0.1777 | 0.0761 |
| Faulty: RMSEψ (rad) | 0.0258 | 0.0371 | 0.0302 | 0.0229 |
Quantitative analysis using the Average Root Mean Square Error (RMSE) across the formation, as shown in Table 3, solidifies these observations. Under both ideal and faulty conditions, the proposed method achieves the lowest RMSE across all positional ($x$, $y$, $z$) and yaw ($\psi$) axes. For instance, in the faulty scenario, the RMSE for the $x$-position is 0.0296 m, which is only about 21.6% of the best-performing benchmark (FASMC at 0.1368 m). This significant reduction in error translates directly to higher formation precision and visual fidelity in a drone light show. Furthermore, the consistency of our method’s RMSE values between the ideal and faulty scenarios—contrasting with the noticeable increase in the benchmarks’ errors—underscores its exceptional robustness and fault tolerance.
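The percentage comparison quoted above follows directly from the faulty-condition rows of Table 3; a short script reproduces the per-axis ratios against the best benchmark for each axis:

```python
# Faulty-condition RMSE values copied from Table 3.
benchmarks = {            # axis: (RGFTSMC, ABFTSMC, FASMC)
    "x":   (0.1444, 0.1608, 0.1368),
    "y":   (0.2545, 0.3211, 0.2280),
    "z":   (0.2253, 0.1953, 0.1777),
    "psi": (0.0258, 0.0371, 0.0302),
}
proposed = {"x": 0.0296, "y": 0.0454, "z": 0.0761, "psi": 0.0229}

for axis, vals in benchmarks.items():
    # Ratio of the proposed method's RMSE to the best benchmark on this axis.
    ratio = proposed[axis] / min(vals)
    print(f"{axis}: proposed RMSE is {100 * ratio:.1f}% of the best benchmark")
```

For the $x$-axis this yields 21.6%, matching the figure in the text; the margin is largest on the horizontal axes and smallest on yaw, where all controllers perform comparably.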
From an engineering perspective, the proposed framework is highly feasible for real-world drone light show deployment. The control algorithms are based on well-established principles like backstepping and sliding mode control, which have been successfully implemented on embedded flight controllers. The RBFNN observer is computationally lightweight, requiring only a handful of neurons, making it suitable for real-time execution on modern flight control hardware like Pixhawk or similar autopilots. The hierarchical structure simplifies system integration and testing; the virtual coordination layer can run on a ground control station, streaming reference trajectories to the drones, while each drone runs its independent, robust tracking controller. This modularity enhances system reliability and simplifies the management of large fleets, a necessity for professional drone light show operations.
In conclusion, this study presents a comprehensive and robust solution to the critical problem of fault-tolerant formation control for quadrotor UAVs. By introducing a hierarchical architecture that separates coordination from execution, designing dual prescribed-performance backstepping controllers for different state channels, and integrating an adaptive neural network observer for fault and disturbance compensation, the proposed strategy effectively eliminates error coupling and fault propagation. This ensures that even in the event of partial actuator failures—a realistic concern in extended or demanding drone light show performances—the overall formation integrity and tracking accuracy are maintained. The simulation results confirm superior performance in terms of convergence speed, steady-state precision, and most importantly, resilience to faults, validating the framework’s potential to enhance the reliability and safety of large-scale, complex drone light shows and other precision multi-UAV applications.
