Advanced Control Algorithms for High-Precision Formation Drone Light Shows

The mesmerizing spectacle of a formation drone light show represents one of the most visually stunning applications of modern multi-agent robotics. These aerial ballets, where hundreds or even thousands of unmanned aerial vehicles (UAVs) act in perfect synchrony to create complex, dynamic shapes and animations in the night sky, are a direct testament to advances in distributed control theory. At the heart of every successful formation drone light show lies a critical technological challenge: the precise, robust, and coordinated control of each individual agent within a tightly coupled swarm, despite inherent uncertainties and external disturbances. This article delves into the application of sophisticated nonlinear control strategies, specifically combining Sliding Mode Disturbance Observers (SMDO) with Dynamic Surface Control (DSC), to achieve the high-fidelity performance required for next-generation formation drone light show systems.

1. Core Challenges in Formation Drone Light Show Control

Designing a controller for a formation drone light show is fundamentally different from controlling a single drone or a simple leader-follower pair. The system must guarantee collective behavior where the entire formation’s shape and motion are preserved. Key challenges include:

  • Model Uncertainties & External Disturbances: Each drone’s dynamics are complex and not perfectly known. Manufacturing variances, battery drain affecting mass, and unmodeled aerodynamic effects contribute to model uncertainty. Externally, wind gusts, turbulence, and the downwash from neighboring drones (wake effects) act as significant, persistent disturbances. A formation drone light show controller must be inherently robust to these “lumped uncertain terms.”
  • Communication Topology: Direct communication from a central ground station to every drone is often impractical at scale. The system relies on a distributed communication network where each drone exchanges information only with a subset of neighbors. The controller must function effectively within these topological constraints.
  • Precision and Synchronization: The visual coherence of a formation drone light show demands extreme positional accuracy. Errors propagate through the formation and become glaringly visible as distortions in the intended shapes or animations. The control algorithm must ensure fast convergence and minimal steady-state error for all agents.
  • Computational Efficiency & “Explosion of Terms”: Classical nonlinear control methods like backstepping can lead to “differential explosion,” where computing repeated derivatives of the virtual controls becomes analytically complex and computationally burdensome for real-time implementation on embedded hardware. A practical solution for a large-scale formation drone light show must avoid this pitfall.

2. The Virtual Structure Paradigm for Formation Control

A highly effective framework for managing a formation drone light show is the Virtual Structure method. In this approach, the entire formation is treated as a single, rigid (or deformable) body moving through space. A Virtual Leader or reference point defines the trajectory (position, velocity, attitude) of this virtual structure. Each physical drone is assigned a fixed desired position relative to this moving virtual structure.

Let the virtual leader’s motion be defined in an inertial ground frame as:
$$
\begin{align}
\dot{x}_0 &= v_0 \cos \gamma_0 \cos \chi_0 \\
\dot{y}_0 &= v_0 \cos \gamma_0 \sin \chi_0 \\
\dot{z}_0 &= v_0 \sin \gamma_0
\end{align}
$$
where $(x_0, y_0, z_0)$ is its position, $v_0$ is its speed, $\gamma_0$ is its flight path angle, and $\chi_0$ is its heading. The desired offset for Drone $i$ within the virtual structure (e.g., to form a letter “A” or a geometric pattern) is a constant vector $\mathbf{p}_i^{body} = [p_{i,x_0}, p_{i,y_0}, p_{i,z_0}]^T$ defined in the virtual structure’s body frame. This offset is transformed into the inertial frame using the virtual leader’s orientation:
$$
\begin{bmatrix} p_{i,x} \\ p_{i,y} \\ p_{i,z} \end{bmatrix} = \mathbf{T}(\chi_0, \gamma_0) \begin{bmatrix} p_{i,x_0} \\ p_{i,y_0} \\ p_{i,z_0} \end{bmatrix}.
$$
Thus, the desired trajectory for Drone $i$ is $[x_0, y_0, z_0]^T + [p_{i,x}, p_{i,y}, p_{i,z}]^T$. This method provides a globally consistent shape for the formation drone light show.
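As a concrete sketch, the offset transformation above can be implemented directly. Note that the article leaves $\mathbf{T}(\chi_0, \gamma_0)$ unspecified; a heading rotation about the vertical axis composed with a flight-path rotation is assumed here as one common choice:

```python
import numpy as np

def offset_to_inertial(p_body, chi0, gamma0):
    """Rotate a body-frame formation offset into the inertial frame.

    Assumes T(chi0, gamma0) is a heading (yaw) rotation composed with a
    flight-path (pitch) rotation; the article does not fix this choice.
    """
    c_chi, s_chi = np.cos(chi0), np.sin(chi0)
    c_gam, s_gam = np.cos(gamma0), np.sin(gamma0)
    R_chi = np.array([[c_chi, -s_chi, 0.0],
                      [s_chi,  c_chi, 0.0],
                      [0.0,    0.0,   1.0]])
    R_gam = np.array([[ c_gam, 0.0, s_gam],
                      [ 0.0,   1.0, 0.0],
                      [-s_gam, 0.0, c_gam]])
    return R_chi @ R_gam @ np.asarray(p_body, dtype=float)

def desired_position(leader_pos, p_body, chi0, gamma0):
    """Desired inertial position for drone i: leader position plus rotated offset."""
    return np.asarray(leader_pos, dtype=float) + offset_to_inertial(p_body, chi0, gamma0)
```

With zero heading and flight-path angle the offset passes through unchanged, so the formation shape is exactly the body-frame pattern translated by the virtual leader.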

3. Drone Dynamics with Integrated Uncertainties

We consider a simplified yet representative point-mass kinematic model for a fixed-wing or high-performance multirotor drone, augmented with lumped uncertainty terms. For the $i$-th drone in the formation drone light show:
$$
\begin{aligned}
\dot{x}_i &= v_i \cos \gamma_i \cos \chi_i + \bar{\omega}_{x_i} \\
\dot{y}_i &= v_i \cos \gamma_i \sin \chi_i + \bar{\omega}_{y_i} \\
\dot{z}_i &= v_i \sin \gamma_i + \bar{\omega}_{z_i} \\
\dot{v}_i &= (v_{i_c} - v_i)/\tau_v + \bar{\omega}_{v_i} \\
\dot{\gamma}_i &= (\gamma_{i_c} - \gamma_i)/\tau_{\gamma} + \bar{\omega}_{\gamma_i} \\
\dot{\chi}_i &= (\chi_{i_c} - \chi_i)/\tau_{\chi} + \bar{\omega}_{\chi_i}
\end{aligned}
$$
where $(x_i, y_i, z_i)$ is the position, $v_i$ is the speed, $\gamma_i$ is the flight path angle, and $\chi_i$ is the heading. The control inputs are the commanded speed $v_{i_c}$, flight path angle $\gamma_{i_c}$, and heading $\chi_{i_c}$. The time constants $\tau_v, \tau_{\gamma}, \tau_{\chi}$ represent simplified actuator dynamics. The critical terms $\bar{\omega}_{(\cdot)}$ represent the lumped uncertainties encompassing unmodeled dynamics, parameter variations, and external disturbances—the primary adversaries of a pristine formation drone light show.
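A minimal simulation sketch of this point-mass model is shown below; the time constants and step size are illustrative values (not from the article), and the lumped uncertainty terms default to zero for a nominal run:

```python
import numpy as np

def drone_derivatives(state, cmd, tau=(0.5, 0.8, 0.8), dist=np.zeros(6)):
    """Right-hand side of the point-mass model.

    state = [x, y, z, v, gamma, chi]; cmd = [v_c, gamma_c, chi_c];
    tau = (tau_v, tau_gamma, tau_chi) are illustrative actuator time
    constants; dist holds the lumped uncertainty terms (zero by default).
    """
    x, y, z, v, gamma, chi = state
    v_c, gamma_c, chi_c = cmd
    tau_v, tau_g, tau_c = tau
    return np.array([
        v * np.cos(gamma) * np.cos(chi) + dist[0],
        v * np.cos(gamma) * np.sin(chi) + dist[1],
        v * np.sin(gamma)               + dist[2],
        (v_c - v) / tau_v               + dist[3],
        (gamma_c - gamma) / tau_g       + dist[4],
        (chi_c - chi) / tau_c           + dist[5],
    ])

def euler_step(state, cmd, dt=0.01, **kw):
    """One explicit-Euler integration step of the model."""
    return state + dt * drone_derivatives(state, cmd, **kw)
```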

4. Sliding Mode Disturbance Observer (SMDO) for Uncertainty Estimation

To actively combat uncertainties, a Sliding Mode Disturbance Observer is employed for each channel. SMDOs are renowned for their finite-time convergence and robustness. For instance, to estimate the disturbance $\bar{\omega}_{x_i}$ affecting the $x$-position dynamics of a drone in the formation drone light show, the following observer is constructed:
$$
\begin{aligned}
\dot{\hat{x}}_i &= v_i \cos \gamma_i \cos \chi_i + \upsilon_i \\
\upsilon_i &= -\lambda_1 |\hat{x}_i - x_i|^{1/2} \text{sgn}(\hat{x}_i - x_i) + \hat{\bar{\omega}}_{x_i} \\
\dot{\hat{\bar{\omega}}}_{x_i} &= -\lambda_2 \text{sgn}(\hat{x}_i - x_i)
\end{aligned}
$$
where $\hat{x}_i$ is the estimated position, $\hat{\bar{\omega}}_{x_i}$ is the estimated lumped disturbance, and $\lambda_1, \lambda_2 > 0$ are design gains. The super-twisting structure drives the estimation error to zero in finite time for any disturbance whose rate of change is bounded, yielding an accurate estimate $\hat{\bar{\omega}}_{x_i} \approx \bar{\omega}_{x_i}$. Similar observers run in parallel for $\bar{\omega}_{y_i}, \bar{\omega}_{z_i}, \bar{\omega}_{v_i}, \bar{\omega}_{\gamma_i}, \bar{\omega}_{\chi_i}$. This network of observers acts as a “sensory skin” for the formation drone light show, detecting and quantifying disturbances in real time.
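A per-channel sketch of the observer follows, with illustrative gains and step size; the correction in the disturbance update is written in terms of the position estimation error, which has the same sign as $\hat{\bar{\omega}}_{x_i} - \upsilon_i$:

```python
import numpy as np

class SuperTwistingObserver:
    """Super-twisting disturbance observer for one channel (here: x).

    Gains lam1, lam2 and the step size dt are illustrative tuning
    values, not taken from the article.
    """
    def __init__(self, x0, lam1=4.0, lam2=8.0, dt=0.001):
        self.x_hat = x0        # estimated position
        self.w_hat = 0.0       # estimated lumped disturbance
        self.lam1, self.lam2, self.dt = lam1, lam2, dt

    def update(self, x_meas, nominal_rate):
        """nominal_rate = v_i*cos(gamma_i)*cos(chi_i), the known model part."""
        e = self.x_hat - x_meas
        ups = -self.lam1 * np.sqrt(abs(e)) * np.sign(e) + self.w_hat
        self.x_hat += self.dt * (nominal_rate + ups)
        self.w_hat += self.dt * (-self.lam2 * np.sign(e))
        return self.w_hat
```

Feeding the observer a trajectory corrupted by a constant offset in $\dot{x}$ makes `w_hat` converge to that offset, which is the behavior the estimate-based compensation in the controller relies on.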

5. Dynamic Surface Control (DSC) for Formation Tracking

With disturbance estimates from the SMDO, we design the formation tracking controller using Dynamic Surface Control. DSC circumvents the “differential explosion” of backstepping by introducing first-order low-pass filters for the virtual controls. The control objective is to drive the local formation tracking error to zero. For a drone $i$ communicating with neighbors in set $\mathcal{N}_i$, the $x$-axis formation error is:
$$
e_{x_i} = \sum_{j \in \mathcal{N}_i} a_{ij} \left( (x_i - x_j) - (p_{i,x} - p_{j,x}) \right)
$$
where $a_{ij}$ are adjacency weights. Zeroing $e_{x_i}, e_{y_i}, e_{z_i}$ for all drones ensures perfect formation drone light show geometry.
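For illustration, the local error can be computed directly from neighbor positions and desired offsets; the dense adjacency-matrix representation below is an implementation choice, not something the article prescribes:

```python
def formation_error_x(i, x, p_x, adjacency):
    """Local x-axis formation error e_{x_i} for drone i.

    x and p_x list the current x positions and desired x offsets of all
    drones; adjacency[i][j] = a_ij (0 if j is not a neighbor of i).
    """
    return sum(a_ij * ((x[i] - x[j]) - (p_x[i] - p_x[j]))
               for j, a_ij in enumerate(adjacency[i]) if a_ij > 0)
```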

The DSC design proceeds stepwise for the $x$-channel (similarly for $y$ and $z$):

Step 1: Define the first error surface $S_{1_i} = e_{x_i}$. Its derivative is:
$$
\dot{S}_{1_i} = d_{ii}(v_i \cos\gamma_i \cos\chi_i + \hat{\bar{\omega}}_{x_i}) - \sum_{j \in \mathcal{N}_i} a_{ij}(\dot{x}_j + \dot{p}_{ij,x})
$$
where $d_{ii} = \sum_{j \in \mathcal{N}_i} a_{ij}$ and $p_{ij,x} = p_{i,x} - p_{j,x}$ is the desired relative offset (so $\dot{p}_{ij,x}$ vanishes for a rigid formation). Treat $v_i$ as a virtual control. Design the stabilizing function $\alpha_{v_i}$:
$$
\alpha_{v_i} = \frac{ \sum_{j \in \mathcal{N}_i} a_{ij}(\dot{x}_j + \dot{p}_{ij,x}) - k_{1_i} e_{x_i} - d_{ii} \hat{\bar{\omega}}_{x_i} }{ d_{ii} \cos\gamma_i \cos\chi_i }, \quad k_{1_i} > 0.
$$

Step 2: Instead of directly differentiating $\alpha_{v_i}$, pass it through a low-pass filter to obtain a new state variable $\bar{v}_{i_d}$:
$$
\tau_{v} \dot{\bar{v}}_{i_d} + \bar{v}_{i_d} = \alpha_{v_i}, \quad \bar{v}_{i_d}(0) = \alpha_{v_i}(0).
$$
Define the second error surface $S_{2_i} = v_i – \bar{v}_{i_d}$. Its derivative involves the actual control input $v_{i_c}$:
$$
\dot{S}_{2_i} = -\frac{v_i}{\tau_v} + \frac{v_{i_c}}{\tau_v} + \hat{\bar{\omega}}_{v_i} - \dot{\bar{v}}_{i_d}.
$$
The actual control law for the speed command is then designed as:
$$
v_{i_c} = v_i + \tau_v \left( -k_{2_i} S_{2_i} + \dot{\bar{v}}_{i_d} - \hat{\bar{\omega}}_{v_i} \right), \quad k_{2_i} > 0.
$$
The filter dynamics provide $\dot{\bar{v}}_{i_d} = (\alpha_{v_i} – \bar{v}_{i_d}) / \tau_v$, which is simple to compute, thus avoiding complexity explosion. This two-step process, replicated for the $y$ and $z$ channels using virtual controls for $\sin\chi_i$ and $\sin\gamma_i$, yields the full set of commands $(v_{i_c}, \chi_{i_c}, \gamma_{i_c})$ for each drone in the formation drone light show.
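The two steps above can be sketched as a single update function for the $x$-channel; the gains, filter constant, and step size are illustrative, and the neighbor coupling term $\sum_j a_{ij}(\dot{x}_j + \dot{p}_{ij,x})$ is assumed to be precomputed from communicated data:

```python
import numpy as np

def dsc_speed_command(v_i, gamma_i, chi_i, e_x, d_ii, neighbor_term,
                      w_hat_x, w_hat_v, vbar_d,
                      k1=2.0, k2=3.0, tau_v=0.5, dt=0.01):
    """One update of the x-channel SMDO-DSC loop (Steps 1-2).

    neighbor_term = sum_j a_ij*(xdot_j + pdot_ij_x). Returns the speed
    command v_c and the propagated filter state vbar_d. Gains and
    constants are illustrative.
    """
    # Step 1: stabilizing (virtual) control for the speed channel
    alpha_v = (neighbor_term - k1 * e_x - d_ii * w_hat_x) / (
        d_ii * np.cos(gamma_i) * np.cos(chi_i))
    # First-order low-pass filter in place of analytic differentiation
    vbar_dot = (alpha_v - vbar_d) / tau_v
    vbar_d = vbar_d + dt * vbar_dot
    # Step 2: second error surface and actual speed command
    S2 = v_i - vbar_d
    v_c = v_i + tau_v * (-k2 * S2 + vbar_dot - w_hat_v)
    return v_c, vbar_d
```

At the formation equilibrium (zero error, zero disturbance estimates, filter state equal to the current speed) the command reduces to the current speed, i.e. the controller is at rest.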

6. System Architecture & Parameter Selection

The integrated SMDO-DSC controller for a single drone in the formation drone light show operates as follows:

| Module | Function | Key Inputs | Outputs |
| --- | --- | --- | --- |
| SMDO Bank | Estimates lumped disturbances $\bar{\omega}_{x_i}, \bar{\omega}_{y_i}, \bar{\omega}_{z_i}, \bar{\omega}_{v_i}, \bar{\omega}_{\gamma_i}, \bar{\omega}_{\chi_i}$ in real time | Drone’s own state $(x_i, v_i, \gamma_i, \chi_i, …)$ | Disturbance estimates $\hat{\bar{\omega}}_{(\cdot)}$ |
| DSC Controller (X-Channel) | Computes the speed command $v_{i_c}$ to enforce $x$-formation tracking | $e_{x_i}$, neighbor data, $\hat{\bar{\omega}}_{x_i}, \hat{\bar{\omega}}_{v_i}$ | Command $v_{i_c}$ |
| DSC Controller (Y-Channel) | Computes the heading command $\chi_{i_c}$ to enforce $y$-formation tracking | $e_{y_i}$, neighbor data, $\hat{\bar{\omega}}_{y_i}, \hat{\bar{\omega}}_{\chi_i}$ | Command $\chi_{i_c}$ |
| DSC Controller (Z-Channel) | Computes the flight path command $\gamma_{i_c}$ to enforce $z$-formation tracking | $e_{z_i}$, neighbor data, $\hat{\bar{\omega}}_{z_i}, \hat{\bar{\omega}}_{\gamma_i}$ | Command $\gamma_{i_c}$ |
| Low-Pass Filters | Smooth the virtual controls $\alpha_{v_i}, \alpha_{\chi_i}, \alpha_{\gamma_i}$ to prevent differential explosion | Stabilizing functions $\alpha_{(\cdot)}$ | Filtered states $\bar{v}_{i_d}, \bar{\chi}_{i_d}, \bar{\gamma}_{i_d}$ |

The stability of the closed-loop system can be proven via Lyapunov analysis, showing that all signals are uniformly ultimately bounded (UUB). The ultimate bound on the formation tracking error can be made arbitrarily small by appropriately selecting the controller and filter parameters. A guideline is:

| Parameter Group | Role | Selection Guideline |
| --- | --- | --- |
| SMDO gains ($\lambda_1, \lambda_2$) | Govern disturbance estimation convergence speed and accuracy | Higher values yield faster convergence but may increase sensitivity to measurement noise; tune for a balance |
| DSC control gains ($k_{1_i}, k_{2_i}, …$) | Determine the rate of error surface convergence | Larger gains improve response speed but can cause actuator saturation and high-frequency chatter; start moderate and increase |
| Filter time constants ($\tau_v, \tau_{\gamma}, \tau_{\chi}$) | Determine the smoothing of the virtual controls | Smaller values track the stabilizing functions more closely, but very small values induce numerical stiffness; set well below the system’s dominant time constant |

7. Practical Considerations for Formation Drone Light Show Deployment

Implementing an SMDO-DSC controller in a real-world formation drone light show involves several practical steps:

  1. Communication Graph Design: The network topology (defined by the sets $\mathcal{N}_i$ and weights $a_{ij}$) must ensure that information from the virtual leader propagates to all drones. A common, robust topology for a formation drone light show is a spanning tree or a minimally connected graph to reduce communication load while maintaining connectivity.
  2. Distributed State Estimation: In practice, drones may not have direct access to neighbors’ full states (like $\dot{x}_j$). A distributed observer or the use of communicated state estimates is necessary to compute the terms in the stabilizing function $\alpha_{v_i}$.
  3. Actuator Limits and Smoothing: The control commands $v_{i_c}, \gamma_{i_c}, \chi_{i_c}$ must be constrained within the drone’s physical capabilities. Furthermore, the discontinuous $\text{sgn}()$ function in the SMDO can be replaced by a continuous approximation (e.g., $\text{sat}()$ or $\tanh()$) to mitigate chattering in the physical actuators, which is crucial for a stable formation drone light show.
  4. Collective Take-off, Landing, and Reconfiguration: The virtual structure framework naturally supports graceful formation maneuvers. Smooth transitions for the virtual leader’s trajectory $(x_0(t), y_0(t), z_0(t))$ and even time-varying desired offsets $\mathbf{p}_i^{body}(t)$ can be used to create dynamic shape morphing in the formation drone light show, all while the SMDO-DSC controller maintains robustness.
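The $\text{sgn}()$ smoothing mentioned in item 3 is a one-liner; the boundary-layer width below is an illustrative choice:

```python
import numpy as np

def smooth_sign(s, epsilon=0.05):
    """Continuous tanh() approximation of sgn() to mitigate chattering.

    epsilon sets the boundary-layer width; smaller values approach the
    discontinuous sgn() more closely (0.05 is an illustrative choice).
    """
    return np.tanh(s / epsilon)
```

Substituting `smooth_sign` for `np.sign` in the observer trades a small steady-state estimation error for actuator-friendly, chatter-free commands.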

The synthesis of Sliding Mode Disturbance Observers and Dynamic Surface Control presents a powerful and practical solution for the demanding control requirements of a modern formation drone light show. The SMDO actively identifies and compensates for the inevitable uncertainties and disturbances—from wind to aerodynamic interference—that would otherwise distort the aerial image. Simultaneously, the DSC framework provides a systematic, computationally efficient way to design stable formation tracking laws without the analytical burden of traditional methods. This combination ensures that each autonomous agent in the swarm precisely tracks its assigned trajectory within the virtual structure, resulting in the breathtaking, fluid, and perfectly synchronized aerial displays that define the pinnacle of formation drone light show technology. As the scale and complexity of these displays grow, such advanced, robust, and distributed control algorithms will form the indispensable core of their operational success.
