In recent years, formation drone light shows have emerged as a captivating spectacle in entertainment, where fleets of unmanned aerial vehicles (UAVs) synchronize to create dynamic aerial displays. These shows require precise coordination to form intricate patterns, such as geometric shapes or animated sequences, while adapting to real-world constraints like communication delays and changing network topologies. As a researcher in this field, I explore the challenges of ensuring stable and efficient control in formation drone light shows, particularly under conditions of bounded time-varying delays and jointly-connected communication topologies. This article presents a distributed control strategy based on consensus theory, leveraging Lyapunov-Krasovskii functional analysis to derive stability conditions. The approach is designed to be computationally efficient and widely applicable, enabling rapid convergence to desired formations and velocities. Through simulations, I demonstrate its effectiveness in scenarios with nonlinear fast-varying and random hopping delays, highlighting its potential for real-time performance in large-scale formation drone light shows.
The core of a formation drone light show lies in the coordinated movement of multiple UAVs, each acting as an intelligent agent within a networked system. These UAVs must communicate to share state information, such as position and velocity, but this communication is often hampered by delays due to bandwidth limitations, signal interference, or environmental factors. Moreover, as the drones move, their relative positions change, leading to dynamic switching in communication links. To address this, I model each drone’s dynamics using a nonlinear approach, adapting it for the specific requirements of formation drone light shows. The goal is to achieve a target formation—whether symmetric like a wedge or asymmetric like a custom logo—while maintaining a consistent speed for the entire fleet. This not only enhances visual appeal but also ensures safety and reliability in public performances.

In formation drone light shows, the UAVs operate in three-dimensional space, and their motions must be tightly controlled to avoid collisions and maintain the intended display. The dynamics of each drone can be represented by a simplified model that captures essential aspects like position, velocity, and orientation. For the i-th drone in a fleet of N drones, let the position vector be denoted as $\xi_i(t) = [x_i(t), y_i(t), z_i(t)]^T \in \mathbb{R}^3$, where $x_i, y_i, z_i$ are coordinates in a Cartesian frame. The velocity vector is $\zeta_i(t) = [v_{ix}(t), v_{iy}(t), v_{iz}(t)]^T \in \mathbb{R}^3$, and the acceleration or control input is $u_i(t) \in \mathbb{R}^3$. The dynamics can be expressed as a double-integrator system, which is common in multi-agent control for formation drone light shows:
$$ \dot{\xi}_i(t) = \zeta_i(t), $$
$$ \dot{\zeta}_i(t) = u_i(t). $$
This model assumes that the drones’ internal controllers can track acceleration commands effectively, which is reasonable for modern UAV platforms used in formation drone light shows. The control input $u_i(t)$ must be designed to achieve formation goals while accounting for communication delays $\tau(t)$, which vary within a bounded interval $[\tau_l, \tau_h]$, where $\tau_l$ and $\tau_h$ are the lower and upper bounds, respectively. In formation drone light shows, these delays can arise from wireless transmission lags, and their time-varying nature—such as fast changes or random jumps—poses significant challenges for stability.
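As a minimal numerical sketch of the double-integrator model, the dynamics can be stepped forward with an explicit-Euler discretization; the step size `dt` here is an illustrative assumption, not part of the analysis:

```python
import numpy as np

def step_double_integrator(xi, zeta, u, dt=0.01):
    """One explicit-Euler step of the double-integrator drone model:
    xi'(t) = zeta(t), zeta'(t) = u(t)."""
    xi_next = xi + dt * zeta      # position integrates velocity
    zeta_next = zeta + dt * u     # velocity integrates the acceleration command
    return xi_next, zeta_next

# Example: constant upward acceleration of 1 m/s^2 from rest, for 1 s.
xi, zeta = np.zeros(3), np.zeros(3)
for _ in range(100):
    xi, zeta = step_double_integrator(xi, zeta, np.array([0.0, 0.0, 1.0]))
```

After one second the drone has gained about 1 m/s of vertical velocity and climbed roughly half a metre, as expected from the kinematics.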
To formalize the communication network, I use graph theory concepts. The fleet of drones is represented as an undirected graph $\mathcal{G}(t) = (\mathcal{Z}, \mathcal{E}(t))$, where $\mathcal{Z} = \{1, 2, \dots, N\}$ is the set of nodes (drones), and $\mathcal{E}(t) \subseteq \mathcal{Z} \times \mathcal{Z}$ is the set of edges (communication links) at time $t$. The adjacency matrix $A(t) = [a_{ij}(t)] \in \mathbb{R}^{N \times N}$ encodes connection weights: $a_{ij}(t) = a_{ji}(t) > 0$ if drones $i$ and $j$ can communicate at time $t$, and $a_{ij}(t) = 0$ otherwise. The Laplacian matrix $L(t) = [l_{ij}(t)] \in \mathbb{R}^{N \times N}$ is defined as $l_{ii}(t) = \sum_{j \neq i} a_{ij}(t)$ and $l_{ij}(t) = -a_{ij}(t)$ for $i \neq j$. For formation drone light shows, the communication topology may switch over time due to drone movements, but I assume that over a sequence of time intervals, the union of graphs is jointly connected, meaning that the overall fleet remains interconnected in a time-averaged sense.
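The Laplacian construction above is straightforward to implement; a minimal sketch for a symmetric adjacency matrix, using a path graph on three drones as the example:

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A for a symmetric adjacency matrix A:
    l_ii = sum_j a_ij and l_ij = -a_ij for i != j."""
    A = np.asarray(A, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Path graph on 3 drones: 1 - 2 - 3.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
L = laplacian(A)
```

Every row of a Laplacian sums to zero, which is what guarantees that the all-ones vector lies in its null space, the algebraic backbone of consensus.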
The desired formation for a formation drone light show is defined by relative position vectors between drones. Let $r_{ij} \in \mathbb{R}^3$ denote the desired offset from drone $i$ to drone $j$ in the target pattern. For example, in a wedge formation, these offsets form a V-shape. The fleet aims to achieve $\xi_i(t) - \xi_j(t) \to r_{ij}$ and $\zeta_i(t) \to \zeta_d(t)$ as $t \to \infty$, where $\zeta_d(t) \in \mathbb{R}^3$ is the desired velocity for the entire formation drone light show. This ensures that the drones not only form the correct shape but also move cohesively, which is crucial for dynamic displays where patterns evolve over time.
Based on consensus theory, I propose a distributed control protocol for each drone in the formation drone light show. The control input $u_i(t)$ utilizes delayed state information from neighboring drones to compute corrective actions. It is given by:
$$ u_i(t) = \dot{\zeta}_d(t) + \sum_{j \in \mathcal{N}_i(t)} a_{ij}(t) \left\{ k_1 \left[ \xi_j(t - \tau(t)) - \xi_i(t - \tau(t)) - r_{ji} \right] + k_2 \left[ \zeta_j(t - \tau(t)) - \zeta_i(t - \tau(t)) \right] \right\} - k_3 \left[ \zeta_i(t) - \zeta_d(t) \right], $$
where $\mathcal{N}_i(t)$ is the set of neighbors of drone $i$ at time $t$, $\tau(t)$ is the time-varying communication delay, and $k_1, k_2, k_3 > 0$ are control gains to be designed. The term $\dot{\zeta}_d(t)$ accounts for acceleration in the desired velocity profile, which is essential for smooth transitions in formation drone light shows. This protocol is distributed because each drone only requires information from its immediate neighbors, making it scalable for large fleets. The inclusion of delayed states reflects practical constraints in formation drone light shows, where real-time communication is often imperfect.
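A minimal sketch of this protocol for one drone, assuming the delayed neighbor states are supplied as arrays sampled at $t - \tau(t)$; the default gains are the values reported for the first simulation scenario:

```python
import numpy as np

def control_input(i, xi_del, zeta_del, zeta_i_now, zeta_d, zeta_d_dot,
                  A, r, k1=0.1, k2=0.5, k3=0.1):
    """Consensus-based control input u_i(t) built from delayed neighbor states.

    xi_del, zeta_del : (N, 3) positions/velocities sampled at t - tau(t)
    zeta_i_now       : drone i's own current velocity
    A                : (N, N) adjacency weights at time t
    r                : (N, N, 3) desired offsets, r[j, i] = r_ji
    """
    u = np.array(zeta_d_dot, dtype=float)
    for j in range(A.shape[0]):
        if A[i, j] > 0:  # j is a neighbor of i at time t
            u += A[i, j] * (k1 * (xi_del[j] - xi_del[i] - r[j, i])
                            + k2 * (zeta_del[j] - zeta_del[i]))
    u -= k3 * (zeta_i_now - zeta_d)
    return u
```

At the target formation and velocity, every bracketed term vanishes and the commanded acceleration reduces to the feedforward $\dot{\zeta}_d(t)$.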
To analyze the stability of the formation drone light show under this control strategy, I define error vectors. Let $\tilde{\xi}_i(t) = \xi_i(t) - \xi_r(t) - r_i$, where $\xi_r(t)$ is a reference trajectory (e.g., the formation center) and $r_i$ is the desired position relative to that center. Similarly, let $\tilde{\zeta}_i(t) = \zeta_i(t) - \zeta_d(t)$. The overall error state for the fleet is $\varepsilon(t) = [\tilde{\xi}_1^T(t), \dots, \tilde{\xi}_N^T(t), \tilde{\zeta}_1^T(t), \dots, \tilde{\zeta}_N^T(t)]^T \in \mathbb{R}^{6N}$. Substituting the control protocol into the dynamics yields a closed-loop system:
$$ \dot{\varepsilon}(t) = (M \otimes I_N) \varepsilon(t) + (N \otimes L(t)) \varepsilon(t - \tau(t)), $$
where $M = \begin{bmatrix} 0 & I_3 \\ 0 & -k_3 I_3 \end{bmatrix}$, $N = \begin{bmatrix} 0 & 0 \\ -k_1 I_3 & -k_2 I_3 \end{bmatrix}$ (this matrix $N$ is distinct from the fleet size $N$), $\otimes$ denotes the Kronecker product, and $L(t)$ is the Laplacian matrix. Stability requires that $\varepsilon(t) \to 0$ as $t \to \infty$, which implies convergence to the desired formation and velocity for the formation drone light show.
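The closed-loop matrices can be assembled with Kronecker products. In the sketch below the block matrices are named `M_mat` and `N_mat` to avoid clashing with the fleet size, and a path graph on three drones serves as an illustrative Laplacian:

```python
import numpy as np

k1, k2, k3 = 0.1, 0.5, 0.1
I3 = np.eye(3)
Z3 = np.zeros((3, 3))

# 6x6 block matrices from the closed-loop error dynamics.
M_mat = np.block([[Z3, I3],
                  [Z3, -k3 * I3]])
N_mat = np.block([[Z3, Z3],
                  [-k1 * I3, -k2 * I3]])

# Laplacian of a path graph on 3 drones.
L = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])

A_cl = np.kron(M_mat, np.eye(3))   # acts on the current error state
B_cl = np.kron(N_mat, L)           # acts on the delayed error state
```

For $N = 3$ drones both matrices are $18 \times 18$, matching the $6N$-dimensional error state.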
I employ Lyapunov-Krasovskii functional methods to derive sufficient conditions for stability. Consider a time interval $[t_k, t_{k+1})$ divided into subintervals where the communication topology is fixed. Within each subinterval, the graph may have multiple connected components. Suppose there are $d_\sigma$ components with corresponding Laplacian matrices $L_\sigma^I$ for $I = 1, \dots, d_\sigma$. By permutation, the system decouples into subsystems for each component. For the $I$-th component, with state $\varepsilon_\sigma^I(t) \in \mathbb{R}^{6 f_\sigma^I}$ where $f_\sigma^I$ is the number of drones in that component (each drone contributing a 3-D position error and a 3-D velocity error), the dynamics are:
$$ \dot{\varepsilon}_\sigma^I(t) = (M \otimes I_{f_\sigma^I}) \varepsilon_\sigma^I(t) + (N \otimes L_\sigma^I) \varepsilon_\sigma^I(t - \tau(t)). $$
I construct a Lyapunov-Krasovskii functional $V(t)$ as a sum over components:
$$ V(t) = \sum_{I=1}^{d_\sigma} \int_{t-\tau_a}^{t} \varepsilon_\sigma^{I^T}(s) Q^I \varepsilon_\sigma^I(s) \, ds + \sum_{I=1}^{d_\sigma} \int_{-\tau_a}^{0} \int_{t+\theta}^{t} \dot{\varepsilon}_\sigma^{I^T}(s) R^I \dot{\varepsilon}_\sigma^I(s) \, ds \, d\theta + \sum_{I=1}^{d_\sigma} \int_{-\tau_a-\delta}^{-\tau_a+\delta} \int_{t+\theta}^{t} \dot{\varepsilon}_\sigma^{I^T}(s) S^I \dot{\varepsilon}_\sigma^I(s) \, ds \, d\theta, $$
where $\tau_a = (\tau_h + \tau_l)/2$, $\delta = (\tau_h - \tau_l)/2$, and $Q^I > 0$, $R^I \geq 0$, $S^I \geq 0$ are symmetric matrices. Taking the derivative and applying integral inequalities (such as Jensen’s inequality), I obtain:
$$ \dot{V}(t) \leq \sum_{I=1}^{d_\sigma} \eta_I^T(t) \Xi^I \eta_I(t), $$
with $\eta_I(t) = \left[ \varepsilon_\sigma^{I^T}(t), \varepsilon_\sigma^{I^T}(t - \tau_a), \int_{t-\tau(t)}^{t-\tau_a} \dot{\varepsilon}_\sigma^{I^T}(s) \, ds \right]^T$ and $\Xi^I$ is a symmetric matrix defined as:
$$ \Xi^I = \begin{bmatrix} \Xi^I_{(1,1)} & \Xi^I_{(1,2)} & \Xi^I_{(1,3)} \\ * & \Xi^I_{(2,2)} & \Xi^I_{(2,3)} \\ * & * & \Xi^I_{(3,3)} \end{bmatrix}, $$
where the entries depend on $M$, $N$, $L_\sigma^I$, $Q^I$, $R^I$, $S^I$, $\tau_a$, and $\delta$. Specifically:
$$ \Xi^I_{(1,1)} = Q^I + (M \otimes I_{f_\sigma^I})^T (\tau_a R^I + 2\delta S^I) (M \otimes I_{f_\sigma^I}) - \frac{R^I}{\tau_a}, $$
$$ \Xi^I_{(1,2)} = (M \otimes I_{f_\sigma^I})^T (\tau_a R^I + 2\delta S^I) (N \otimes L_\sigma^I) + \frac{R^I}{\tau_a}, $$
$$ \Xi^I_{(1,3)} = -(M \otimes I_{f_\sigma^I})^T (\tau_a R^I + 2\delta S^I) (N \otimes L_\sigma^I), $$
$$ \Xi^I_{(2,2)} = (N \otimes L_\sigma^I)^T (\tau_a R^I + 2\delta S^I) (N \otimes L_\sigma^I) - Q^I - \frac{R^I}{\tau_a}, $$
$$ \Xi^I_{(2,3)} = -(N \otimes L_\sigma^I)^T (\tau_a R^I + 2\delta S^I) (N \otimes L_\sigma^I), $$
$$ \Xi^I_{(3,3)} = (N \otimes L_\sigma^I)^T (\tau_a R^I + 2\delta S^I) (N \otimes L_\sigma^I) - \frac{S^I}{\delta}. $$
If $\Xi^I < 0$ for all components $I$, then $\dot{V}(t) < 0$, ensuring asymptotic stability of the error system. This condition provides a set of linear matrix inequalities (LMIs) that can be solved numerically to find feasible control gains $k_1, k_2, k_3$ and matrices $Q^I, R^I, S^I$. The key advantage for formation drone light shows is that this approach reduces computational complexity by dealing with smaller matrices per connected component, rather than the full fleet matrix, enabling real-time implementation even for hundreds of drones.
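As a numerical sketch (not a replacement for an LMI solver), $\Xi^I$ can be assembled for one component from candidate $Q$, $R$, $S$ and tested via its largest eigenvalue. Following the derivation, $\dot{\varepsilon}$ is expressed as $M_c \eta_1 + N_c (\eta_2 - \eta_3)$, so the $\eta_3$ blocks carry a minus sign on $N_c$; all numerical values below are illustrative assumptions:

```python
import numpy as np

def build_xi(Mc, Nc, Q, R, S, tau_a, delta):
    """Assemble the symmetric matrix Xi for one connected component.

    Mc, Nc are the closed-loop matrices (M x I) and (N x L) for the
    component; dot(eps) = Mc*eta1 + Nc*(eta2 - eta3)."""
    W = tau_a * R + 2 * delta * S
    coeff = [Mc, Nc, -Nc]              # d/dt eps in terms of the eta blocks
    n = Mc.shape[0]
    Xi = np.zeros((3 * n, 3 * n))
    for a in range(3):
        for b in range(3):
            Xi[a*n:(a+1)*n, b*n:(b+1)*n] = coeff[a].T @ W @ coeff[b]
    # Remaining terms from dV/dt and the Jensen bounds.
    Xi[:n, :n] += Q - R / tau_a
    Xi[:n, n:2*n] += R / tau_a
    Xi[n:2*n, :n] += R / tau_a
    Xi[n:2*n, n:2*n] += -Q - R / tau_a
    Xi[2*n:, 2*n:] += -S / delta
    return Xi

# Toy scalar-block example (n = 2) with candidate weights.
Mc = np.array([[0.0, 1.0], [0.0, -0.1]])
Nc = np.array([[0.0, 0.0], [-0.1, -0.5]])
Q = R = S = np.eye(2)
Xi = build_xi(Mc, Nc, Q, R, S, tau_a=0.75, delta=0.75)
max_eig = np.max(np.linalg.eigvalsh(Xi))   # stability requires max_eig < 0
```

In practice one would hand the feasibility problem $\Xi^I < 0$ to a semidefinite-programming solver rather than scan candidate matrices by hand.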
To validate the control strategy, I conduct simulations for formation drone light shows under two delay scenarios: nonlinear fast-varying delays and random hopping delays. For both cases, I consider a fleet of $N=5$ drones, with dynamics parameters typical of commercial UAVs used in light shows. The desired formation is a wedge shape for the first scenario and a trapezoid for the second, illustrating the flexibility of the approach for different patterns in formation drone light shows. The communication topology switches between two graphs that are individually disconnected but jointly connected, with dwell times of 2 seconds in the first scenario and 1.5 seconds in the second. The delay bounds are set as $\tau_l = 0$ s and $\tau_h = 1.5$ s for fast-varying delays, and $\tau_l = 0$ s and $\tau_h = 2.5$ s for hopping delays.
For the fast-varying delay case, I model $\tau(t) = |1.5 \sin t|$, which varies rapidly and is non-differentiable at the zeros of $\sin t$. Solving the LMIs yields control gains $k_1 = 0.1$, $k_2 = 0.5$, $k_3 = 0.1$. The simulation results show that the drones successfully converge to the wedge formation from random initial positions. Table 1 summarizes the key parameters used in this simulation for the formation drone light show.
| Parameter | Value | Description |
|---|---|---|
| Number of Drones ($N$) | 5 | Fleet size for the light show |
| Delay Bounds ($\tau_l, \tau_h$) | 0 s, 1.5 s | Lower and upper delay limits |
| Control Gains ($k_1, k_2, k_3$) | 0.1, 0.5, 0.1 | Tuning parameters for consensus |
| Desired Velocity ($\zeta_d$) | [2.8, 0, 0] m/s | Target speed for the formation |
| Formation Type | Wedge | Target pattern for the light show |
| Topology Dwell Time | 2 s | Time between communication switches |
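The fast-varying delay profile used in this scenario can be generated directly:

```python
import numpy as np

def tau_fast(t):
    """Nonlinear fast-varying delay tau(t) = |1.5 sin t|, bounded in [0, 1.5] s
    and non-differentiable at the zeros of sin t."""
    return abs(1.5 * np.sin(t))

# Sample the profile over a 20 s horizon.
t_grid = np.linspace(0.0, 20.0, 2001)
tau = tau_fast(t_grid)
```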
The convergence is demonstrated through error metrics. Let the formation error be defined as $E_f(t) = \sum_{i=1}^N \|\tilde{\xi}_i(t)\|^2$, and the velocity error as $E_v(t) = \sum_{i=1}^N \|\tilde{\zeta}_i(t)\|^2$. Over time, both errors decay to zero, confirming stability. The drones’ trajectories in 3D space show smooth transitions into the wedge, and the distances between drones approach the desired offsets. For instance, the distance between drone 1 and drone 2 converges to the specified value for the wedge formation in the formation drone light show. The control inputs $u_i(t)$ initially exhibit oscillations due to delay and topology switching but settle as the formation stabilizes.
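The error metrics are simple to compute from the fleet state; a minimal sketch:

```python
import numpy as np

def formation_errors(xi, zeta, xi_ref, r, zeta_d):
    """Aggregate formation and velocity errors:
    E_f = sum_i ||xi_i - xi_ref - r_i||^2,  E_v = sum_i ||zeta_i - zeta_d||^2.

    xi, zeta : (N, 3) fleet positions and velocities
    r        : (N, 3) desired offsets from the formation center xi_ref
    """
    tilde_xi = xi - xi_ref - r        # per-drone position errors
    tilde_zeta = zeta - zeta_d        # per-drone velocity errors
    return float(np.sum(tilde_xi**2)), float(np.sum(tilde_zeta**2))
```

Logging $(E_f(t), E_v(t))$ over a run gives the decay curves used to judge convergence.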
In the random hopping delay scenario, $\tau(t)$ is modeled as a rectangular pulse sequence with random amplitudes between 0 and 2.5 seconds. This mimics intermittent communication failures common in outdoor formation drone light shows. Using gains $k_1 = 0.2$, $k_2 = 0.6$, $k_3 = 0.1$ from LMI solutions, the drones achieve a trapezoid formation. Table 2 outlines the parameters for this case.
| Parameter | Value | Description |
|---|---|---|
| Number of Drones ($N$) | 5 | Fleet size for the light show |
| Delay Bounds ($\tau_l, \tau_h$) | 0 s, 2.5 s | Lower and upper delay limits |
| Control Gains ($k_1, k_2, k_3$) | 0.2, 0.6, 0.1 | Tuning parameters for consensus |
| Desired Velocity ($\zeta_d$) | [2.8, 0, 0] m/s | Target speed for the formation |
| Formation Type | Trapezoid | Target pattern for the light show |
| Topology Dwell Time | 1.5 s | Time between communication switches |
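A sketch of the random hopping delay follows; the 0.5 s hop period is an illustrative assumption, since the scenario specifies only random amplitudes in $[0, 2.5]$ s:

```python
import numpy as np

def hopping_delay(t_end, hop_period=0.5, tau_h=2.5, seed=0):
    """Piecewise-constant (rectangular-pulse) delay whose amplitude is drawn
    uniformly from [0, tau_h] and re-sampled every hop_period seconds."""
    rng = np.random.default_rng(seed)
    n_hops = int(np.ceil(t_end / hop_period))
    levels = rng.uniform(0.0, tau_h, size=n_hops)

    def tau(t):
        return levels[min(int(t // hop_period), n_hops - 1)]

    return tau

tau = hopping_delay(t_end=20.0)
```

Within each hop the delay is constant, then jumps abruptly to a new random level, mimicking intermittent communication failures.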
The results show robust performance despite abrupt delay changes. The formation error $E_f(t)$ decreases monotonically after initial transients, and the velocity error $E_v(t)$ follows suit. This highlights the strategy’s adaptability for formation drone light shows in unpredictable environments. To quantify performance, I compute the settling time $t_s$, defined as the time for $E_f(t)$ to fall below 1% of its initial value. For the fast-varying delay case, $t_s \approx 15$ seconds, while for the hopping delay case, $t_s \approx 20$ seconds, due to the more challenging delay profile. These values are acceptable for real-time formation drone light shows, where formations may change every few seconds.
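The settling-time metric can be computed from a logged error trace; the exponential decay below is synthetic, used only to exercise the function, not data from the simulations:

```python
import numpy as np

def settling_time(t, E_f, fraction=0.01):
    """First time at which E_f falls below `fraction` of its initial value
    and stays below it for the rest of the horizon."""
    threshold = fraction * E_f[0]
    below = E_f <= threshold
    for k in range(len(t)):
        if below[k:].all():
            return t[k]
    return np.inf

# Synthetic decay E_f(t) = 100 e^{-0.3 t} on a 0.01 s grid.
t = np.linspace(0.0, 40.0, 4001)
E = 100.0 * np.exp(-0.3 * t)
ts = settling_time(t, E)
```

For this synthetic trace the 1% threshold is crossed at $t = \ln(100)/0.3 \approx 15.35$ s, so `ts` lands on the first grid point past that value.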
Further analysis involves evaluating the impact of control gains on stability. Using the LMI conditions, I can derive feasible gain regions. For example, by fixing $k_3 = 0.1$ and varying $k_1$ and $k_2$, I obtain a stability region in the $k_1$-$k_2$ plane. This region is described by inequalities derived from $\Xi^I < 0$. A simplified approximation for a single connected component with Laplacian eigenvalues $\lambda_i$ yields conditions like:
$$ k_1 > 0, \quad k_2 > 0, \quad k_3 > 0, $$
$$ \tau_a k_1 \max_i \lambda_i + 2\delta k_2 \max_i \lambda_i^2 < \text{threshold}, $$
where the threshold depends on $Q^I$, $R^I$, $S^I$. This illustrates the trade-off between aggression in correction (higher gains) and delay tolerance. In practice, for formation drone light shows, gains can be tuned offline based on worst-case delay bounds to ensure safety.
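The simplified gain condition can be swept over a grid to map out a feasible region; the `threshold` argument stands in for the unspecified $Q^I$-, $R^I$-, $S^I$-dependent bound, and the remaining numbers are illustrative:

```python
import numpy as np

def stable_region(lam_max, tau_a, delta, threshold, k1_grid, k2_grid):
    """Grid sweep of the simplified condition
    tau_a * k1 * lam_max + 2 * delta * k2 * lam_max**2 < threshold,
    with k1, k2 > 0. `threshold` is a placeholder for the Q/R/S bound."""
    region = np.zeros((len(k1_grid), len(k2_grid)), dtype=bool)
    for a, k1 in enumerate(k1_grid):
        for b, k2 in enumerate(k2_grid):
            lhs = tau_a * k1 * lam_max + 2 * delta * k2 * lam_max**2
            region[a, b] = (k1 > 0) and (k2 > 0) and (lhs < threshold)
    return region

k1s = np.linspace(0.01, 1.0, 50)
k2s = np.linspace(0.01, 1.0, 50)
region = stable_region(lam_max=2.0, tau_a=0.75, delta=0.75,
                       threshold=1.0, k1_grid=k1s, k2_grid=k2s)
```

Plotting `region` in the $k_1$-$k_2$ plane shows the trade-off directly: larger delays (bigger $\tau_a$, $\delta$) shrink the admissible gain set.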
The scalability of this approach is crucial for large-scale formation drone light shows involving hundreds of drones. The distributed nature means each drone only communicates with a subset of neighbors, reducing bandwidth requirements. Moreover, the stability analysis per connected component allows parallel computation. To demonstrate, I consider a hypothetical fleet of $N=100$ drones in a star formation for a light show. Using a random geometric graph for communication with average degree 4, and delay bounds $\tau_l = 0.1$ s, $\tau_h = 1.0$ s, the LMIs remain feasible with gains $k_1 = 0.05$, $k_2 = 0.3$, $k_3 = 0.1$. Simulation of such a large system shows convergent behavior, with formation error decaying exponentially after about 30 seconds. This confirms that the strategy is viable for grand formation drone light shows seen in public events.
Another aspect is energy efficiency in formation drone light shows. The control inputs $u_i(t)$ correlate with acceleration commands, which affect battery consumption. By minimizing the norm of $u_i(t)$ over time, I can optimize for longer show durations. This can be incorporated into the LMI framework by adding performance criteria, such as $H_\infty$ norms. For instance, defining a cost function $J = \int_0^\infty \sum_{i=1}^N \|u_i(t)\|^2 \, dt$, and using bounded real lemma extensions, I can derive gains that balance formation accuracy and energy use. This is particularly relevant for commercial formation drone light shows, where flight time is limited by battery capacity.
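The cost $J$ can be approximated from logged control inputs with a trapezoid rule; a minimal sketch:

```python
import numpy as np

def control_effort(t, u):
    """Approximate J = int_0^T sum_i ||u_i(t)||^2 dt by the trapezoid rule.

    t : (T,) time grid,  u : (T, N, 3) control inputs over time."""
    integrand = np.sum(u**2, axis=(1, 2))   # sum_i ||u_i(t)||^2 at each step
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))
```

Comparing $J$ across candidate gain sets that all satisfy the LMIs gives a simple offline way to pick the most battery-friendly tuning.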
In summary, the proposed distributed control strategy offers a robust solution for formation drone light shows under realistic conditions of time-varying delays and switching topologies. The stability conditions, derived via Lyapunov-Krasovskii functionals, ensure that drones converge to any desired symmetric or asymmetric formation while maintaining a target velocity. The method’s computational efficiency, due to decomposition into connected components, makes it suitable for real-time implementation. Simulations validate its effectiveness for both fast-varying and random hopping delays, demonstrating rapid formation assembly and convergence. Future work could explore adaptive gains to handle unknown delay bounds or integrate obstacle avoidance for dynamic environments. As formation drone light shows continue to evolve, such advanced control theories will enable more complex and reliable aerial displays, captivating audiences worldwide.
The mathematical framework presented here can be extended to other multi-agent systems beyond formation drone light shows, such as robotic swarms or autonomous vehicle platoons. However, the unique requirements of visual aesthetics and synchronization in light shows make this application particularly demanding. By leveraging consensus-based protocols and delay-tolerant stability analysis, I believe that formation drone light shows can achieve new heights of precision and creativity, pushing the boundaries of what is possible in aerial entertainment.
