The evolution of aerial performances has been profoundly marked by the advent of drone light shows. These synchronized spectacles, where swarms of unmanned aerial vehicles (UAVs) equipped with LEDs create dynamic, luminous patterns in the night sky, have captivated audiences at events ranging from global ceremonies to local festivals. However, as audience expectations grow, there is an increasing demand for more than just pre-programmed sequences. The future of drone light show technology lies in real-time interactivity and personalization—allowing performances to adapt instantly to crowd input or create unique, on-demand displays. This shift presents a formidable core challenge: the rapid generation of safe, efficient, and collision-free flight trajectories for potentially hundreds of drones during complex formation changes.
Traditional methods for multi-agent trajectory planning often struggle with the scalability and speed required for an interactive drone light show. Centralized optimization techniques, while offering theoretical guarantees, become computationally intractable for large swarms. Decoupled or sequential methods improve speed but often lead to suboptimal, inefficient paths or reduced success rates in dense configurations. This paper addresses this gap by formulating the formation transition as a distributed optimization problem and proposing a novel algorithm based on Distributed Model Predictive Control (DMPC). Our primary contribution is a fast, scalable trajectory generation method that introduces an on-demand, soft-constrained collision avoidance strategy within a DMPC framework, combined with intelligent goal assignment, to meet the stringent requirements of real-time interactive performances.
System Modeling for Interactive Drone Light Shows
An interactive drone light show system operates as a sophisticated cyber-physical system. The core workflow involves a ground control station interpreting user or audience input (e.g., a new shape request) into a set of target waypoints for the swarm. A trajectory generation algorithm must then swiftly compute individual flight paths for each drone. Finally, the drone swarm executes these trajectories with high precision to render the desired aerial image. The system’s responsiveness hinges entirely on the speed and reliability of the trajectory planner.
The fundamental problem is a point-to-point multi-drone transition. Given N drones, we define the state of drone \( i \) at discrete time step \( k \) by its position \( \mathbf{p}_i[k] \in \mathbb{R}^3 \), velocity \( \mathbf{v}_i[k] \in \mathbb{R}^3 \), and acceleration \( \mathbf{a}_i[k] \in \mathbb{R}^3 \). Using a double-integrator model with discretization time \( h \), the kinematics are:
$$
\mathbf{p}_i[k+1] = \mathbf{p}_i[k] + h\mathbf{v}_i[k] + \frac{h^2}{2}\mathbf{a}_i[k]
$$
$$
\mathbf{v}_i[k+1] = \mathbf{v}_i[k] + h\mathbf{a}_i[k]
$$
Drones are subject to physical constraints reflecting their operational limits:
$$
\mathbf{a}_{min} \preceq \mathbf{a}_i[k] \preceq \mathbf{a}_{max}, \quad \mathbf{p}_{min} \preceq \mathbf{p}_i[k] \preceq \mathbf{p}_{max}
$$
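As a concrete sketch, the kinematics and box constraints above can be stepped in a few lines. This is an illustrative helper (the function name and the limit values in the test are not from the paper), assuming NumPy:

```python
import numpy as np

def propagate(p, v, a, h, a_min, a_max):
    """One step of the discrete double-integrator model with input clamping.

    p, v, a are 3-vectors; h is the discretization time. The commanded
    acceleration is clamped to the box [a_min, a_max] before propagation,
    mirroring the operational limits above.
    """
    a = np.clip(a, a_min, a_max)
    p_next = p + h * v + 0.5 * h**2 * a
    v_next = v + h * a
    return p_next, v_next
```

Position bounds would be enforced analogously as constraints inside the QP rather than by clamping after the fact.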
Most critically, collisions must be avoided. To account for the downwash effect of rotors, a safety region around each drone is modeled as an ellipsoid. The collision avoidance constraint between drones \( i \) and \( j \) is:
$$
\| \Theta^{-1} ( \mathbf{p}_i[k] - \mathbf{p}_j[k] ) \|_n \geq r_{min}
$$
Here, \( \Theta = \text{diag}(1, 1, c) \) is a scaling matrix (with \( c > 1 \)), \( r_{min} \) is the minimum horizontal separation, and the vertical safety distance becomes \( r_{z,min} = c \cdot r_{min} \). The norm degree \( n \) is typically 2.
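The ellipsoidal separation check follows directly from this definition. A minimal sketch for the \( n = 2 \) case, assuming NumPy; the default values of `r_min` and `c` are illustrative, not the paper's tuning:

```python
import numpy as np

def separation_ok(p_i, p_j, r_min=0.3, c=2.0):
    """Check the downwash-aware collision constraint between two drones.

    The relative position is scaled by Theta^{-1} = diag(1, 1, 1/c), which
    inflates the required vertical separation to c * r_min.
    """
    theta_inv = np.diag([1.0, 1.0, 1.0 / c])
    return np.linalg.norm(theta_inv @ (p_i - p_j)) >= r_min
```

With `c = 2.0`, a purely vertical offset must be twice as large as a horizontal one to satisfy the same constraint, which is exactly the downwash-protection effect described above.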

Within this ground control architecture, the trajectory planner is the critical middleware, translating creative intent into safe, flyable paths. Our proposed DMPC-based planner fills this role, taking initial and target waypoints as input and outputting the full state trajectory (position, velocity, acceleration) for every drone in the fleet.
Distributed Model Predictive Control (DMPC) Framework
Model Predictive Control (MPC) is an ideal candidate for this trajectory generation task, which is offline in the sense that the full trajectories are computed before the maneuver begins rather than in closed loop on the vehicles. In a standard MPC formulation, an optimization problem is solved at each time step over a finite prediction horizon \( K \), and the first control input is applied. For offline planning, we apply this principle iteratively to the drone model itself, propagating the state forward without a physical vehicle. Distributed MPC (DMPC) decomposes this large, centralized optimization into smaller, coupled problems solved by each agent (drone) in parallel, significantly reducing computational burden—a vital feature for a scalable drone light show.
Synchronous Algorithm Structure
Our algorithm employs a synchronous DMPC scheme. At each discrete planning step \( k_t \), every drone simultaneously executes the following sequence:
- Communication: Receive the predicted state sequences from all other drones, computed in the previous step \( k_t-1 \).
- Collision Risk Detection: Analyze these predictions to identify potential future conflicts within the prediction horizon.
- Local QP Construction & Solution: Build and solve a local Quadratic Program (QP). Crucially, collision constraints are added only if a conflict is detected (on-demand).
- State Propagation & Broadcast: Apply the first optimal control input to its own dynamics model to move one step forward. Broadcast its newly computed predicted state sequence for the next iteration.
This process repeats until all drones converge to their target positions. The core innovation lies in steps 2 and 3—the on-demand, soft-constrained handling of collisions.
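The four-step loop can be sketched as follows. This is a structural skeleton only: the local QP of step 3 is abstracted behind a `step_fn` callback (all names here are illustrative, and the toy planner below simply moves each drone a fixed fraction toward its goal):

```python
import numpy as np

def dmpc_transition(p0, goals, step_fn, tol=0.05, max_iters=500):
    """Skeleton of the synchronous DMPC loop.

    p0, goals: (N, 3) arrays of start and target positions.
    step_fn(i, positions, predictions) returns drone i's next position and
    its new predicted state sequence; in the real algorithm this solves the
    local QP with on-demand collision constraints.
    """
    positions = p0.copy()
    predictions = [None] * len(p0)  # broadcast state sequences from the last step
    for _ in range(max_iters):
        # all drones plan "in parallel" against the previous step's predictions
        results = [step_fn(i, positions, predictions) for i in range(len(p0))]
        positions = np.array([r[0] for r in results])
        predictions = [r[1] for r in results]
        if np.all(np.linalg.norm(positions - goals, axis=1) < tol):
            break
    return positions

def toy_step(goals):
    """Placeholder local planner: move 20% of the way to the goal each step."""
    def step(i, positions, predictions):
        p_next = positions[i] + 0.2 * (goals[i] - positions[i])
        return p_next, [p_next]  # predicted sequence stub
    return step
```

In a real deployment, the list comprehension would be replaced by genuinely parallel per-drone solvers exchanging predictions over the network.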
Prediction Model and Objective Function
For drone \( i \), we define the state vector \( \mathbf{x}_i = [\mathbf{p}_i^T, \mathbf{v}_i^T]^T \) and control input \( \mathbf{u}_i = \mathbf{a}_i \). The discrete linear dynamics over the prediction horizon can be compactly written as an affine function of the initial state \( \mathbf{X}_{0,i} \) and the input sequence \( \mathbf{U}_i \in \mathbb{R}^{3K} \):
$$
\mathbf{P}_i = \mathbf{A}_0 \mathbf{X}_{0,i} + \boldsymbol{\Lambda} \mathbf{U}_i
$$
where \( \mathbf{P}_i \) is the stacked position sequence, and \( \boldsymbol{\Lambda} \) is a block lower-triangular matrix mapping inputs to positions.
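For a single axis, \( \mathbf{A}_0 \) and \( \boldsymbol{\Lambda} \) follow from unrolling the double-integrator recursion: \( p[k] = p_0 + kh\,v_0 + \sum_{j<k} h^2\!\left((k-j) - \tfrac{1}{2}\right) a[j] \). A minimal NumPy construction (the full 3-D version would apply this per axis, or via Kronecker products with \( I_3 \)):

```python
import numpy as np

def position_prediction_matrices(K, h):
    """Build A0 and Lambda so that P = A0 @ x0 + Lambda @ U for one axis.

    x0 = [p0, v0]; U stacks the K accelerations; P stacks the K predicted
    positions. Lambda is lower-triangular, as each position depends only on
    earlier inputs.
    """
    A0 = np.zeros((K, 2))
    Lam = np.zeros((K, K))
    for k in range(1, K + 1):
        A0[k - 1] = [1.0, k * h]
        for j in range(k):
            Lam[k - 1, j] = h**2 * ((k - j) - 0.5)
    return A0, Lam
```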
The local objective function \( J_i(\mathbf{U}_i) \) for each drone has three components designed for an efficient drone light show:
1. Terminal Cost: Penalizes deviation from the target \( \mathbf{p}_{d,i} \) in the last \( \kappa \) steps of the horizon, encouraging goal arrival.
$$
J_{e,i} = \mathbf{U}_i^T (\boldsymbol{\Lambda}^T \tilde{\mathbf{Q}} \boldsymbol{\Lambda}) \mathbf{U}_i - 2(\mathbf{P}_{d,i}^T \tilde{\mathbf{Q}} \boldsymbol{\Lambda} - (\mathbf{A}_0 \mathbf{X}_{0,i})^T \tilde{\mathbf{Q}} \boldsymbol{\Lambda}) \mathbf{U}_i + \text{constant}
$$
2. Control Effort Cost: Minimizes acceleration magnitude for energy efficiency.
$$
J_{u,i} = \mathbf{U}_i^T \tilde{\mathbf{R}} \mathbf{U}_i
$$
3. Input Smoothness Cost: Penalizes changes in acceleration (\( \Delta \mathbf{U}_i \)) for smooth, stable flight.
$$
J_{\delta,i} = \mathbf{U}_i^T (\boldsymbol{\Delta}^T \tilde{\mathbf{S}} \boldsymbol{\Delta}) \mathbf{U}_i - 2(\mathbf{U}_{i,*}^T \tilde{\mathbf{S}} \boldsymbol{\Delta}) \mathbf{U}_i
$$
The matrices \( \tilde{\mathbf{Q}}, \tilde{\mathbf{R}}, \tilde{\mathbf{S}} \) are positive definite, block-diagonal weighting matrices. The total cost is \( J_i = J_{e,i} + J_{u,i} + J_{\delta,i} \).
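Collecting the three components gives a cost of the form \( J_i = \mathbf{U}_i^T \mathbf{H} \mathbf{U}_i - 2\mathbf{f}^T \mathbf{U}_i + \text{const} \). A simplified single-axis assembly, assuming NumPy; the scalar weights stand in for the block-diagonal \( \tilde{\mathbf{Q}}, \tilde{\mathbf{R}}, \tilde{\mathbf{S}} \), and the terminal weight is applied over the whole horizon here rather than only the last \( \kappa \) steps:

```python
import numpy as np

def qp_cost(Lam, A0, x0, P_d, u_last, qw=10.0, rw=0.1, sw=1.0):
    """Assemble H and f so that J(U) = U^T H U - 2 f^T U + const (one axis).

    Lam, A0 map inputs/initial state to positions; P_d is the stacked goal;
    u_last is the previously applied input, entering the smoothness term so
    that the first difference is U[0] - u_last.
    """
    K = Lam.shape[1]
    D = np.eye(K) - np.eye(K, k=-1)   # difference operator: (D U)[k] = U[k] - U[k-1]
    e0 = np.zeros(K); e0[0] = 1.0     # couples U[0] to the last applied input
    H = qw * Lam.T @ Lam + rw * np.eye(K) + sw * D.T @ D
    f = qw * Lam.T @ (P_d - A0 @ x0) + sw * u_last * e0
    return H, f
```

Since \( \tilde{\mathbf{R}} \) contributes a positive multiple of the identity, `H` is positive definite and the unconstrained minimizer `np.linalg.solve(H, f)` always exists.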
On-Demand Collision Avoidance with Soft Constraints
This is the key mechanism enabling fast planning. A drone \( i \) only considers collision constraints with a neighbor \( j \) if a conflict is predicted within the horizon, i.e., if \( \xi_{ij} = \|\Theta^{-1}(\hat{\mathbf{p}}_i[k_c|k_t-1] - \hat{\mathbf{p}}_j[k_c|k_t-1])\|_n < f(r_{min}) \), where \( k_c \) is the predicted collision time and \( f(r_{min}) \) defines a detection buffer.
Instead of a hard constraint that could render the QP infeasible, we employ a soft constraint using a slack variable \( \epsilon_{ij} \leq 0 \):
$$
\| \Theta^{-1}( \hat{\mathbf{p}}_i[k_c|k_t] - \hat{\mathbf{p}}_j[k_c|k_t-1] ) \|_n \geq r_{min} + \epsilon_{ij}
$$
This constraint is linearized around the previous prediction to become an affine constraint in the decision variable. For multiple conflicting neighbors in set \( \Omega_i \), these constraints are stacked:
$$
\mathbf{A}_{col} \bar{\mathbf{U}}_i \leq \mathbf{b}_{col}
$$
where \( \bar{\mathbf{U}}_i = [\mathbf{U}_i^T, \boldsymbol{\epsilon}_i^T]^T \) is the augmented decision vector including the slack variables. A penalty term \( \zeta \boldsymbol{\epsilon}_i^T \boldsymbol{\epsilon}_i \) is added to the cost function to discourage constraint violation. The final augmented QP for an agent detecting a collision is:
$$
\begin{aligned}
\min_{\bar{\mathbf{U}}_i} \quad & J_i(\mathbf{U}_i) + \bar{\mathbf{U}}_i^T \mathbf{H}_{\epsilon} \bar{\mathbf{U}}_i - \mathbf{f}_{\epsilon}^T \bar{\mathbf{U}}_i \\
\text{subject to} \quad & \mathbf{A}_{in,aug} \bar{\mathbf{U}}_i \leq \mathbf{b}_{in,aug}
\end{aligned}
$$
If no collision is predicted, the drone simply solves the standard QP without the collision constraints \( (\mathbf{A}_{col}, \mathbf{b}_{col}) \) and slack variables, which is much faster.
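One row of the linearized constraint can be built as follows for the \( n = 2 \) case. This is a first-order Taylor sketch of the standard construction, not the paper's exact \( \mathbf{A}_{col} \) assembly, and the default `r_min` and `c` are illustrative:

```python
import numpy as np

def linearized_collision_row(p_i_prev, p_j_prev, r_min=0.3, c=2.0):
    """Linearize ||Theta^{-1}(p_i - p_j_prev)|| >= r_min + eps at p_i = p_i_prev.

    Returns (a, b) defining the affine constraint  a @ p_i - eps >= b  in the
    decision variables (drone i's position at the conflict step, slack eps <= 0).
    """
    theta_inv = np.diag([1.0, 1.0, 1.0 / c])
    d = p_i_prev - p_j_prev
    xi = np.linalg.norm(theta_inv @ d)        # previously predicted scaled distance
    grad = (theta_inv @ theta_inv @ d) / xi   # gradient of the scaled norm w.r.t. p_i
    return grad, r_min - xi + grad @ p_i_prev
```

Stacking one such row per conflicting neighbor in \( \Omega_i \), and mapping positions back to inputs through \( \boldsymbol{\Lambda} \), yields the \( \mathbf{A}_{col} \bar{\mathbf{U}}_i \leq \mathbf{b}_{col} \) block above.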
Intelligent Target Point Assignment
To further enhance the efficiency of a drone light show formation change, we treat drones as unlabeled. This means any drone can be assigned to any final target position within the desired shape. The goal is to find the assignment that minimizes the total travel distance and inherently reduces the likelihood of path crossings and conflicts.
We model the relative position between drones \( i \) and \( j \) as \( \mathbf{s}_{ij}(k) = \mathbf{p}_j(k) – \mathbf{p}_i(k) \). Let \( \mathbf{u}_{ij} = \mathbf{s}_{ij}(0) \) be the initial relative position and \( \mathbf{w}_{ij} \) be the desired relative position at the target formation. The core idea of our assignment algorithm is to analyze the dot product \( \mathbf{u}_{ij}^T \mathbf{w}_{ij} \).
- If \( \mathbf{u}_{ij}^T \mathbf{w}_{ij} \geq 0 \), the relative orientation between drones \( i \) and \( j \) does not change drastically; no need to swap their goals.
- If \( \mathbf{u}_{ij}^T \mathbf{w}_{ij} < 0 \), the drones would effectively cross paths during a direct assignment. Swapping their target positions is beneficial.
The algorithm iteratively swaps target labels for drone pairs with \( \mathbf{u}_{ij}^T \mathbf{w}_{ij} < 0 \), which monotonically decreases the total sum of squared relative position errors \( \sum_{i,j} \|\mathbf{u}_{ij} - \mathbf{w}_{ij}\|^2 \). This process converges to an efficient assignment, simplifying the subsequent trajectory planning task for the DMPC algorithm and leading to shorter, safer overall paths for the drone light show.
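A minimal pairwise-swap implementation of this heuristic, assuming NumPy (the sweep limit is an illustrative safeguard, not part of the method):

```python
import numpy as np

def assign_targets(starts, goals, max_sweeps=50):
    """Greedy pairwise-swap target assignment for unlabeled drones.

    starts, goals: (N, 3) arrays; goals[i] is drone i's current target.
    Any pair whose initial and desired relative positions point in opposite
    directions (u_ij . w_ij < 0) has its targets swapped, removing the
    implied path crossing.
    """
    goals = goals.copy()
    n = len(starts)
    for _ in range(max_sweeps):
        swapped = False
        for i in range(n):
            for j in range(i + 1, n):
                u = starts[j] - starts[i]        # initial relative position
                w = goals[j] - goals[i]          # desired relative position
                if u @ w < 0:
                    goals[[i, j]] = goals[[j, i]]
                    swapped = True
        if not swapped:
            break
    return goals
```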
Performance Evaluation and Discussion
The proposed DMPC algorithm with intelligent assignment was evaluated through extensive simulations and compared against state-of-the-art methods like Sequential Convex Programming (SCP), both centralized and decoupled versions.
Success Rate and Computational Speed
Simulations were run with increasing swarm sizes in a fixed density environment. The success rate measures the planner’s ability to find a collision-free trajectory for all drones within a specified time limit.
| Method | Success Rate (N=50) | Success Rate (N=150) | Avg. Computation Time (s) |
|---|---|---|---|
| Centralized SCP | ~100% | ~100% | High (>100s) |
| Decoupled SCP | ~85% | < 50% | Medium (~10s) |
| Proposed DMPC (Soft Constraints) | > 98% | > 75% | Low (~1.5s) |
The results clearly show that our DMPC method maintains a high success rate even for large swarms, while its computation time is orders of magnitude lower than centralized SCP and significantly better than decoupled SCP. The on-demand strategy with soft constraints is crucial; a version with hard constraints failed frequently due to infeasible QPs in dense scenarios.
Trajectory Optimality and Efficiency
While DMPC is a distributed and thus suboptimal method, its trajectories are highly efficient. We measure the total flight distance of the swarm for a formation change. The following table compares the relative flight distance (normalized to the centralized SCP solution, which is near-optimal).
| Swarm Density (drones/m³) | DMPC (No Assignment) | DMPC (With Intelligent Assignment) |
|---|---|---|
| Low (0.5) | 1.08 | 1.02 |
| Medium (2.0) | 1.15 | 1.06 |
| High (4.0) | 1.22 | 1.09 |
The intelligent target assignment provides a consistent improvement, reducing the total flight distance by roughly 6-11% across densities, with the largest gains at high density. This directly translates to lower energy consumption and allows for faster completion of formation transitions in an interactive drone light show.
Practical Performance and Feasibility
The DMPC formulation inherently handles state and input constraints. Tuning parameters like the terminal cost horizon \( \kappa \) and the slack penalty weight \( \zeta \) allows a show designer to balance aggression and safety. A higher \( \kappa \) makes drones head to their goals more directly but may cause overshoot; a higher \( \zeta \) makes collision avoidance more conservative.
$$
\kappa_{\text{high}}, \zeta_{\text{low}} \rightarrow \text{Aggressive, fast transitions}
$$
$$
\kappa_{\text{low}}, \zeta_{\text{high}} \rightarrow \text{Conservative, smooth, safe transitions}
$$
This tunability is essential for adapting the algorithm to different drone light show environments, whether in a tightly controlled arena or a more dynamic, open space.
Conclusion
This paper presented a comprehensive solution for generating real-time trajectories for interactive drone light shows. By framing the formation change problem within a Distributed Model Predictive Control (DMPC) framework, we achieve the scalability necessary for large swarms. The introduction of an on-demand collision avoidance strategy with soft constraints ensures both computational efficiency and a high success rate. Furthermore, coupling this with an intelligent, unlabeled target assignment algorithm significantly improves the overall flight efficiency of the swarm.
The proposed system meets the core requirements for the next generation of drone light show technology: it is fast enough to allow real-time audience interaction, scalable to handle impressive numbers of drones, and reliable in producing safe, energy-efficient flight paths. Future work may focus on integrating robust communication protocols for the DMPC data exchange, accounting for more detailed aerodynamic disturbances in dense formations, and developing higher-level user interfaces that seamlessly translate creative input into the target waypoints processed by this robust trajectory planner.
