Design and Implementation of a Novel Control System for Cost-Effective Drone Light Shows

The increasing popularity of aerial performances has positioned coordinated drone light shows as a significant new direction for the development of small unmanned aerial vehicles (UAVs). Traditionally, these captivating displays rely on hundreds or even thousands of drones, each equipped with multi-colored LED lights, forming a dynamic, luminous point cloud in the night sky. Through precise spatio-temporal programming, the on/off state and color of each individual LED are controlled to create complex animated graphics and patterns. While visually stunning, this conventional paradigm for drone light shows faces several inherent challenges. The most prominent is the exceptionally high cost associated with manufacturing, maintaining, and operating such a massive fleet. Furthermore, the system’s robustness is constantly tested; the failure of a single drone (e.g., due to a crash, fly-away, or communication dropout) can create a visible gap in the formation, degrading the overall visual integrity. The logistical complexity of managing, charging, and transporting hundreds of units for a single performance also presents substantial operational hurdles.

This article presents the design and implementation of a novel, cost-reducing control system architecture for drone light shows. The core innovation lies in drastically reducing the number of aerial carrier platforms required. Instead of using one drone per LED point, the proposed system employs a minimal formation of just four UAVs to delineate a three-dimensional volumetric space. These drones carry not simple point-source LEDs, but elongated, addressable LED light strips along their structural members. By precisely controlling the illumination of specific pixels along these strips in concert with the drones’ spatial positioning, a high-resolution 3D LED lattice is effectively synthesized within the defined volume. This approach decouples the number of display elements from the number of drones, offering a potentially revolutionary reduction in the cost and complexity of large-scale aerial displays.

System Architecture and Principle of Operation

The overarching system employs a two-layer hierarchical control architecture to manage both the macro-positioning of the drone carriers and the micro-illumination of the LED lattice.

Layer 1: Drone Formation Control & Command Forwarding. This layer encompasses the ground control station (GCS) and the flight control systems onboard each UAV. The primary responsibilities are:

  1. Precise Formation Flying: Maintaining the four drones at the precise vertices of a predefined (and potentially dynamic) polygon that defines the bounding box of the display volume.
  2. Differential GPS Positioning: Utilizing Real-Time Kinematic (RTK) or similar differential GNSS techniques to achieve centimeter-level relative positioning accuracy between drones, which is critical for a stable lattice.
  3. Command Gateway: Acting as a wireless relay, receiving high-level animation commands from the GCS and forwarding specific LED control data streams to the onboard FPGA controller on each drone.

Layer 2: Volumetric LED Lattice Control. This layer resides on each drone and consists of an FPGA-based controller and the connected addressable LED strips. Its functions are:

  1. Command Processing: Receiving and interpreting the LED control data stream forwarded from the drone’s flight controller.
  2. Pixel Mapping & Timing: Mapping the global 3D animation frame data to the specific LED pixels located on the drone’s carried strips, generating the precise timing signals required by the LED protocol (e.g., WS2812B).
  3. Signal Driving: Providing sufficient current to drive long chains of LEDs reliably.

The synergy between these two layers is fundamental. Layer 1 defines the coordinate system and its stability, while Layer 2 populates this coordinate system with light. The control flow can be summarized as: GCS (Animation Script) -> Drone Flight Controller (Positioning & Forwarding) -> Onboard FPGA (Pixel Control) -> LED Strip.
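The command flow above implies a compact wire format for the LED data forwarded from the GCS through the flight controller to the FPGA. The article does not specify a packet layout, so the following is a minimal, hypothetical sketch of such a format (the header fields and magic byte are assumptions for illustration):

```python
import struct

# Hypothetical LED command packet relayed GCS -> flight controller -> FPGA.
# Layout (an assumption, not a published format): magic byte, drone ID,
# frame number, pixel count, then 24-bit RGB triples.
MAGIC = 0xA5

def pack_led_frame(drone_id: int, frame_no: int,
                   pixels: list[tuple[int, int, int]]) -> bytes:
    """Serialize one animation frame for a single drone."""
    header = struct.pack("<BBHH", MAGIC, drone_id, frame_no, len(pixels))
    body = b"".join(struct.pack("BBB", r, g, b) for r, g, b in pixels)
    return header + body

def unpack_header(packet: bytes) -> tuple[int, int, int]:
    """Parse the fixed 6-byte header; returns (drone_id, frame_no, count)."""
    magic, drone_id, frame_no, count = struct.unpack_from("<BBHH", packet, 0)
    assert magic == MAGIC, "bad packet"
    return drone_id, frame_no, count

pkt = pack_led_frame(drone_id=2, frame_no=100, pixels=[(255, 0, 0)] * 4)
```

A fixed binary header keeps parsing trivial on both the flight controller (forwarding) and the FPGA side (a simple byte-counting state machine).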

Precision Drone Positioning via Differential GPS

A stable spatial reference frame is the foundation of this drone light show system. Standard GPS positioning, with its typical meter-level error, is insufficient. Therefore, a carrier-phase differential GPS (CDGPS) approach is adopted to achieve the required sub-decimeter relative accuracy between the drone vertices.

The system comprises a fixed, ground-based reference station with known coordinates and the UAVs acting as mobile rovers. The reference station calculates error corrections by comparing its known position with its GPS-measured position. These corrections, which account for common-mode errors like satellite clock drift and atmospheric delays, are transmitted to the rover drones.

The pseudorange measurement \(\rho\) from a satellite \(s\) to a receiver \(r\) at time \(t\) can be modeled as:
$$\rho_{r}^{s}(t) = R_{r}^{s}(t) + c \cdot [\delta t_r(t) - \delta t^s(t)] + I_{r}^{s}(t) + T_{r}^{s}(t) + \epsilon_{\rho}$$
where:

  • \(R_{r}^{s}(t)\) is the true geometric range.
  • \(c\) is the speed of light.
  • \(\delta t_r(t)\) and \(\delta t^s(t)\) are the receiver and satellite clock biases, respectively.
  • \(I_{r}^{s}(t)\) and \(T_{r}^{s}(t)\) are the ionospheric and tropospheric delays.
  • \(\epsilon_{\rho}\) encompasses other errors like multipath and receiver noise.

For the reference station \(A\) (known position) and a rover drone \(B\), the pseudorange measurements to the same satellite \(s\) are:
$$\rho_{A}^{s} = R_{A}^{s} + c \cdot (\delta t_A - \delta t^s) + I_{A}^{s} + T_{A}^{s} + \epsilon_{A}$$
$$\rho_{B}^{s} = R_{B}^{s} + c \cdot (\delta t_B - \delta t^s) + I_{B}^{s} + T_{B}^{s} + \epsilon_{B}$$

The reference station computes a correction \(\Delta \rho^{s}\) for satellite \(s\):
$$\Delta \rho^{s} = R_{A}^{s} - \rho_{A}^{s} = -c \cdot (\delta t_A - \delta t^s) - I_{A}^{s} - T_{A}^{s} - \epsilon_{A}$$

This correction is sent to rover \(B\), which applies it to form a corrected pseudorange:
$$\tilde{\rho}_{B}^{s} = \rho_{B}^{s} + \Delta \rho^{s} = R_{B}^{s} + c \cdot (\delta t_B - \delta t_A) + (I_{B}^{s} - I_{A}^{s}) + (T_{B}^{s} - T_{A}^{s}) + (\epsilon_{B} - \epsilon_{A})$$

For a baseline distance between \(A\) and \(B\) less than ~10-20 km, the atmospheric delays are highly correlated, so \((I_{B}^{s} - I_{A}^{s}) \approx 0\) and \((T_{B}^{s} - T_{A}^{s}) \approx 0\). The dominant remaining error is the differential receiver clock bias, \(d_{AB} = c \cdot (\delta t_B - \delta t_A)\), which is common across all satellites. The observation equation for rover \(B\) after correction becomes:
$$\tilde{\rho}_{B}^{s} = R_{B}^{s} + d_{AB} + \nu^{s}$$
where \(\nu^{s} = \epsilon_{B} – \epsilon_{A}\) is the differential measurement noise.

The geometric range \(R_{B}^{s}\) is a function of the rover’s unknown coordinates \(\mathbf{X}_B = (X_B, Y_B, Z_B)^T\) and the satellite’s known coordinates \(\mathbf{X}^s = (X^s, Y^s, Z^s)^T\):
$$R_{B}^{s} = \| \mathbf{X}^s – \mathbf{X}_B \| = \sqrt{(X^s – X_B)^2 + (Y^s – Y_B)^2 + (Z^s – Z_B)^2}$$

Linearizing this equation around an approximate rover position \(\mathbf{X}_B^0\) leads to the design matrix for a least-squares estimation. With observations to \(n\) satellites (\(n \geq 4\)), we can solve for the rover’s position offset \(\Delta \mathbf{X}_B\) and the clock bias term \(d_{AB}\). The system of linearized equations for all satellites is:
$$
\mathbf{G} \cdot \begin{bmatrix} \Delta \mathbf{X}_B \\ d_{AB} \end{bmatrix} = \mathbf{L}
$$
where \(\mathbf{G}\) is the \((n \times 4)\) geometry matrix containing the line-of-sight unit vectors and a column of ones for the clock term, and \(\mathbf{L}\) is the \((n \times 1)\) vector of observed minus computed pseudorange residuals. The least-squares solution is:
$$
\begin{bmatrix} \Delta \mathbf{X}_B \\ d_{AB} \end{bmatrix} = (\mathbf{G}^T \mathbf{G})^{-1} \mathbf{G}^T \mathbf{L}
$$
This process, executed in real-time on each rover drone’s flight computer, allows the fleet to maintain formation with the centimeter-level precision essential for a coherent drone light show lattice.
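The linearized least-squares step described above can be sketched in a few lines of NumPy. This is an illustrative implementation of the iterated Gauss-Newton solve, assuming the satellite ECEF positions and the differentially corrected pseudoranges \(\tilde{\rho}_{B}^{s}\) are already available; it is not the flight code itself:

```python
import numpy as np

def solve_position(sat_pos, rho_corr, x0, iters=5):
    """Iterated least-squares solve for rover position and clock bias.

    sat_pos  : (n, 3) known satellite ECEF coordinates, meters
    rho_corr : (n,)   differentially corrected pseudoranges, meters
    x0       : (3,)   initial rover position guess, meters
    Returns (position, clock_bias_in_meters).
    """
    x, d = np.asarray(x0, float), 0.0
    for _ in range(iters):
        diff = sat_pos - x                     # vectors rover -> satellites
        ranges = np.linalg.norm(diff, axis=1)  # predicted geometric ranges R_B^s
        # Geometry matrix G: line-of-sight unit vectors plus the clock column of ones
        G = np.hstack([-diff / ranges[:, None], np.ones((len(ranges), 1))])
        L = rho_corr - (ranges + d)            # observed-minus-computed residuals
        dx, *_ = np.linalg.lstsq(G, L, rcond=None)
        x, d = x + dx[:3], d + dx[3]
    return x, d
```

With four or more satellites in good geometry, the iteration converges in a handful of steps even from a coarse initial guess, matching the \((\mathbf{G}^T \mathbf{G})^{-1} \mathbf{G}^T \mathbf{L}\) solution above.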

Flight Control System Hardware

The autonomous stability and navigation of each carrier drone are managed by its Flight Control Computer (FCC). A typical design for such a system is outlined below:

The core processor is often a high-performance microcontroller like an ARM Cortex-M series (e.g., STM32) or a dedicated flight controller SoC. It runs the core estimation and control algorithms. Essential sensor data is provided by an integrated Inertial Measurement Unit (IMU), typically containing a 3-axis MEMS gyroscope, a 3-axis accelerometer, and often a 3-axis magnetometer. A barometric pressure sensor provides altitude data. The critical position and velocity data come from the multi-frequency GNSS receiver module with support for RTK corrections. Communication with the ground station is handled by a robust long-range telemetry radio link (e.g., 900 MHz or 2.4 GHz). Finally, the FCC outputs Pulse-Width Modulation (PWM) or DShot signals to Electronic Speed Controllers (ESCs) which drive the brushless motors, and to servos if applicable.

The state estimation fuses data from the IMU (high rate, short-term accuracy) and the GNSS (lower rate, long-term absolute accuracy) using a Kalman Filter or complementary filter. This provides a robust estimate of the drone’s attitude, velocity, and position. The control law, often a cascaded PID or more advanced nonlinear controller, uses this state estimate and the desired formation position to compute the motor commands needed to stabilize and guide the drone.
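As a minimal illustration of the fusion idea, a single-axis complementary filter blends the integrated gyro rate (accurate short-term) with the accelerometer-derived angle (noisy but drift-free). This is a textbook sketch, not the Kalman filter an actual autopilot would run:

```python
import math

def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step for a single attitude angle (radians).

    alpha weights the gyro path; (1 - alpha) slowly pulls the estimate
    toward the accelerometer angle, removing gyro integration drift.
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def accel_to_pitch(ax, az):
    """Pitch angle (rad) from accelerometer specific-force components."""
    return math.atan2(ax, az)
```

Run at the IMU rate (hundreds of Hz), the estimate tracks fast motion through the gyro term while the accelerometer term bounds long-term drift, the same trade-off the Kalman filter resolves optimally.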

Table 1: Key Components of the UAV Flight Control System
| Component | Model/Type Example | Primary Function |
| --- | --- | --- |
| Main Processor | STM32F7 / Pixhawk-class autopilot | Runs navigation, control algorithms, and data fusion. |
| IMU | MPU-6000 / ICM-20689 (gyro + accel) | Measures angular rate and linear acceleration. |
| Magnetometer | HMC5883L / RM3100 | Provides heading reference (yaw). |
| Barometer | MS5611 | Measures atmospheric pressure for altitude. |
| GNSS Module | u-blox ZED-F9P (RTK-capable) | Provides global position and velocity; receives RTCM corrections. |
| Telemetry Radio | SiK / Holybro 500 mW 915 MHz | Bidirectional data link with the Ground Control Station. |
| Power Module | Voltage/current sensor | Monitors battery status and provides stable 5 V/12 V power. |

Synthesis of the Volumetric LED Lattice

The core visual innovation is the creation of a 3D point matrix using linear elements carried by only four drones. Consider a target display volume discretized into an \(N_x \times N_y \times N_z\) lattice. In a traditional show, this would require \(N_x \cdot N_y \cdot N_z\) drones.

In the proposed system, four drones are positioned to delineate a rectangular prism in space. Each drone carries \(M\) rigid or semi-rigid booms, each embedded with a densely packed, addressable LED strip. The booms are arranged so that their illuminated pixels, viewed from a distance, interpolate the points within the volume. A simplified 2D analogy is four drones holding the four corners of a rectangular grid of lights, where the grid lines are the LED strips.

For an \(8 \times 8 \times 8\) lattice (512 points), the system might use four drones, each carrying three orthogonal LED strips (along the X, Y, and Z virtual axes relative to the formation). By carefully mapping the 3D coordinate of each desired light point to a specific drone and a specific pixel index on one of its strips, the full 512-point lattice can be addressed. The spatial resolution is determined by the pixel density on the strips and the physical size of the formation.

The mapping function \( \mathcal{M} \) from a global lattice point \( P_{global}(i,j,k) \) to a drone ID \( d \) and a strip pixel address \( a \) is critical:
$$ (d, a) = \mathcal{M}(i, j, k) $$
This function is precomputed based on the known geometry of the drone formation and the physical layout of strips on each drone. The animation engine on the GCS calculates the color \( C(i,j,k,t) \) for every lattice point at frame time \( t \), then uses \( \mathcal{M} \) to pack this data into command packets destined for the appropriate drone’s FPGA.
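One possible concrete form of \( \mathcal{M} \) for the \(8 \times 8 \times 8\) example is sketched below. The quadrant-based layout is purely illustrative (the article does not fix a specific strip arrangement); the essential property is that the map is a bijection from the 512 voxels onto (drone, pixel) pairs:

```python
def lattice_map(i, j, k, n=8):
    """Hypothetical voxel-to-pixel mapping for an n x n x n lattice
    split among four drones (illustrative layout, not from the article).

    Each drone owns one quadrant of the (i, j) plane; within a drone,
    voxels are addressed by a linear pixel index along its strip chain.
    """
    half = n // 2
    drone = (i >= half) * 2 + (j >= half)   # quadrant -> drone ID 0..3
    li, lj = i % half, j % half             # local coordinates within the quadrant
    addr = (li * half + lj) * n + k         # linear pixel address on the strip chain
    return drone, addr
```

For \(n = 8\) this assigns exactly 128 pixels to each of the four drones, and the GCS animation engine would apply it per-voxel when packing frame data into per-drone command packets.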

FPGA-Based LED Strip Control Hardware

To handle the high-speed, precise timing requirements of long addressable LED strips (such as the WS2812B, which requires an ~800 kbps data rate per strip with strict waveform timing), a Field-Programmable Gate Array (FPGA) is an ideal choice for the LED controller. A small, low-cost FPGA, such as an Intel (Altera) Cyclone IV EP4CE6 or a Lattice iCE40UP, provides sufficient logic elements and I/O pins.

The FPGA design incorporates several key modules:

  1. Communication Interface: A UART or SPI slave module to receive command packets from the drone’s flight controller.
  2. Frame Buffer Memory: A block RAM (BRAM) configured to store the color data (typically 24-bit RGB) for all LED pixels assigned to this specific drone. Double-buffering is often used to prevent visual tearing.
  3. Pixel Timing Engine: A dedicated finite state machine that reads pixel data from the frame buffer and generates the precise non-return-to-zero (NRZ) waveform mandated by the LED protocol. This is where the FPGA excels: it can generate multiple perfectly synchronized data streams with sub-microsecond timing accuracy.
  4. Output Drivers: The FPGA’s GPIO pins directly drive the data lines of the LED strips. For very long strips or high inrush current, a simple level shifter (e.g., 74HCT245) may be added for voltage translation and current buffering.
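The timing engine's pulse widths must be expressed as clock-cycle counts at whatever clock the PLL supplies. The helper below computes those counts from the nominal WS2812B datasheet timings (bit period 1.25 µs, '0' high time 0.40 µs, '1' high time 0.80 µs, reset latch > 50 µs); it is a design-time sketch one might use to parameterize the HDL, not part of the FPGA logic itself:

```python
def ws2812b_counts(clk_hz):
    """Clock-cycle counts for WS2812B bit timing at a given FPGA clock.

    Nominal datasheet timings: 1.25 us bit period, 0.40 us '0' high,
    0.80 us '1' high, and a > 50 us low period to latch the frame.
    """
    ns_per_cycle = 1e9 / clk_hz
    return {
        "bit_period": round(1250 / ns_per_cycle),
        "t0_high":    round(400 / ns_per_cycle),
        "t1_high":    round(800 / ns_per_cycle),
        "reset_min":  round(50_000 / ns_per_cycle),
    }
```

At a 50 MHz logic clock, for example, a '0' bit is 20 cycles high and a '1' bit is 40 cycles high, comfortably within the protocol's ±150 ns tolerance window.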

Table 2: Resource Allocation Example for an FPGA LED Controller (EP4CE6)

| Module | Logic Elements | Memory Bits (BRAM) | I/O Pins | Description |
| --- | --- | --- | --- | --- |
| UART Interface | ~120 | 0 | 2 (RX, TX) | Receives serial commands from the flight controller. |
| Frame Buffer (512 px) | ~50 (control) | 512 × 24 = 12,288 | 0 | Stores RGB data for all local pixels; dual-port RAM allows simultaneous write/read. |
| Pixel Timing Engine (×4 strips) | ~400 | 0 | 4 (data output) | Four parallel state machines generating WS2812B waveforms. |
| Clock Management (PLL) | 1 PLL | 0 | 0 | Generates a stable clock for logic and a precise baud rate. |
| Total Estimate | ~570 | ~12.3 Kb | < 10 | Well within the capacity of a small FPGA. |

Software Design and Control Flow

The software ecosystem spans the Ground Control Station, the drone’s flight controller firmware, and the logic on the FPGA.

1. Ground Control Station (GCS) Software:

  • Animation Authoring: Tools to design 3D sequences, defining the color and state of each voxel in the lattice over time.
  • Voxel-to-Pixel Mapping: Implements the \( \mathcal{M}(i, j, k) \) function to compile the animation into per-drone command lists.
  • Formation Management: Interfaces with the drone fleet via MAVLink protocol, sending waypoints for the formation shape and initiating/stopping the LED sequence transmission.
  • Real-time Telemetry: Monitors drone health, position, and battery status during the drone light show.

2. Drone Flight Controller Firmware:
The main control loop on the flight controller is augmented to handle LED commands.
$$
\begin{aligned}
&\text{Loop:} \\
&\quad \text{1. Read Sensors (IMU, GNSS, etc.)} \\
&\quad \text{2. Run State Estimation (Kalman Filter)} \\
&\quad \textbf{3. Check for new LED frame data from GCS} \\
&\quad \textbf{4. Forward LED data to FPGA via UART/SPI} \\
&\quad \text{5. Calculate Control Error: } \mathbf{e}(t) = \mathbf{X}_{desired}(t) - \mathbf{X}_{estimated}(t) \\
&\quad \text{6. Compute Control Output: } \mathbf{u}(t) = \mathbf{K}_P \mathbf{e}(t) + \mathbf{K}_I \int \mathbf{e}(t)dt + \mathbf{K}_D \frac{d\mathbf{e}(t)}{dt} \\
&\quad \text{7. Send motor commands (PWM/DShot)} \\
&\quad \text{8. Repeat.}
\end{aligned}
$$
The key addition is steps 3 and 4, where the flight controller acts as a reliable data bridge, ensuring LED commands are passed to the FPGA without interfering with the critical flight control timing.
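Steps 5-7 of the loop above amount to a per-axis PID update. A minimal sketch (illustrative gains, single axis, discrete time) shows the structure; a real autopilot would use a cascaded version with separate position, velocity, and attitude loops:

```python
class PID:
    """Minimal discrete PID controller for one axis of the loop above."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """One control step: returns u = Kp*e + Ki*integral(e) + Kd*de/dt."""
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In practice the integral term would be clamped (anti-windup) and the derivative term low-pass filtered, but the discrete form of step 6 is exactly this accumulation and finite difference.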

3. FPGA Firmware (HDL Code):
The FPGA operates on a simpler, hardware-timed loop. Its primary task is to move data from its input buffer to its output shift registers at the exact moment required. A finite state machine (FSM) controls this process, ensuring that a new frame of LED data is only latched to the output during the reset period of the LED strips (a low signal for >50µs), preventing visual corruption.

System Integration and Performance Considerations

Integrating the two-layer system requires careful attention to synchronization, communication latency, and power management.

Synchronization: Absolute time synchronization between the drones and the GCS is crucial. This is typically achieved using the GPS PPS (Pulse Per Second) signal, allowing all drones to start each animation frame in unison. The FPGA’s timing is derived from its own crystal oscillator, which is stable enough to maintain precise pixel timing over the duration of a frame (usually < 50ms).

Communication Latency: The data pipeline from GCS -> Drone Radio -> Flight Controller -> FPGA must have predictable, low latency. To achieve this, animation frames are typically streamed with a lead time, or pre-loaded onto the drone’s storage if the sequence is short. The telemetry link must also have sufficient bandwidth. Assuming 24-bit color per pixel and a 30 Hz refresh rate for a 512-voxel lattice, the required data rate is:
$$ \text{Data Rate} = 512 \frac{\text{pixels}}{\text{frame}} \times 24 \frac{\text{bits}}{\text{pixel}} \times 30 \frac{\text{frames}}{\text{sec}} = 368,640 \text{ bps} \approx 360 \text{ kbps} $$
This is well within the capability of modern digital telemetry links (which often operate at 1 Mbps+).
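The same bandwidth calculation scales directly with lattice size, which is useful when sizing the radio link for denser strips. A one-line helper (raw payload only, excluding packet and protocol overhead):

```python
def led_stream_rate_bps(voxels, bits_per_pixel=24, fps=30):
    """Raw LED data rate in bits/s, excluding packet and protocol overhead."""
    return voxels * bits_per_pixel * fps

# 512 voxels at 24-bit color, 30 Hz: the ~360 kbps figure from the text.
rate = led_stream_rate_bps(512)
```

A 4096-voxel lattice at the same settings needs ~2.9 Mbps, which would already push past typical 1 Mbps telemetry radios and motivate on-board sequence storage.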

Power Management: Driving hundreds of high-brightness LEDs represents a significant power draw. Each LED can draw up to 60 mA at full white brightness, so a drone controlling 128 LEDs could see a peak current of ~7.7 A for the lights alone. Drones therefore require high-capacity batteries, and the LED control system must include intelligent power management, such as global brightness scaling or content-based power limiting, to preserve safe flight times.
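A simple form of the content-based power limiting mentioned above estimates each frame's current draw and globally scales brightness when it exceeds a budget. This sketch assumes the common ~20 mA-per-color-channel figure for WS2812B-class LEDs (the budget value and function names are illustrative):

```python
def frame_current_a(pixels, ma_per_channel=20.0):
    """Estimated strip current for one frame, in amperes.

    WS2812B-class LEDs draw roughly 20 mA per color channel at full
    brightness (~60 mA for full white), scaling linearly with PWM value.
    """
    channel_sum = sum(r + g + b for r, g, b in pixels)
    return channel_sum / 255.0 * ma_per_channel / 1000.0

def scale_frame(pixels, budget_a, ma_per_channel=20.0):
    """Globally scale brightness so the frame stays within a current budget."""
    need = frame_current_a(pixels, ma_per_channel)
    if need <= budget_a:
        return pixels
    s = budget_a / need
    return [(int(r * s), int(g * s), int(b * s)) for r, g, b in pixels]
```

For 128 pixels at full white this estimates ~7.7 A, matching the figure above; capping the budget at, say, 4 A uniformly dims the frame rather than clipping individual colors, preserving the animation's hues.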

Table 3: Key Performance Parameters and Design Targets
| Parameter | Target Specification | Notes |
| --- | --- | --- |
| Formation Positioning Accuracy | < 10 cm (3D RMS) | Enabled by RTK/PPK GNSS. |
| Formation Update Rate | > 10 Hz | Ensures smooth formation movement. |
| LED Frame Refresh Rate | 30-60 Hz | Provides flicker-free animation. |
| End-to-End Command Latency | < 100 ms | From GCS command to LED change. |
| Number of Carrier Drones | 4 (minimal configuration) | Defines a volumetric space. |
| Effective Lattice Points | 512-4096+ | Scalable with strip density and formation size. |
| Operational Range | 500 m - 1 km | Dependent on radio link and visual range. |
| Flight Time | 15-25 minutes | Dependent on battery capacity and LED load. |

Conclusion and Future Work

The proposed control system architecture presents a paradigm shift for drone light shows, fundamentally addressing the core issue of cost and scalability. By decoupling the light-emitting elements from the aerial platforms and employing a minimal drone formation to define a scalable volumetric canvas, the system dramatically reduces the number of complex, expensive UAVs required for a large-scale display. The integration of high-precision differential GPS for formation holding, coupled with a robust two-layer control scheme and FPGA-based high-fidelity LED driving, results in a stable and reliable platform for synthesizing complex 3D aerial animations.

This approach offers significant advantages: reduced capital and operational costs, increased reliability (fewer points of failure), simplified logistics, and enhanced scalability (resolution can be increased by using longer, denser LED strips without adding drones). Future work will focus on optimizing the formation flight algorithms for dynamic lattice morphing, developing more efficient inter-drone wireless communication for distributed control, and exploring advanced pixel mapping techniques for non-rectilinear formations. Furthermore, integrating obstacle detection and avoidance systems will be crucial for safe operation in complex environments. This novel system paves the way for more accessible, versatile, and creatively ambitious drone light shows.
