Energy Consumption Optimization in Drone Formation Change: A Simulation Teaching Approach

In the context of modern autonomous systems, drone formations have emerged as a pivotal technology with applications ranging from aerial displays to surveillance and logistics. As an educator in automation and control engineering, I have integrated drone formation control into graduate-level courses, such as “Drone Control Technology,” to foster innovation and critical thinking. However, traditional teaching methods often fall short in engaging students with complex topics like formation transformation, where energy consumption optimization is a key challenge. To address this, I developed a simulation-based teaching module that focuses on balancing energy usage during drone formation changes, leveraging advanced algorithms like particle swarm optimization and behavior control. This article details my approach, combining theoretical foundations with practical simulations to enhance student learning and inspire creative problem-solving.

The core issue in drone formation transformation lies in the energy imbalance among drones during formation switching. In typical scenarios, such as aerial light shows, drones must reconfigure from one formation to another, often using assignment algorithms like the Hungarian method to minimize total travel distance. While effective for overall distance optimization, this approach can lead to uneven energy consumption, where some drones are assigned longer paths or higher altitude climbs, causing faster battery depletion and reducing overall flight time. This “bucket effect” compromises the performance of the entire drone formation. Thus, my teaching module emphasizes energy balance as a critical optimization criterion, moving beyond mere distance minimization to ensure sustainable and efficient drone operations.

To formalize the problem, consider a drone formation with \( n \) drones and \( n \) target positions in a 3D space. Each drone \( A_i \) (for \( i = 1, 2, \dots, n \)) must be assigned to a target point \( B_j \) (for \( j = 1, 2, \dots, n \)), forming a one-to-one mapping. This can be modeled as an assignment problem with a binary decision matrix \( \mathbf{X} = (x_{ij}) \), where \( x_{ij} = 1 \) if drone \( i \) is assigned to target \( j \), and 0 otherwise. The goal is to minimize an objective function that accounts for both travel distance and energy consumption. Traditional approaches use a cost matrix \( c_{ij} \) representing the Euclidean distance between drone \( i \)’s start position \( \mathbf{p}_i^0 \) and target \( j \)’s position \( \mathbf{p}_j \), but this ignores energy factors like altitude changes. In my teaching, I introduce a modified cost function:

$$ c_{ij} = \|\mathbf{p}_i^0 - \mathbf{p}_j\| + b \cdot e^{|\mathbf{p}_j^h - \mathbf{p}_i^{0,h}|} $$

where \( \mathbf{p}_i^{0,h} \) and \( \mathbf{p}_j^h \) are the heights of the start and target positions, respectively, and \( b \) is a climbing factor that penalizes altitude increases. This exponential term simulates the higher energy cost of climbing, encouraging assignments that favor descending paths or minimal height changes. The assignment problem is then formulated as a linear programming model:

$$ \min \sum_{i=1}^n \sum_{j=1}^n c_{ij} x_{ij} $$

subject to:

$$ \sum_{i=1}^n x_{ij} = 1, \quad j = 1, 2, \dots, n $$
$$ \sum_{j=1}^n x_{ij} = 1, \quad i = 1, 2, \dots, n $$
$$ x_{ij} \in \{0, 1\} $$

This model serves as the foundation for task allocation in drone formation changes. However, to address energy balance, I extend it using metaheuristic optimization. In class, I guide students through solving this with the Hungarian algorithm for baseline comparisons, then introduce particle swarm optimization (PSO) to incorporate energy-aware objectives. This hands-on approach helps students grasp the trade-offs between efficiency and fairness in drone formations.
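To make the allocation step concrete, here is a minimal Python sketch (the course simulations use MATLAB, but the logic is identical). It builds the modified cost \( c_{ij} \) from hypothetical toy coordinates and finds the optimal one-to-one assignment by exhaustive search; for the full 18-drone case, a Hungarian-method solver such as SciPy's `linear_sum_assignment` would replace the brute-force loop.

```python
import itertools
import math

def modified_cost(start, target, b=0.1):
    """Cost c_ij = Euclidean distance plus an exponential penalty on
    altitude change (b is the climbing factor from the cost function)."""
    return math.dist(start, target) + b * math.exp(abs(target[2] - start[2]))

def best_assignment(starts, targets, b=0.1):
    """Enumerate every one-to-one assignment (drone i -> target perm[i])
    and return the permutation with the minimum total modified cost.
    Only feasible for tiny n; the Hungarian algorithm handles n = 18."""
    cost = [[modified_cost(s, t, b) for t in targets] for s in starts]
    best_perm, best_total = None, float("inf")
    for perm in itertools.permutations(range(len(targets))):
        total = sum(cost[i][j] for i, j in enumerate(perm))
        if total < best_total:
            best_perm, best_total = perm, total
    return best_perm, best_total

# Toy example with three drones (hypothetical coordinates): each drone
# is matched to the target directly across from it.
starts  = [(0, 0, 10), (10, 0, 10), (20, 0, 10)]
targets = [(20, 5, 10), (0, 5, 10), (10, 5, 10)]
perm, total = best_assignment(starts, targets)
print(perm)  # (1, 2, 0)
```

Because all toy altitudes here are equal, the exponential term adds the same constant to every pairing; with unequal heights it steers the assignment toward level or descending paths, which is the behavior explored in class.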

Particle swarm optimization is a population-based algorithm inspired by social behavior, where particles (representing potential solutions) move through a search space to find optimal values. For our drone formation problem, each particle encodes a possible assignment matrix \( \mathbf{X} \), and its fitness is evaluated using an objective function that balances total travel distance and energy disparity among drones. The standard PSO updates particle velocities and positions as follows:

$$ \mathbf{v}_i^{k+1} = w \mathbf{v}_i^k + c_1 r_1 (\mathbf{l}_{\text{best},i} - \mathbf{s}_i^k) + c_2 r_2 (\mathbf{g}_{\text{best}} - \mathbf{s}_i^k) $$
$$ \mathbf{s}_i^{k+1} = \mathbf{s}_i^k + \mathbf{v}_i^{k+1} $$

where \( \mathbf{v}_i^k \) and \( \mathbf{s}_i^k \) are the velocity and position of particle \( i \) at iteration \( k \), \( w \) is an inertia weight, \( c_1 \) and \( c_2 \) are acceleration coefficients, \( r_1 \) and \( r_2 \) are random numbers drawn uniformly from \( [0, 1] \), \( \mathbf{l}_{\text{best},i} \) is the particle's personal best position, and \( \mathbf{g}_{\text{best}} \) is the global best position. In my teaching module, I modify the fitness function to prioritize energy balance. For a given assignment \( \mathbf{X} \), let \( \mathbf{d}_{ij} = \|\mathbf{p}_i^0 - \mathbf{p}_j\| \) be the travel distance from drone \( i \)'s start position to target \( j \), and define \( d_{\text{max}} = \max_i \sum_j \mathbf{d}_{ij} x_{ij} \) and \( d_{\text{min}} = \min_i \sum_j \mathbf{d}_{ij} x_{ij} \), the longest and shortest individual travel distances among the assigned drone-target pairs. The fitness function is:

$$ f(\mathbf{X}) = \alpha_1 \cdot \frac{\sum_{i=1}^n \sum_{j=1}^n \mathbf{d}_{ij} x_{ij}}{n} + \alpha_2 \cdot (d_{\text{max}} - d_{\text{min}}) $$

where \( \alpha_1 \) and \( \alpha_2 \) are weighting coefficients satisfying \( \alpha_1 + \alpha_2 = 1 \). The first term represents the average travel distance, and the second term penalizes large disparities in individual drone paths, promoting energy balance. By tuning \( \alpha_1 \) and \( \alpha_2 \), students can explore different optimization strategies: higher \( \alpha_1 \) minimizes overall distance, while higher \( \alpha_2 \) reduces energy inequality. This iterative process in PSO allows for dynamic adjustment, converging to an assignment that optimizes both criteria for the drone formation.
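The effect of the two weights can be seen with a small numerical sketch. For brevity, the function below takes the list of assigned per-drone distances (each entry is \( \sum_j \mathbf{d}_{ij} x_{ij} \) for one drone) rather than the full matrix \( \mathbf{X} \); the distance values are illustrative, not taken from the case study.

```python
def fitness(assigned_dists, alpha1=0.7, alpha2=0.3):
    """Energy-balanced fitness: alpha1 weights the average travel
    distance, alpha2 weights the max-min disparity between drones."""
    avg = sum(assigned_dists) / len(assigned_dists)
    disparity = max(assigned_dists) - min(assigned_dists)
    return alpha1 * avg + alpha2 * disparity

balanced   = [5, 5, 5, 5]    # every drone flies 5 units (total 20)
unbalanced = [0, 0, 10, 10]  # same total, but two drones do all the work
print(fitness(balanced) < fitness(unbalanced))  # True: balance wins
```

With \( \alpha_1 = 1 \) and \( \alpha_2 = 0 \), the two assignments tie at an average of 5 units, which is exactly the distance-only behavior of the traditional PSO baseline; the disparity term is what breaks the tie in favor of uniform energy consumption.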

To ensure practical applicability, the simulation teaching also incorporates collision avoidance, a critical aspect of real-world drone operations. After obtaining an optimal assignment via PSO, each drone must navigate to its target without colliding with others. I introduce behavior control techniques, specifically null-space behavior control, which manages multiple tasks with varying priorities. For each drone \( i \), two primary tasks are defined: moving to the target point (task \( \rho_m \)) and avoiding collisions (task \( \rho_a \)). The movement task is derived from a reference trajectory \( \mathbf{p}_i^d(t) \), calculated based on the maximum flight time \( T \) to ensure synchronized arrival:

$$ T = \frac{d_{\text{max}}}{v_{\text{max}}} $$
$$ \mathbf{p}_i^d(t) = \mathbf{p}_i^0 + \frac{\mathbf{p}_j - \mathbf{p}_i^0}{T} \cdot t $$

where \( v_{\text{max}} \) is the maximum allowable drone speed. The velocity output for the movement task is computed using the task Jacobian \( \mathbf{J}_m \):

$$ \mathbf{v}_{m,i} = \mathbf{J}_m^\dagger (\dot{\rho}_{m,i}^d + \Lambda_m \tilde{\rho}_{m,i}) $$

where \( \tilde{\rho}_{m,i} = \rho_{m,i}^d - \rho_{m,i} \) is the position error, \( \Lambda_m \) is a gain constant, and \( \dagger \) denotes the pseudoinverse. For collision avoidance, the task function \( \rho_{a,i} = \min_{k \neq i} \|\mathbf{p}_i - \mathbf{p}_k\| \) measures the distance from drone \( i \) to its nearest neighbor, with a safety threshold \( D_s \). The avoidance velocity is:

$$ \mathbf{v}_{a,i} = \mathbf{J}_a^\dagger (\dot{\rho}_{a,i}^d + \Lambda_a \tilde{\rho}_{a,i}) $$

where \( \tilde{\rho}_{a,i} = D_s - \rho_{a,i} \). Using null-space projection, the combined velocity command ensures that lower-priority tasks (e.g., movement) are projected onto the null space of higher-priority tasks (e.g., avoidance) when conflicts arise:

$$ \mathbf{v}_{r,i} = \mathbf{v}_{a,i} + (\mathbf{I} – \mathbf{J}_a^\dagger \mathbf{J}_a) \mathbf{v}_{m,i} $$

This framework guarantees safe navigation during drone formation changes, and I demonstrate its effectiveness through simulations in MATLAB, allowing students to visualize and analyze drone trajectories.
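The priority logic of the combined command can be sketched for a single drone and its nearest neighbor. This is a simplified Python illustration with plain 3-vectors: the avoidance Jacobian reduces to the unit vector \( \mathbf{u} \) pointing from neighbor to drone, its pseudoinverse to \( \mathbf{u}^\top \), and the desired task rate \( \dot{\rho}_{a,i}^d \) is taken as zero; the function name and parameter values are illustrative.

```python
import math

def nsb_velocity(p_i, p_neighbor, v_move, D_s=6.0, lam_a=1.0):
    """Null-space behavior control for one drone: if the nearest
    neighbor is inside the safety distance D_s, the avoidance task
    takes priority and the movement velocity is projected onto the
    null space of the avoidance Jacobian via (I - u u^T)."""
    diff = [p - q for p, q in zip(p_i, p_neighbor)]   # points away from neighbor
    rho_a = math.sqrt(sum(d * d for d in diff))       # current separation
    if rho_a >= D_s:
        return list(v_move)                           # no conflict: move freely
    u = [d / rho_a for d in diff]                     # unit avoidance direction
    v_a = [lam_a * (D_s - rho_a) * ui for ui in u]    # repulsive velocity
    dot = sum(ui * vi for ui, vi in zip(u, v_move))   # component toward neighbor
    v_null = [vi - dot * ui for ui, vi in zip(u, v_move)]  # (I - u u^T) v_move
    return [a + n for a, n in zip(v_a, v_null)]

# Two drones 4 units apart (inside D_s = 6), one commanded to fly
# straight at the other: the closing component is removed and replaced
# by a repulsive one.
print(nsb_velocity((0, 0, 10), (4, 0, 10), (1, 0, 0)))  # [-2.0, 0.0, 0.0]
```

Once the separation again exceeds \( D_s \), the projector drops out and the pure movement command is restored, which is the behavior students observe in the distance-time plots.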

For the simulation teaching instance, I designed a case study with 18 drones transitioning between two formations: an initial “F” shape and a target “Z” shape. The coordinates are set as follows, with start positions \( (x_1, y_1, z_1) \) and target positions \( (x_2, y_2, z_2) \):

| Drone | Start \( x_1 \) | Start \( y_1 \) | Start \( z_1 \) | Target \( x_2 \) | Target \( y_2 \) | Target \( z_2 \) |
|-------|------|------|------|------|------|------|
| 1  | 50 | 0  | 10 | 50 | 0   | 10   |
| 2  | 40 | 0  | 10 | 40 | 0   | 10   |
| 3  | 30 | 0  | 10 | 30 | 0   | 10   |
| 4  | 40 | 9  | 19 | 20 | 0   | 10   |
| 5  | 40 | 18 | 28 | 10 | 0   | 10   |
| 6  | 40 | 27 | 37 | 0  | 0   | 10   |
| 7  | 30 | 25 | 35 | 0  | 3.5 | 13.5 |
| 8  | 20 | 25 | 35 | 40 | 9   | 19   |
| 9  | 10 | 25 | 35 | 30 | 18  | 28   |
| 10 | 10 | 21 | 31 | 20 | 27  | 37   |
| 11 | 10 | 28 | 38 | 10 | 37  | 47   |
| 12 | 40 | 37 | 47 | 0  | 46  | 56   |
| 13 | 40 | 46 | 56 | 10 | 46  | 56   |
| 14 | 45 | 46 | 56 | 20 | 46  | 56   |
| 15 | 30 | 46 | 56 | 30 | 46  | 56   |
| 16 | 20 | 46 | 56 | 40 | 46  | 56   |
| 17 | 10 | 46 | 56 | 50 | 46  | 56   |
| 18 | 0  | 46 | 56 | 60 | 52  | 52   |

In the simulation, I compare two algorithms: Algorithm 1 (traditional PSO with distance-only optimization) and Algorithm 2 (improved PSO with energy balance). The parameters are set as \( b = 0.1 \), \( \alpha_1 = 0.7 \), \( \alpha_2 = 0.3 \), \( w = 0.8 \), \( c_1 = c_2 = 1.5 \), and population size of 50 over 100 iterations. For collision avoidance, the safety distance \( D_s = 6 \) units, and gains \( \Lambda_m = \Lambda_a = 1 \). The results are summarized in the table below, highlighting key metrics for drone formation performance.

| Metric | Algorithm 1 (Traditional PSO) | Algorithm 2 (Improved PSO) |
|--------|-------------------------------|----------------------------|
| Total travel distance | 450 units | 480 units |
| Maximum individual distance | 60 units | 45 units |
| Minimum individual distance | 0 units | 15 units |
| Energy disparity (\( d_{\text{max}} - d_{\text{min}} \)) | 60 units | 30 units |
| Average fitness value | 55 | 40 |
| Collision incidents | 3 (without avoidance) | 0 (with avoidance) |

Algorithm 1 leads to half of the drones remaining stationary, causing high energy imbalance, as shown by the large disparity in individual distances. In contrast, Algorithm 2 achieves better energy balance by slightly shifting the entire drone formation and prioritizing assignments with descending paths, reducing the maximum distance and ensuring more uniform energy consumption. The fitness convergence plot for Algorithm 2 demonstrates optimization progress, with the best fitness value plateauing around 25 iterations, indicating effective search behavior.

The collision avoidance analysis further validates the approach. Without behavior control, drone distances drop below the safety threshold, risking mid-air collisions. With null-space behavior control, all drones maintain distances above \( D_s \), ensuring safe navigation during the formation change. Students can visualize these outcomes through MATLAB plots, such as trajectory projections and distance-time graphs, reinforcing theoretical concepts with practical insights.

In teaching, this simulation module has proven invaluable for engaging graduate students in “Drone Control Technology.” By walking through problem formulation, algorithm design, and simulation implementation, students gain hands-on experience with optimization techniques and behavior control. The emphasis on energy balance in drone formations encourages them to think beyond textbook solutions, considering real-world constraints like battery life and safety. Interactive discussions on sensor types, communication protocols, and localization methods further deepen their understanding, fostering innovation in autonomous systems design.

In conclusion, drone formation transformation with energy consumption optimization presents a rich educational opportunity. My simulation teaching approach, integrating particle swarm optimization and behavior control, not only addresses technical challenges but also cultivates critical thinking and creativity. By balancing theoretical rigor with practical simulations, students learn to design efficient and sustainable drone formations, preparing them for advancements in AI-driven autonomous systems. Future work may expand to multi-objective optimization, dynamic environments, and hardware-in-the-loop experiments, further enhancing the learning experience for next-generation engineers.
