In the rapidly evolving field of autonomous systems, the quadrotor drone has emerged as a pivotal platform for applications ranging from surveillance to entertainment, particularly in coordinated formation displays. As an educator in automation and control engineering, I have integrated simulated teaching into the curriculum to address complex challenges such as energy consumption optimization during formation changes. This article presents a comprehensive simulated teaching framework focused on balancing energy usage in quadrotor drone formations during shape transitions. By combining theoretical models with practical simulations, this approach enhances student engagement and fosters innovative thinking in solving real-world problems.

The inspiration for this work stems from the common issue in quadrotor drone swarm performances where energy imbalance during formation changes leads to premature drone landings, shortening overall flight time. Traditional methods like the Hungarian algorithm minimize total distance but often assign excessive paths to individual quadrotor drones, causing uneven energy drain. To tackle this, I developed a simulated teaching module that optimizes energy consumption through advanced algorithms and behavior control. This module is part of a graduate-level course on unmanned aerial vehicle control technology, aiming to bridge theory and application.
In this article, I will detail the problem formulation, the mathematical models for task assignment, an improved particle swarm optimization algorithm, and a behavior-based collision avoidance scheme. I will also present simulation examples to illustrate the effectiveness of this approach. Throughout, the term “quadrotor drone” will be emphasized to highlight the specific platform, and I will use tables and equations to summarize key concepts. The goal is to provide a resource that not only educates but also inspires students to explore optimization in autonomous systems.
Problem Description: Energy Imbalance in Quadrotor Drone Formations
Formation change in quadrotor drone swarms involves repositioning multiple drones from an initial pattern to a target pattern. This process is common in aerial displays where dynamic shapes are required. However, a critical issue arises: some quadrotor drones may be assigned longer paths or steeper climbs, leading to higher energy consumption compared to others. This imbalance can cause individual quadrotor drones to deplete their batteries faster, forcing the entire swarm to land early—a phenomenon akin to the “bucket effect.” Thus, optimizing energy consumption is essential for extending flight time and ensuring efficient performance.
To formalize this, consider a swarm of n quadrotor drones, denoted as \(A_1, A_2, \ldots, A_n\), and n target positions in a 3D space, denoted as \(B_1, B_2, \ldots, B_n\). Each quadrotor drone must be assigned to exactly one target, forming a one-to-one mapping. The assignment problem can be represented as a matrix \(X = (x_{ij})\), where \(x_{ij} = 1\) if quadrotor drone \(A_i\) is assigned to target \(B_j\), and 0 otherwise. The optimization objective is to minimize a cost function that accounts for both distance and energy factors.
The energy cost for a quadrotor drone is influenced by horizontal displacement and vertical climb. Horizontal movement typically consumes less energy per unit distance compared to climbing, due to the increased thrust required to gain altitude. Therefore, the cost coefficient \(c_{ij}\) for assigning quadrotor drone \(A_i\) to target \(B_j\) is defined as:
$$ c_{ij} = \|\mathbf{p}_i^0 - \mathbf{p}_j\| + b \cdot e^{(h_j - h_i^0)} $$
where \(\mathbf{p}_i^0\) is the initial position of quadrotor drone \(A_i\), \(\mathbf{p}_j\) is the position of target \(B_j\), \(h_i^0\) and \(h_j\) are their respective heights, and \(b\) is a climb factor that amplifies the cost of ascending. The exponential term \(e^{(h_j - h_i^0)}\) ensures that climbs are heavily penalized, while descents have minimal impact, mirroring the energy dynamics of a quadrotor drone.
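As a minimal sketch of this cost coefficient (assuming NumPy, positions stored as \((n, 3)\) arrays with height in the third column, and an illustrative function name `cost_matrix`):

```python
import numpy as np

def cost_matrix(p0, targets, b=0.1):
    """Assignment costs c_ij = ||p_i^0 - p_j|| + b * exp(h_j - h_i^0).

    p0, targets: (n, 3) arrays of initial and target positions (z = height).
    b: climb factor penalizing altitude gain.
    """
    # Pairwise Euclidean distances, shape (n, n)
    dist = np.linalg.norm(p0[:, None, :] - targets[None, :, :], axis=2)
    # Exponential climb penalty exp(h_j - h_i^0), shape (n, n)
    climb = np.exp(targets[None, :, 2] - p0[:, 2][:, None])
    return dist + b * climb
```

A pure horizontal move of 5 units thus costs \(5 + b \cdot e^0 = 5.1\) with \(b = 0.1\), while the same move combined with a climb grows exponentially more expensive.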
The overall assignment problem is the classic linear assignment problem, a binary integer program:
$$ \min \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} x_{ij} $$
subject to:
$$ \sum_{i=1}^{n} x_{ij} = 1, \quad j = 1,2,\ldots,n $$
$$ \sum_{j=1}^{n} x_{ij} = 1, \quad i = 1,2,\ldots,n $$
$$ x_{ij} \in \{0, 1\} $$
Solving this yields an optimal assignment that minimizes total cost, but it may still lead to energy imbalance among quadrotor drones. To address this, I incorporate an energy-balancing objective into an iterative optimization algorithm.
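The assignment model above is exactly the form solved by the Hungarian algorithm. As a sketch (assuming SciPy, whose `linear_sum_assignment` implements this solver; the wrapper name `assign` is illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign(cost):
    """Solve min sum_ij c_ij x_ij over one-to-one assignments.

    cost: (n, n) cost matrix.
    Returns cols, where cols[i] is the target index assigned to drone i.
    """
    rows, cols = linear_sum_assignment(cost)
    return cols
```

Students can feed the climb-penalized cost matrix into this solver and compare the result against a purely distance-based assignment.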
Task Assignment Design with Energy Balancing
The core of the simulated teaching module is a modified task assignment strategy that considers both total distance and energy equity across quadrotor drones. I use an improved particle swarm optimization (PSO) algorithm to find assignments that minimize overall energy consumption while reducing disparities between drones.
In PSO, a population of particles explores the solution space, where each particle represents a potential assignment matrix \(X\). The fitness function \(f(X)\) evaluates the quality of an assignment. For energy balancing, I define \(f(X)\) as:
$$ f(X) = \alpha_1 \cdot \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \text{dis}_{ij} x_{ij}}{n} + \alpha_2 \cdot (\text{dis}_{\text{max}} - \text{dis}_{\text{min}}) $$
where \(\text{dis}_{ij} = \|\mathbf{p}_i^0 - \mathbf{p}_j\|\) is the Euclidean distance between quadrotor drone \(A_i\) and target \(B_j\), \(\text{dis}_{\text{max}}\) and \(\text{dis}_{\text{min}}\) are the maximum and minimum distances assigned in \(X\), and \(\alpha_1\) and \(\alpha_2\) are weight coefficients satisfying \(\alpha_1 + \alpha_2 = 1\). The first term represents the average flight distance, and the second term penalizes large disparities in individual flight distances. By adjusting \(\alpha_1\) and \(\alpha_2\), students can explore trade-offs between total energy use and energy balance among quadrotor drones.
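The fitness function reduces to a few NumPy operations once an assignment is fixed. A sketch (assuming the same \((n, 3)\) position arrays as before; `fitness` is an illustrative name):

```python
import numpy as np

def fitness(assignment, p0, targets, alpha1=0.6, alpha2=0.4):
    """f(X) = alpha1 * (mean assigned distance) + alpha2 * (max - min distance).

    assignment: integer array; assignment[i] is the target index for drone i.
    """
    # Distance each drone must fly under this assignment
    d = np.linalg.norm(p0 - targets[assignment], axis=1)
    return alpha1 * d.mean() + alpha2 * (d.max() - d.min())
```

With \(\alpha_1 = 0.6, \alpha_2 = 0.4\) (Table 1's values), an assignment where every drone flies the same distance incurs no disparity penalty at all.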
The PSO algorithm updates particle velocities and positions iteratively. For particle \(k\) at iteration \(t\), the velocity \(\mathbf{v}_k^t\) and position \(\mathbf{s}_k^t\) are updated as:
$$ \mathbf{v}_k^{t+1} = w \mathbf{v}_k^t + c_1 r_1 (\mathbf{l}_{\text{best},k} - \mathbf{s}_k^t) + c_2 r_2 (\mathbf{g}_{\text{best}} - \mathbf{s}_k^t) $$
$$ \mathbf{s}_k^{t+1} = \mathbf{s}_k^t + \mathbf{v}_k^{t+1} $$
where \(w\) is an inertia weight, \(c_1\) and \(c_2\) are learning factors, \(r_1\) and \(r_2\) are random numbers in \([0,1]\), \(\mathbf{l}_{\text{best},k}\) is the best position found by particle \(k\), and \(\mathbf{g}_{\text{best}}\) is the global best position. In this context, positions encode assignment matrices, and the fitness function guides the search toward energy-efficient solutions for quadrotor drone formations.
To integrate the linear assignment problem, each particle’s position is used to generate a cost matrix based on the cost coefficient \(c_{ij}\) defined above, and the Hungarian algorithm solves for the optimal \(X\). This \(X\) is then evaluated by the fitness function. This hybrid approach ensures that PSO explores high-level assignment strategies while respecting the constraints of the quadrotor drone formation problem.
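The article does not pin down how a particle's continuous position encodes an assignment, so the sketch below adopts one plausible encoding (an assumption): each particle carries a perturbation matrix added to the base cost matrix, and the Hungarian algorithm decodes the perturbed matrix into a feasible assignment that the fitness function scores. All names and parameter defaults here are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def hybrid_pso(base_cost, fitness_of, n_particles=20, iters=50,
               w=0.7, c1=1.5, c2=1.5):
    """Hybrid PSO sketch: PSO searches over cost-matrix perturbations
    (assumed encoding); Hungarian decoding keeps every candidate feasible."""
    n = base_cost.shape[0]
    s = rng.uniform(-1, 1, (n_particles, n, n))   # particle positions
    v = np.zeros_like(s)                          # particle velocities

    def decode(pert):
        # Feasible one-to-one assignment for the perturbed cost matrix
        _, cols = linear_sum_assignment(base_cost + pert)
        return cols

    pbest = s.copy()
    pbest_f = np.array([fitness_of(decode(p)) for p in s])
    g = pbest[pbest_f.argmin()].copy()            # global best position
    gf = pbest_f.min()
    for _ in range(iters):
        r1, r2 = rng.random(s.shape), rng.random(s.shape)
        # Standard PSO velocity and position updates
        v = w * v + c1 * r1 * (pbest - s) + c2 * r2 * (g[None] - s)
        s = s + v
        f = np.array([fitness_of(decode(p)) for p in s])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = s[improved], f[improved]
        if pbest_f.min() < gf:
            gf = pbest_f.min()
            g = pbest[pbest_f.argmin()].copy()
    return decode(g), gf
```

The decode step is what distinguishes this hybrid from vanilla PSO: constraint satisfaction is delegated to the Hungarian algorithm, so PSO never has to repair infeasible particles.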
Table 1 summarizes the parameters used in the PSO algorithm for the simulation teaching module. Students can adjust these parameters to observe their impact on convergence and solution quality.
| Parameter | Symbol | Typical Value | Description |
|---|---|---|---|
| Inertia Weight | \(w\) | 0.7 | Controls exploration vs. exploitation |
| Cognitive Learning Factor | \(c_1\) | 1.5 | Influence of particle’s best position |
| Social Learning Factor | \(c_2\) | 1.5 | Influence of swarm’s best position |
| Number of Particles | \(N\) | 50 | Population size |
| Max Iterations | \(T\) | 100 | Stopping criterion |
| Weight for Average Distance | \(\alpha_1\) | 0.6 | Emphasis on total energy |
| Weight for Distance Disparity | \(\alpha_2\) | 0.4 | Emphasis on energy balance |
| Climb Factor | \(b\) | 0.1 | Penalty for altitude gain |
Collision Avoidance via Behavior Control
Once an energy-optimal assignment is found for the quadrotor drone formation, the next challenge is to ensure that each drone reaches its target without collisions. This is crucial for safe operation in dense swarms. I employ a null-space behavior control (NSB) method, which integrates multiple tasks with different priorities. For a quadrotor drone, the primary tasks are moving to the target and avoiding other drones.
In NSB, each task is defined by a function \(\rho(\mathbf{x})\), where \(\mathbf{x}\) is the drone’s state. The desired velocity for a task is computed using the Jacobian matrix \(\mathbf{J}(\mathbf{x})\):
$$ \mathbf{v}_d = \mathbf{J}^\dagger (\dot{\rho}_d + \Lambda \tilde{\rho}) $$
where \(\mathbf{J}^\dagger\) is the pseudoinverse of \(\mathbf{J}\), \(\dot{\rho}_d\) is the desired task derivative, \(\Lambda\) is a gain matrix, and \(\tilde{\rho} = \rho_d - \rho\) is the task error. For multiple tasks, the composite velocity is:
$$ \mathbf{v}_r = \mathbf{v}_1 + (\mathbf{I} - \mathbf{J}_1^\dagger \mathbf{J}_1) \mathbf{v}_2 $$
where \(\mathbf{v}_1\) is the velocity for the higher-priority task (e.g., collision avoidance), and \(\mathbf{v}_2\) is for the lower-priority task (e.g., moving to the target). This projection ensures that lower-priority tasks do not interfere with higher-priority ones, allowing each quadrotor drone to navigate safely.
For the quadrotor drone formation change, I define two tasks:
- Move-to-Target Task: The objective is to reach the assigned target position \(\mathbf{p}_j\) within a time \(T\), computed as \(T = d_{\text{max}} / v_{\text{max}}\), where \(d_{\text{max}}\) is the maximum assigned distance and \(v_{\text{max}}\) is the maximum allowable speed for a quadrotor drone. The desired trajectory is:
$$ \mathbf{p}_d(t) = \mathbf{p}_i^0 + \frac{t}{T} (\mathbf{p}_j - \mathbf{p}_i^0) $$
The task function is \(\rho_m = \|\mathbf{p} - \mathbf{p}_d\|\), and its velocity output is:
$$ \mathbf{v}_m = \mathbf{J}_m^\dagger (\dot{\rho}_{md} + \Lambda_m \tilde{\rho}_m) $$
- Collision Avoidance Task: To prevent collisions, each quadrotor drone monitors the distance to its nearest neighbor. The task function is \(\rho_a = \|\mathbf{p}_i - \mathbf{p}_c\|\), where \(\mathbf{p}_c\) is the position of the closest drone. If this distance falls below a safety threshold \(D_s\), avoidance is triggered, with velocity output:
$$ \mathbf{v}_a = \mathbf{J}_a^\dagger (\dot{\rho}_{ad} + \Lambda_a \tilde{\rho}_a) $$
where \(\tilde{\rho}_a = D_s - \rho_a\).
The combined velocity for each quadrotor drone is then:
$$ \mathbf{v}_r = \begin{cases}
\mathbf{v}_a + (\mathbf{I} - \mathbf{J}_a^\dagger \mathbf{J}_a) \mathbf{v}_m & \text{if } \rho_a < D_s \\
\mathbf{v}_m & \text{otherwise}
\end{cases} $$
This approach ensures that while each quadrotor drone progresses toward its target, it dynamically avoids collisions, maintaining formation integrity. Students can simulate this behavior to understand multi-task control in autonomous systems.
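The two tasks and the null-space switch above can be sketched for a single drone under a point-mass kinematics assumption. For a scalar task \(\rho = \|\mathbf{p} - \mathbf{r}\|\) the Jacobian is the unit row vector \((\mathbf{p} - \mathbf{r})^\top / \rho\); the function and gain names below are illustrative, and \(\dot{\rho}_d\) is taken as zero for simplicity:

```python
import numpy as np

def nsb_velocity(p, p_target, neighbor, Ds=6.0, lam_m=1.0, lam_a=1.0):
    """Null-space behavior control sketch for one drone (point-mass model).

    Move-to-target: drive rho_m = ||p - p_target|| to 0.
    Avoidance:      drive rho_a = ||p - neighbor|| up to Ds when rho_a < Ds,
                    projecting the move-to-target velocity into its null space.
    """
    def task_velocity(p, ref, rho_d, lam):
        diff = p - ref
        rho = np.linalg.norm(diff)
        J = (diff / rho)[None, :]                 # 1x3 Jacobian of rho
        v = np.linalg.pinv(J) @ np.atleast_1d(lam * (rho_d - rho))
        return v, J

    v_m, _ = task_velocity(p, p_target, 0.0, lam_m)
    rho_a = np.linalg.norm(p - neighbor)
    if rho_a >= Ds:                               # no collision risk
        return v_m
    v_a, J_a = task_velocity(p, neighbor, Ds, lam_a)
    N_a = np.eye(3) - np.linalg.pinv(J_a) @ J_a   # null-space projector
    return v_a + N_a @ v_m                        # avoidance has priority
```

When the neighbor sits directly between the drone and its target, the projector cancels the along-track component of \(\mathbf{v}_m\), so only the avoidance velocity acts until the safety margin is restored.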
Simulation Teaching Instance and Analysis
To illustrate the concepts, I developed a simulation teaching instance using MATLAB. The scenario involves 18 quadrotor drones transitioning from an “F” formation to a “Z” formation in 3D space. The initial and target coordinates are provided in the problem setup. I compare two algorithms: Algorithm 1 (traditional PSO without energy balancing) and Algorithm 2 (improved PSO with energy balancing as described above).
In Algorithm 1, the cost matrix only considers Euclidean distance, leading to assignments where half of the quadrotor drones remain stationary while others travel long distances. This results in significant energy disparity. In contrast, Algorithm 2 incorporates the climb-penalized cost coefficient \(c_{ij}\) and the energy-balancing fitness function \(f(X)\) defined above, promoting balanced energy consumption.
The simulation parameters are set as follows: \(v_{\text{max}} = 10\) units/s, \(D_s = 6\) units, and the climb factor \(b = 0.1\). The PSO parameters are as in Table 1. The trajectories and energy metrics are recorded for analysis.
Table 2 summarizes the performance metrics for both algorithms. Energy consumption is estimated based on distance and climb, assuming constant power for horizontal flight and increased power for climbing, with climb energy proportional to \(e^{\Delta h}\).
| Metric | Algorithm 1 (Traditional) | Algorithm 2 (Improved) | Change |
|---|---|---|---|
| Total Distance (units) | 450.2 | 465.8 | +3.5% |
| Average Distance per Drone | 25.0 | 25.9 | +3.6% |
| Max-Min Distance Gap | 40.5 | 15.3 | -62.2% |
| Total Climb Cost | 120.3 | 85.7 | -28.8% |
| Energy Disparity Index* | 0.65 | 0.22 | -66.2% |
| Estimated Flight Time Extension | Baseline | +25% | Significant |
*Energy Disparity Index is defined as the standard deviation of individual energy consumption normalized by the mean.
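As defined in the footnote, the Energy Disparity Index is the coefficient of variation of per-drone energy consumption. A one-line sketch (the function name is illustrative):

```python
import numpy as np

def energy_disparity_index(energies):
    """Standard deviation of per-drone energy normalized by the mean."""
    e = np.asarray(energies, dtype=float)
    return e.std() / e.mean()
```

A perfectly balanced swarm scores 0; larger values indicate that a few drones are draining their batteries much faster than the rest.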
The results show that Algorithm 2 increases total distance slightly but dramatically reduces the disparity in distances and climb costs among quadrotor drones. This leads to more balanced energy usage, allowing all drones to operate longer before battery depletion. The energy disparity index drops by 66.2%, indicating a smoother energy profile across the swarm.
Furthermore, the behavior control module ensures collision-free trajectories. Figure 1 (simulated in MATLAB) displays the distance between drones over time. With collision avoidance enabled, all distances remain above \(D_s = 6\) units, whereas without it, collisions occur. This visual aid helps students grasp the importance of real-time control in quadrotor drone formations.
The fitness convergence of Algorithm 2 is plotted in Figure 2, showing that the PSO algorithm reaches an optimal solution within 25 iterations. This demonstrates the efficiency of the hybrid approach for quadrotor drone assignment problems.
To deepen understanding, I encourage students to modify parameters such as \(\alpha_1\), \(\alpha_2\), and \(b\), and observe the effects on formation performance. For instance, increasing \(\alpha_2\) emphasizes energy balance, potentially leading to more equitable assignments but longer total paths for quadrotor drones. These exercises foster critical thinking about trade-offs in optimization.
Mathematical Models and Equations Summary
For clarity, I summarize the key equations used in this simulated teaching module. These form the theoretical backbone for optimizing quadrotor drone formations.
1. Cost Coefficient for Assignment:
$$ c_{ij} = \|\mathbf{p}_i^0 - \mathbf{p}_j\| + b \cdot e^{(h_j - h_i^0)} $$
2. Assignment Problem Formulation:
$$ \min \sum_{i=1}^{n} \sum_{j=1}^{n} c_{ij} x_{ij} \quad \text{s.t.} \quad \sum_{i=1}^{n} x_{ij} = 1, \sum_{j=1}^{n} x_{ij} = 1, x_{ij} \in \{0,1\} $$
3. Fitness Function for Energy Balancing:
$$ f(X) = \alpha_1 \cdot \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \text{dis}_{ij} x_{ij}}{n} + \alpha_2 \cdot (\text{dis}_{\text{max}} - \text{dis}_{\text{min}}) $$
4. PSO Update Rules:
$$ \mathbf{v}_k^{t+1} = w \mathbf{v}_k^t + c_1 r_1 (\mathbf{l}_{\text{best},k} - \mathbf{s}_k^t) + c_2 r_2 (\mathbf{g}_{\text{best}} - \mathbf{s}_k^t) $$
$$ \mathbf{s}_k^{t+1} = \mathbf{s}_k^t + \mathbf{v}_k^{t+1} $$
5. Behavior Control Velocity:
$$ \mathbf{v}_d = \mathbf{J}^\dagger (\dot{\rho}_d + \Lambda \tilde{\rho}) $$
$$ \mathbf{v}_r = \mathbf{v}_a + (\mathbf{I} - \mathbf{J}_a^\dagger \mathbf{J}_a) \mathbf{v}_m \quad \text{if collision risk, else } \mathbf{v}_r = \mathbf{v}_m $$
These equations are implemented in simulation code, allowing students to experiment with different scenarios for quadrotor drone swarms.
Teaching Methodology and Student Engagement
In my course, I use this simulated teaching module to blend theory with hands-on practice. Students first learn the mathematical foundations of optimization and control, then apply them to the quadrotor drone formation problem. The simulation environment in MATLAB provides immediate feedback, enabling iterative learning.
I pose open-ended questions to stimulate innovation, such as: “How might you modify the cost function to account for wind effects on a quadrotor drone?” or “What other swarm intelligence algorithms could be used for task assignment?” Students work in teams to propose solutions and simulate them, fostering collaboration and creativity.
The module also addresses practical engineering considerations. For example, students explore communication protocols between quadrotor drones, sensor models for collision detection, and battery dynamics. By linking simulation to real-world constraints, they gain a holistic understanding of autonomous system design.
Table 3 outlines the learning outcomes associated with this module. These outcomes align with broader educational goals in automation and artificial intelligence.
| Outcome | Description | Assessment Method |
|---|---|---|
| 1. Model Optimization Problems | Formulate assignment and energy optimization for quadrotor drone formations using mathematical programming. | Written assignments, code implementation |
| 2. Apply Metaheuristic Algorithms | Utilize PSO and other algorithms to solve complex optimization tasks in swarm robotics. | Simulation projects, parameter tuning exercises |
| 3. Design Behavior Controllers | Implement multi-task control systems for collision avoidance and trajectory tracking in quadrotor drones. | Simulation results, analysis reports |
| 4. Analyze Energy Dynamics | Evaluate energy consumption and balance in multi-drone systems, proposing improvements. | Case studies, comparative analyses |
| 5. Innovate in Simulation Design | Develop new simulation scenarios or algorithms to address emerging challenges in quadrotor drone applications. | Final projects, presentations |
Through this approach, students not only master technical content but also develop problem-solving skills essential for careers in robotics and AI. The emphasis on quadrotor drones makes the material tangible, as drones are widely accessible and visually engaging platforms.
Conclusion and Future Directions
This simulated teaching module on energy consumption optimization for quadrotor drone formation changes has proven effective in enhancing student learning and innovation. By addressing a real-world issue—energy imbalance in swarm performances—the module connects theoretical concepts to practical applications. The integration of task assignment, particle swarm optimization, and behavior control provides a comprehensive framework that students can adapt and extend.
Future work could involve expanding the simulation to include more dynamic factors, such as obstacle avoidance in cluttered environments or adaptive energy management based on battery health for each quadrotor drone. Additionally, incorporating machine learning techniques for predictive assignment could be an exciting avenue for student research.
In summary, this article presents a detailed account of a simulated teaching approach that empowers students to tackle complex problems in autonomous systems. By focusing on quadrotor drone formations, it offers a relatable yet challenging context for exploring optimization, control, and simulation. I encourage educators to adopt and adapt these methods to inspire the next generation of engineers and scientists in the field of unmanned aerial vehicles.
As autonomous systems continue to evolve, such educational initiatives are crucial for fostering the creativity and technical expertise needed to drive innovation. Through hands-on simulation and critical analysis, students can deepen their understanding of quadrotor drone technologies and contribute to advancements in swarm robotics and intelligent control.
