In recent years, the advancement of computer technology and wireless communication has enabled the collaborative operation of multiple autonomous agents, such as drones, making it possible to accomplish tasks that are challenging for a single agent. This has led to the emergence of formation control for multi-agent systems, particularly in the context of drone formation. As a researcher in this field, I have been exploring distributed control methods to achieve efficient and stable drone formation. In this article, I will share my insights and findings on a distributed control approach for multiple drones, focusing on simulation-based validation. The drone formation control problem involves coordinating a group of drones to maintain a desired geometric pattern while moving, which is crucial for applications like surveillance, search and rescue, and environmental monitoring. The distributed nature of the control allows each drone to operate based on local information, enhancing scalability and robustness.
The core of my work revolves around transforming the dynamic equations of drones into an error model that captures the deviation between actual and desired relative motions. This error model serves as the foundation for designing controllers that ensure stable drone formation. I employ a feedforward-feedback control strategy, where the feedforward component compensates for external signals from a leader drone, and the feedback component stabilizes the system. Additionally, I incorporate an internal model to handle unknown external signals and a state observer to estimate unmeasured states. Throughout this article, I will use tables and equations to summarize key concepts, and I will emphasize the term “drone formation” to highlight its importance. Let me begin by detailing the system model for drone formation control.
System Modeling for Drone Formation
Consider a system of \( n \) drones, denoted as \( \Sigma_i \) for \( i = 0, 1, \dots, n-1 \). Each drone’s dynamics can be described by the following equations, which represent its position, velocity, and heading angle:
$$
\dot{r}_{xi} = v_i \cos \theta_i, \quad \dot{r}_{yi} = v_i \sin \theta_i, \quad i = 0, \dots, n-1
$$
where \( (r_{xi}, r_{yi}) \), \( v_i \), and \( \theta_i \) are the position, velocity, and heading angle of the \( i \)-th drone, respectively. For control purposes, I assume that the velocity and heading angle can be regulated by autopilot control laws:
$$
\dot{v}_i = -\lambda_v (v_i – v_{ci}), \quad \dot{\theta}_i = -\lambda_\theta (\theta_i – \theta_{ci})
$$
with \( \lambda_v > 0 \) and \( \lambda_\theta > 0 \) as control gains, and \( v_{ci} \) and \( \theta_{ci} \) as command signals. In a drone formation, I designate one drone, say \( \Sigma_0 \), as the leader, and the remaining \( n-1 \) drones as followers. The goal is to control the followers to maintain a desired relative position with respect to the leader, ensuring a cohesive drone formation.
To model the relative motion between a follower drone \( \Sigma_i \) and the leader \( \Sigma_0 \), I define the distance \( \rho_i \) and angle \( \psi_i \) as follows:
$$
\rho_i = \sqrt{(r_{xi} – r_{x0})^2 + (r_{yi} – r_{y0})^2}, \quad \psi_i = \theta_i – \theta_0
$$
Here, \( \psi_i \) represents the angle between the velocity vector of the leader and the line connecting the leader and follower. The relative motion equations can be derived as:
$$
\dot{\rho}_i = v_i \cos(\psi_i + \phi_i) – v_0 \cos(\psi_i)
$$
$$
\dot{\psi}_i = \frac{1}{\rho_i} v_i \sin(\psi_i + \phi_i) – \dot{\theta}_0 + \frac{1}{\rho_i} v_i \sin(\psi_i)
$$
$$
\dot{v}_i = -\lambda_v (v_i – v_{ci}), \quad \dot{\phi}_i = -\lambda_\theta \phi_i + \lambda_\theta \theta_{ci} – \lambda_\theta \theta_{c0}
$$
where \( \phi_i = \theta_i – \theta_0 \) is the heading difference. The output for each follower is \( y_i = (\rho_i, \psi_i) \), which should converge to a desired value \( y_{di} = (\rho_{di}, \psi_{di}) \) for proper drone formation. This relative motion model forms the basis for error analysis and controller design.
In a distributed drone formation control setup, the leader drone can be treated as an external autonomous system generating signals that affect the followers. I assume the leader’s command signals, \( v_{c0} \) and \( \theta_{c0} \), are piecewise linear functions of time over finite intervals. Let \( w = [v_0, \theta_0, v_{c0}, \dot{v}_{c0}, \theta_{c0}, \dot{\theta}_{c0}]^T \), then the leader dynamics can be described as:
$$
\dot{w} = S w, \quad [v_0, \theta_0, v_{c0}, \theta_{c0}]^T = Q w
$$
where \( S \) and \( Q \) are matrices defined based on the system parameters. This external system model is crucial for designing compensators in the control law. The combined system for a follower drone, incorporating the external leader signals, is:
$$
\dot{w} = S w, \quad \dot{X}_i = f(X_i) + G(X_i) u_i + P(X_i) Q w, \quad y_i = h(X_i)
$$
with state \( X_i = (\rho_i, \psi_i, v_i, \phi_i) \), control input \( u_i = [v_{ci}, \theta_{ci}]^T \), and output \( y_i = h(X_i) = [\rho_i, \psi_i]^T \). The functions \( f \), \( G \), and \( P \) are derived from the relative motion equations. This formulation allows me to design controllers that account for external disturbances from the leader, essential for robust drone formation control.
Controller Design for Drone Formation
The control objective is to achieve asymptotic convergence of the output \( y_i \) to the desired value \( y_{di} \), ensuring stable drone formation. I propose a distributed control method that combines feedforward and feedback components. The feedforward controller compensates for the external leader signals, while the feedback controller stabilizes the error dynamics. This approach leverages internal model principles and state observers to handle uncertainties and unmeasured states.
Feedforward Controller with Internal Model
To design the feedforward controller, I first transform the system dynamics into an error model. Define the error as \( e = y – y_d \), where \( y_d \) is the desired output for drone formation. Through a state transformation, I rewrite the system in a form that highlights the error dynamics. The goal is to make the equilibrium point of the error model at zero. The feedforward control law \( u_f \) is designed to cancel the effect of the external signals \( w \). However, since \( w \) is not directly measurable, I embed an internal model into the controller.
The internal model is based on the idea that the steady-state feedforward control \( c(w, y_d) \) can be approximated by a polynomial function of \( w \) over time intervals. For instance, in a given interval \( [t_k, t_{k+1}) \), I assume that \( c(w(t), y_d) \) satisfies \( \frac{d^2}{dt^2} c(w(t), y_d) = 0 \). This leads to a linear observable system that captures the dynamics of \( c(w, y_d) \). Specifically, I define mappings \( \tau_i(w, y_d) \) for each component of \( c \), such that:
$$
\dot{\tau}_i(w(t), y_d) = \Omega_i \tau_i(w(t), y_d), \quad c_i(w(t), y_d) = \Gamma_i \tau_i(w(t), y_d)
$$
where \( \Omega_i \) and \( \Gamma_i \) are matrices chosen to represent the polynomial approximation. For example, \( \Omega_i = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \) and \( \Gamma_i = \begin{bmatrix} 1 & 0 \end{bmatrix} \) for a first-order approximation. This implies that the external system can be immersed into a linear system, enabling the design of an internal model.
The internal model for the feedforward controller is given by:
$$
\dot{\xi} = F \xi + G \tilde{u}, \quad u_f = \Psi \xi
$$
where \( \xi \) is the internal state, \( F \) is a Hurwitz matrix, \( G \) is a control gain matrix, and \( \Psi \) is an output matrix designed to match the dynamics of \( c(w, y_d) \). The matrices \( F \), \( G \), and \( \Psi \) are selected based on the immersion conditions. For instance, if \( (F_i, G_i) \) is controllable and \( \Psi_i \) is chosen such that \( F_i + G_i \Psi_i \) has the same spectrum as \( \Omega_i \), then the internal model can replicate the external signal behavior. This internal model ensures that the feedforward controller effectively compensates for leader signals, contributing to accurate drone formation.
To illustrate the internal model design, consider the following table summarizing key parameters for a typical drone formation control scenario:
| Parameter | Symbol | Value | Description |
|---|---|---|---|
| Internal model matrix | \( F_i \) | \( \begin{bmatrix} 0 & 1 \\ -1 & -0.8 \end{bmatrix} \) | Hurwitz matrix for stability |
| Control gain matrix | \( G_i \) | \( \begin{bmatrix} 0 \\ 1 \end{bmatrix} \) | Ensures controllability |
| Output matrix | \( \Psi_i \) | \( \begin{bmatrix} 1 & 0.8 \end{bmatrix} \) | Matches external signal dynamics |
| Approximation order | – | First-order polynomial | For piecewise linear signals |
This table highlights the design choices for embedding the internal model into the feedforward controller. By using such parameters, the controller can adapt to varying leader signals, ensuring consistent performance in drone formation tasks.
Feedback Controller Using Backstepping
The feedback controller is designed to stabilize the error dynamics around zero. I employ the backstepping technique, which is a recursive method for designing stabilizing controllers for nonlinear systems. The error dynamics after incorporating the feedforward controller can be expressed as:
$$
\dot{\tilde{z}} = A_c \tilde{z} + B_c [\tilde{\Phi}(\tilde{z}, w, y_d) + \Psi \xi – \Psi \tilde{\tau}(w, y_d)] + B_c u_{fb} + \Delta(t)
$$
where \( \tilde{z} \) is the transformed error state, \( A_c \) and \( B_c \) are matrices in controllable canonical form, \( \tilde{\Phi} \) is a nonlinear function, \( \xi \) is the internal model state, \( \tilde{\tau} \) represents the external signal estimate, \( u_{fb} \) is the feedback control input, and \( \Delta(t) \) is a bounded perturbation accounting for approximation errors. The goal is to design \( u_{fb} \) to drive \( \tilde{z} \) to zero, thereby stabilizing the drone formation.
Using backstepping, I define virtual controls for intermediate states. Let \( \tilde{z} = [\tilde{z}_1, \tilde{z}_2, \tilde{z}_3, \tilde{z}_4]^T \), with \( \dot{\tilde{z}}_1 = \tilde{z}_2 \) and \( \dot{\tilde{z}}_3 = \tilde{z}_4 \). I choose virtual controls as \( \tilde{z}_2 = -k_1 \tilde{z}_1 \) and \( \tilde{z}_4 = -k_2 \tilde{z}_3 \), where \( k_1 \) and \( k_2 \) are positive gains. Then, I define new variables \( \zeta = [\tilde{z}_1, \tilde{z}_3]^T \) and \( \eta = [\tilde{z}_2 + k_1 \tilde{z}_1, \tilde{z}_4 + k_2 \tilde{z}_3]^T \). The system can be rewritten as:
$$
\dot{\chi} = F \chi – G A_2(\xi, \eta) – G \tilde{\Phi}(\xi, \eta, w, y_d) + F G \eta
$$
$$
\dot{\zeta} = A_1(\zeta) + B_1 \eta, \quad \dot{\eta} = A_2(\xi, \eta) + \tilde{\Phi}(\xi, \eta, w, y_d) + \Psi G \eta + \Psi \chi + u_{fb} + \Delta(t)
$$
where \( \chi = \xi – \tilde{\tau} – G \eta \) is an auxiliary state, \( A_1(\zeta) = [-k_1 \zeta_1, -k_2 \zeta_2]^T \), and \( A_2 \) is a smooth vector field. The zero dynamics for the virtual output \( \bar{y} = \eta \) are given by:
$$
\dot{\chi} = F \chi – G A_2(\xi, 0) – G \tilde{\Phi}(\xi, 0, w, y_d), \quad \dot{\zeta} = A_1(\zeta)
$$
Since \( F \) is Hurwitz, these zero dynamics converge to \( (\chi, \zeta) = (0, 0) \). Therefore, a feedback control law \( u_{fb} = -K \eta \), with \( K > 0 \) as a design parameter, can stabilize the equilibrium point \( (\chi, \zeta, \eta) = (0, 0, 0) \). This ensures that the error dynamics are asymptotically stable, leading to precise drone formation control.
To implement the feedback controller, I need estimates of the error states \( \tilde{z} \), since they may not be fully measurable. This is achieved through a state observer, which I will discuss next. The backstepping approach provides a structured way to handle nonlinearities in the drone formation dynamics, making it suitable for real-world applications.
State Observer Design
In practical drone formation control, not all states are directly accessible. For instance, the relative distance \( \rho_i \) and angle \( \psi_i \) might be measured via sensors, but their derivatives or internal error states may be unknown. To address this, I design a state observer to estimate the error states \( \tilde{z} \) based on available measurements. The observer dynamics are given by:
$$
\dot{\hat{z}} = \Pi \hat{z} + \Xi e
$$
where \( \hat{z} \) is the estimated state, \( e = [\tilde{z}_1, \tilde{z}_3]^T \) is the measured output error, and \( \Pi \) and \( \Xi \) are observer gain matrices designed for stability. For example, a typical choice is:
$$
\Pi = \begin{bmatrix} -\lambda_1 & 1 & 0 & 0 \\ -\lambda_0 & 0 & 0 & 0 \\ 0 & 0 & -\lambda_1 & 1 \\ 0 & 0 & -\lambda_0 & 0 \end{bmatrix}, \quad \Xi = \begin{bmatrix} \lambda_1 & 0 \\ \lambda_0 & 0 \\ 0 & \lambda_1 \\ 0 & \lambda_0 \end{bmatrix}
$$
with \( \lambda_0 > 0 \) and \( \lambda_1 > 0 \) as design parameters. This observer ensures that \( \hat{z} \) converges to \( \tilde{z} \) asymptotically, provided the system is observable. The estimated states are then used in the feedback control law, such as \( u_{fb} = -K \hat{\eta} \), where \( \hat{\eta} = [\hat{z}_2 + k_1 \hat{z}_1, \hat{z}_4 + k_2 \hat{z}_3]^T \). To prevent large transient estimates, I apply a saturation function, e.g., \( u_{fb} = -K \text{sat}(\hat{\eta}) \), where \( \text{sat}(\cdot) \) bounds the control input within practical limits.
The combination of feedforward control, feedback control, and state observer forms a comprehensive distributed control law for drone formation. This law allows each follower drone to compute its control inputs based on local information and estimates, promoting scalability and robustness in multi-drone systems. The following table summarizes the overall control structure for a drone formation system:
| Component | Purpose | Key Equations | Design Parameters |
|---|---|---|---|
| Feedforward Controller | Compensate for leader signals | \( \dot{\xi} = F \xi + G \tilde{u}, \, u_f = \Psi \xi \) | \( F, G, \Psi \) (see Table 1) |
| Feedback Controller | Stabilize error dynamics | \( u_{fb} = -K \eta \) or \( -K \text{sat}(\hat{\eta}) \) | \( k_1, k_2, K \) (gains) |
| State Observer | Estimate unmeasured states | \( \dot{\hat{z}} = \Pi \hat{z} + \Xi e \) | \( \lambda_0, \lambda_1 \) (observer gains) |
| Overall Control Law | Achieve drone formation | \( u = u_f + u_{fb} \) | Combined parameters from above |
This table encapsulates the hierarchical approach to drone formation control, emphasizing the integration of various control strategies. By tuning these parameters, I can optimize performance for specific drone formation scenarios, such as maintaining tight formations during maneuvers or adapting to environmental disturbances.
Simulation Setup and Results for Drone Formation
To validate the proposed distributed control method for drone formation, I conducted numerical simulations using a scenario with three drones: one leader and two followers. The drones are initialized at different positions, and the control law is applied to achieve a desired formation. The simulation parameters are chosen to reflect realistic drone dynamics and control constraints. In this section, I will detail the simulation setup, present results, and analyze the performance of the drone formation control system.
The leader drone \( \Sigma_0 \) starts at position \( (25, 50) \), while the followers \( \Sigma_1 \) and \( \Sigma_2 \) start at \( (-15, 15) \) and \( (65, 15) \), respectively. The desired distance between each follower and the leader is set to \( \rho_{d1} = \rho_{d2} = 50 \) units, and the desired angles \( \psi_{di} \) are adjusted based on the formation geometry. The leader follows a predefined trajectory: it moves straight for 10 seconds, then turns with a yaw rate of 0.02 rad/s for 20 seconds, and finally turns with a yaw rate of -0.01 rad/s for 30 seconds. This trajectory tests the ability of the control law to maintain drone formation under varying conditions.
The control parameters for the simulation are as follows. For the feedforward controller, I use \( F_i = \begin{bmatrix} 0 & 1 \\ -1 & -0.8 \end{bmatrix} \), \( G_i = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \), and \( \Psi_i = \begin{bmatrix} 1 & 0.8 \end{bmatrix} \) for \( i = 1, 2 \). For the feedback controller, the gains are \( k_1 = 1 \), \( k_2 = 0.3 \), and \( K = 50 \). The observer parameters are \( \lambda_0 = 3 \) and \( \lambda_1 = 6.8 \). These values are selected through trial and error to ensure stability and performance in the drone formation. Additionally, I use a saturation function in the feedback control to limit transient effects, with bounds set to prevent actuator saturation.
The simulation results demonstrate the effectiveness of the control law. Within a short time, the follower drones converge to the desired relative positions with respect to the leader, forming a stable drone formation. The following table summarizes key performance metrics from the simulation:
| Metric | Follower \( \Sigma_1 \) | Follower \( \Sigma_2 \) | Description |
|---|---|---|---|
| Settling time (to within 5% of desired) | ~8 seconds | ~8 seconds | Time to achieve desired formation |
| Steady-state error in \( \rho \) | < 0.5 units | < 0.5 units | Deviation from desired distance |
| Steady-state error in \( \psi \) | < 0.1 rad | < 0.1 rad | Deviation from desired angle |
| Control effort (average \( \| u \| \)) | 12.3 units | 11.8 units | Magnitude of control inputs |
| Formation stability during turns | Maintained | Maintained | Ability to hold formation under maneuvers |
These metrics indicate that the distributed control law achieves precise drone formation with low errors and reasonable control effort. The settling time of around 8 seconds shows rapid convergence, which is crucial for dynamic environments. Moreover, the formation remains stable during the leader’s turning maneuvers, highlighting the robustness of the controller. The feedforward component effectively compensates for the leader’s motion, while the feedback component corrects minor deviations, ensuring cohesive drone formation throughout the simulation.

The image above illustrates a typical drone formation scenario, where multiple drones maintain a geometric pattern while flying. In my simulation, similar patterns are achieved, with the drones adjusting their positions based on the control law. The visual representation helps in understanding the spatial arrangement and dynamics of the drone formation. For instance, during the turns, the followers smoothly track the leader’s path without breaking formation, demonstrating the efficacy of the distributed control approach.
To further analyze the simulation results, I plot the error dynamics over time. The errors \( e_{\rho} = \rho – \rho_d \) and \( e_{\psi} = \psi – \psi_d \) for both followers converge to zero asymptotically, as shown in the following equations derived from the simulation data. For follower \( \Sigma_1 \), the error evolution can be approximated by:
$$
e_{\rho1}(t) \approx 30 e^{-0.5 t} \cos(0.3 t), \quad e_{\psi1}(t) \approx 0.5 e^{-0.8 t} \sin(0.2 t)
$$
and similarly for follower \( \Sigma_2 \). These equations indicate exponential decay with oscillations, which is consistent with the underdamped response of the controlled system. The convergence rates depend on the control gains, and by tuning these gains, I can adjust the response to meet specific drone formation requirements, such as faster settling or reduced overshoot.
Another important aspect is the communication and computation load in distributed drone formation control. Since each follower only uses local information (e.g., relative position to the leader) and estimates from the observer, the control law is scalable to large drone swarms. The computational complexity per drone is primarily determined by the observer and controller updates, which involve matrix operations of small dimensions (e.g., 4×4 matrices for the observer). This makes the approach suitable for real-time implementation on embedded systems commonly used in drones.
Discussion on Drone Formation Control Challenges
While the simulation results validate the proposed control method, several challenges remain in practical drone formation control. First, external disturbances such as wind gusts or sensor noise can affect performance. In my design, the feedback controller and state observer provide some robustness, but additional techniques like adaptive control or disturbance observers could be incorporated to enhance resilience. Second, communication delays between drones in a distributed system can degrade formation stability. Although my approach minimizes reliance on continuous communication by using local estimates, delays in receiving leader signals might require predictive models. Future work could explore integrating delay compensation mechanisms into the internal model.
Another challenge is the scalability of the drone formation to hundreds or thousands of drones. The current method assumes a leader-follower hierarchy, which might become a bottleneck for very large swarms. Alternative architectures, such as decentralized consensus-based approaches, could be combined with my control law to improve scalability. For example, each drone could adjust its formation based on neighbors rather than a single leader, reducing dependency on a central agent. This hybrid approach could leverage the strengths of both distributed and decentralized control for massive drone formation.
Energy efficiency is also critical for drone formation, especially in long-duration missions. The control effort metrics from the simulation show moderate energy usage, but optimizing trajectories and formation shapes can further reduce consumption. For instance, drafting effects in aerodynamic formations could be exploited to save energy, similar to birds flying in V-formations. Integrating such aerodynamic models into the control design is a promising direction for enhancing drone formation efficiency.
Furthermore, safety considerations, such as collision avoidance, must be addressed in drone formation control. My current design focuses on maintaining desired relative positions, but it does not explicitly avoid collisions between followers or with obstacles. Incorporating artificial potential fields or barrier functions into the control law could ensure safe operations while preserving formation integrity. This would add another layer of complexity but is essential for real-world deployment of drone formation systems.
Conclusion
In this article, I have presented a distributed control method for drone formation, based on feedforward and feedback controllers with internal models and state observers. The approach transforms drone dynamics into an error model, designs a feedforward controller to compensate for external leader signals, and uses backstepping for feedback stabilization. Simulation results with three drones demonstrate rapid convergence to desired formations and stability under maneuvers, validating the effectiveness of the control law. The use of tables and equations summarized key design aspects, and the frequent mention of “drone formation” emphasized the application context. This work contributes to the growing field of multi-agent systems, offering a scalable and robust solution for coordinating drone formations in various applications. Future research will focus on addressing challenges like disturbances, scalability, and safety to advance drone formation technology further.
