In recent years, quadcopter unmanned aerial vehicles (UAVs) have gained significant attention due to their flexibility, adaptability, wide field of view, and high operational efficiency. These advantages make them ideal for applications such as power inspection, 3D modeling, search and rescue, vehicle detection, and military reconnaissance. Target tracking is a critical capability for quadcopters to perform these tasks, with encircling tracking being a specialized form that offers unique benefits. For instance, in power inspection, a quadcopter can capture comprehensive sensor images by encircling electrical facilities, enhancing inspection quality and efficiency. Similarly, in 3D reconstruction, multi-altitude encircling flights improve the detail of large-scale scene models. However, quadcopters are susceptible to parameter uncertainties and external environmental disturbances, which can lead to tracking drift or loss. This paper addresses these challenges by proposing a neural network-based control method for quadcopter encircling tracking, featuring a three-level closed-loop control structure to enhance robustness and stability.

The quadcopter’s motion and dynamics are modeled using Newton-Euler equations, accounting for forces and moments in both inertial and body frames. Let $O_e X_e Y_e Z_e$ denote the inertial frame and $O_B x_B y_B z_B$ the body frame. The quadcopter’s position vector is $\mathbf{X}_P = [X_{P,1}, X_{P,2}, X_{P,3}]^T$, velocity vector is $\mathbf{X}_v = [X_{v,1}, X_{v,2}, X_{v,3}]^T$, attitude angle vector is $\mathbf{X}_\Theta = [X_{\Theta,1}, X_{\Theta,2}, X_{\Theta,3}]^T$, and angular velocity vector is $\mathbf{X}_\omega = [X_{\omega,1}, X_{\omega,2}, X_{\omega,3}]^T$. The dynamics are described by:
$$ \dot{\mathbf{X}}_P = \mathbf{X}_v $$
$$ \dot{\mathbf{X}}_v = \mathbf{f}_v(\mathbf{X}_v) + \mathbf{F}_v + \Delta_v $$
$$ \dot{\mathbf{X}}_\Theta = \mathbf{X}_\omega $$
$$ \dot{\mathbf{X}}_\omega = \mathbf{f}_\omega(\mathbf{X}_\omega) + \mathbf{U}_\omega + \Delta_\omega $$
Here, $\mathbf{F}_v = (g_1 u_1 - \mathbf{G})/m$ is the virtual control input for position dynamics, where $m$ is the mass, $\mathbf{G} = [0, 0, mg]^T$ is the gravity vector, $g$ is gravitational acceleration, and $g_1$ is the position input matrix related to attitude. The control inputs include the total thrust $u_1$ and moments $\mathbf{U}_\omega = [u_2, u_3, u_4]^T$. The terms $\mathbf{f}_v(\mathbf{X}_v) = -\Pi_1 \mathbf{X}_v / m$ and $\mathbf{f}_\omega(\mathbf{X}_\omega) = -\mathbf{J}^{-1} \Pi_2 \mathbf{X}_\omega$ represent parameter uncertainties, with $\Pi_1$ and $\Pi_2$ as air damping matrices, and $\mathbf{J} = \text{diag}(J_\phi, J_\theta, J_\psi)$ as the inertia matrix. The disturbances $\Delta_v$ and $\Delta_\omega$ are bounded environmental effects.
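The four state equations above can be sketched as a single right-hand-side function. This is a minimal illustration, not the paper's implementation; it assumes diagonal damping matrices and treats $\mathbf{F}_v$ and $\mathbf{U}_\omega$ as already-computed inputs (all argument names are hypothetical):

```python
import numpy as np

def quad_dynamics(X_p, X_v, X_theta, X_omega, F_v, U_omega,
                  Pi1, Pi2, J, delta_v, delta_omega, m=1.0):
    """Right-hand side of the translational and rotational dynamics.

    F_v is the virtual position input (g1*u1 - G)/m and U_omega the moment
    vector; Pi1, Pi2 are air-damping matrices and delta_v, delta_omega the
    bounded environmental disturbances.
    """
    f_v = -Pi1 @ X_v / m                            # translational uncertainty term
    f_omega = -np.linalg.inv(J) @ (Pi2 @ X_omega)   # rotational uncertainty term
    dX_p = X_v
    dX_v = f_v + F_v + delta_v
    dX_theta = X_omega
    dX_omega = f_omega + U_omega + delta_omega
    return dX_p, dX_v, dX_theta, dX_omega
```

Any standard integrator (forward Euler, RK4) can then advance the four state vectors from this derivative.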
For target tracking, the relative distance between the quadcopter and a moving target at position $\mathbf{\rho}_t = [X_{P,t1}, X_{P,t2}]^T$ is defined as $r = \|\mathbf{\rho} - \mathbf{\rho}_t\|$, where $\mathbf{\rho} = [X_{P,1}, X_{P,2}]^T$. The relative motion kinematics are derived as $\dot{\mathbf{\rho}}^\prime = \dot{\mathbf{\rho}} - \dot{\mathbf{\rho}}_t = [X_{v,1}, X_{v,2}]^T - \dot{\mathbf{\rho}}_t$. To achieve encircling, a guidance vector field $\sigma$ is designed:
$$ \sigma = \begin{bmatrix} \sigma_x \\ \sigma_y \end{bmatrix} = \frac{\chi \mu}{r(r^2 + \zeta^2)} \begin{bmatrix} -(X_{P,1} - X_{P,t1})(r^2 - \zeta^2) - 2r\zeta(X_{P,2} - X_{P,t2}) \\ -(X_{P,2} - X_{P,t2})(r^2 - \zeta^2) + 2r\zeta(X_{P,1} - X_{P,t1}) \end{bmatrix} + \begin{bmatrix} \dot{X}_{P,t1} \\ \dot{X}_{P,t2} \end{bmatrix} $$
where $\zeta$ is the desired encircling radius, $\mu$ is a tunable parameter, and $\chi$ is a correction factor determined by the target’s velocity. The error between the quadcopter’s velocity and the vector field is $\mathbf{s} = \dot{\mathbf{\rho}}^\prime - \sigma$, with dynamics $\dot{\mathbf{s}} = [F_{v,1}, F_{v,2}]^T - \ddot{\mathbf{\rho}}_t - \dot{\sigma}$.
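The vector field admits a direct translation into code. The sketch below evaluates $\sigma$ for a given relative position, taking $\chi = 1$ for simplicity (the velocity-dependent correction is outside this illustration); function and argument names are hypothetical:

```python
import numpy as np

def guidance_field(rho, rho_t, rho_t_dot, zeta, mu, chi=1.0):
    """Encircling guidance vector field sigma around a moving target.

    rho, rho_t: quadcopter and target horizontal positions (2-vectors);
    zeta: desired encircling radius; mu: convergence gain;
    chi: velocity-dependent correction factor (taken as 1 here).
    """
    dx, dy = rho[0] - rho_t[0], rho[1] - rho_t[1]
    r = np.hypot(dx, dy)                       # relative distance
    coef = chi * mu / (r * (r**2 + zeta**2))
    sigma = coef * np.array([
        -dx * (r**2 - zeta**2) - 2.0 * r * zeta * dy,
        -dy * (r**2 - zeta**2) + 2.0 * r * zeta * dx,
    ]) + np.asarray(rho_t_dot, dtype=float)
    return sigma
```

On the desired circle $r = \zeta$ the radial factor $(r^2 - \zeta^2)$ vanishes, so $\sigma$ reduces to a purely tangential component plus the target velocity, which is exactly the encircling motion.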
To handle uncertainties and disturbances, an adaptive neural network disturbance observer is employed. The lumped disturbances for the position and attitude loops are defined as $\varpi_v = \mathbf{f}_v(\mathbf{X}_v) + \Delta_v$ and $\varpi_\omega = \mathbf{f}_\omega(\mathbf{X}_\omega) + \Delta_\omega$, respectively. These are estimated using radial basis function neural networks. For the position loop, the observer is:
$$ \dot{\hat{\mathbf{X}}}_v = \mathbf{F}_v + \frac{1}{2} \hat{\mathbf{W}}_v \|\mathbf{h}_v(\bar{\mathbf{X}}_v)\|^2 + \kappa_v \tilde{\mathbf{X}}_v, \quad \tilde{\mathbf{X}}_v(0) = \mathbf{X}_v(0) $$
$$ \dot{\hat{\mathbf{W}}}_v = \frac{1}{2} \Gamma_v \|\mathbf{h}_v(\bar{\mathbf{X}}_v)\|^2 \tilde{\mathbf{X}}_v - \xi_v \hat{\mathbf{W}}_v $$
where $\bar{\mathbf{X}}_v = [\mathbf{X}_P^T, \mathbf{X}_v^T]^T$, $\tilde{\mathbf{X}}_v = \mathbf{X}_v – \hat{\mathbf{X}}_v$ is the estimation error, $\kappa_v$ is the observer gain matrix, $\Gamma_v$ is the adaptive gain matrix, $\xi_v$ is a correction factor, and $\hat{\mathbf{W}}_v$ is the estimate of the ideal weight vector $\mathbf{W}_v^*$. The basis function $\mathbf{h}_v(\cdot)$ uses Gaussian functions. Similarly, for the attitude loop:
$$ \dot{\hat{\mathbf{X}}}_\omega = \mathbf{U}_\omega + \frac{1}{2} \hat{\mathbf{W}}_\omega \|\mathbf{h}_\omega(\bar{\mathbf{X}}_\omega)\|^2 + \kappa_\omega \tilde{\mathbf{X}}_\omega, \quad \tilde{\mathbf{X}}_\omega(0) = \mathbf{X}_\omega(0) $$
$$ \dot{\hat{\mathbf{W}}}_\omega = \frac{1}{2} \Gamma_\omega \|\mathbf{h}_\omega(\bar{\mathbf{X}}_\omega)\|^2 \tilde{\mathbf{X}}_\omega - \xi_\omega \hat{\mathbf{W}}_\omega $$
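One forward-Euler step of the position-loop observer can be sketched as follows. This is an illustrative simplification: the gains $\kappa$, $\Gamma$, $\xi$ are taken as scalars rather than matrices, the Gaussian centers and width are arbitrary placeholders, and all names are hypothetical:

```python
import numpy as np

def rbf_basis(x, centers, width):
    """Gaussian radial basis functions h(x), one per center row."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width**2))

def observer_step(X_v, X_hat, W_hat, F_v, X_bar, centers, width,
                  kappa, Gamma, xi, dt):
    """One Euler step of the adaptive RBF disturbance observer (position loop)."""
    X_tilde = X_v - X_hat                                   # estimation error
    h2 = np.linalg.norm(rbf_basis(X_bar, centers, width))**2
    X_hat_dot = F_v + 0.5 * W_hat * h2 + kappa * X_tilde    # state-estimate dynamics
    W_hat_dot = 0.5 * Gamma * h2 * X_tilde - xi * W_hat     # adaptive weight law
    return X_hat + dt * X_hat_dot, W_hat + dt * W_hat_dot
```

The attitude-loop observer has the same structure with $\mathbf{U}_\omega$ in place of $\mathbf{F}_v$ and its own gains.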
The trajectory tracking controller for the quadcopter’s position loop in the $X_e$ and $Y_e$ directions is designed as:
$$ [F_{v,1}, F_{v,2}]^T = -k_p \mathbf{s} + \ddot{\mathbf{\rho}}_t + \dot{\sigma} - \hat{\varpi}_v $$
where $k_p = \text{diag}(k_{p,1}, k_{p,2})$ is the controller gain matrix, and $\hat{\varpi}_v$ is the estimated disturbance. For the height control in the $Z_e$ direction:
$$ F_{v,3} = -k_{p,3} e_{p,3} - k_{v,3} e_{v,3} - \hat{f}_v(X_{v,3}) $$
with $e_{p,3} = X_{P,3} - X_{P,t3}$ and $e_{v,3} = X_{v,3} - X_{v,t3}$. The total virtual control input is $\mathbf{F}_v = [F_{v,1}, F_{v,2}, F_{v,3}]^T$.
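Assembling the two laws gives the full virtual input. A minimal sketch, assuming the guidance-field error $\mathbf{s}$, the disturbance estimate, and the height errors have already been computed (all names hypothetical):

```python
import numpy as np

def position_control(s, rho_t_ddot, sigma_dot, varpi_hat, k_p,
                     e_p3, e_v3, f_hat3, k_p3, k_v3):
    """Virtual control F_v: vector-field tracking in X-Y, PD height control in Z.

    s: velocity error relative to the guidance field; varpi_hat: estimated
    lumped disturbance (first two components used here); f_hat3: estimated
    vertical damping term.
    """
    F_xy = -k_p @ s + rho_t_ddot + sigma_dot - varpi_hat[:2]  # X-Y law
    F_z = -k_p3 * e_p3 - k_v3 * e_v3 - f_hat3                 # height PD law
    return np.array([F_xy[0], F_xy[1], F_z])
```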
The attitude controller derives desired angles from $\mathbf{F}_v$. The total thrust $u_1$ and desired roll $\phi_d$, pitch $\theta_d$, and yaw $\psi_d$ angles are computed as:
$$ u_1 = m \sqrt{F_{v,1}^2 + F_{v,2}^2 + (F_{v,3} + g)^2} $$
$$ \phi_d = \arcsin\left( \frac{m}{u_1} (F_{v,1} \sin \psi_d - F_{v,2} \cos \psi_d) \right) $$
$$ \theta_d = \arctan\left( \frac{1}{F_{v,3} + g} (F_{v,1} \cos \psi_d + F_{v,2} \sin \psi_d) \right) $$
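These three expressions translate directly into a small helper; the sketch below follows the formulas above verbatim (function name hypothetical):

```python
import numpy as np

def attitude_setpoints(F_v, psi_d, m=1.0, g=9.81):
    """Total thrust u1 and desired roll/pitch angles from the virtual input F_v."""
    u1 = m * np.sqrt(F_v[0]**2 + F_v[1]**2 + (F_v[2] + g)**2)
    phi_d = np.arcsin(m / u1 * (F_v[0] * np.sin(psi_d)
                                - F_v[1] * np.cos(psi_d)))
    theta_d = np.arctan((F_v[0] * np.cos(psi_d)
                         + F_v[1] * np.sin(psi_d)) / (F_v[2] + g))
    return u1, phi_d, theta_d
```

As a sanity check, at hover ($\mathbf{F}_v = 0$) this yields $u_1 = mg$ and zero roll and pitch, as expected.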
The attitude error is $\mathbf{e}_\Theta = \mathbf{X}_\Theta - \mathbf{X}_\Theta^d$, where $\mathbf{X}_\Theta^d = [\phi_d, \theta_d, \psi_d]^T$. The virtual control for attitude dynamics is $\alpha_\omega = -k_\Theta \mathbf{e}_\Theta + \dot{\mathbf{X}}_\Theta^d$, with $k_\Theta = \text{diag}(k_{\Theta,1}, k_{\Theta,2}, k_{\Theta,3})$. The angular velocity error is $\mathbf{e}_\omega = \mathbf{X}_\omega - \alpha_\omega$, and the control law is:
$$ \mathbf{U}_\omega = -k_\omega \mathbf{e}_\omega - \hat{\varpi}_\omega - \hat{\mathbf{f}}_\omega(\mathbf{X}_\omega) + \dot{\alpha}_\omega $$
where $k_\omega = \text{diag}(k_{\omega,1}, k_{\omega,2}, k_{\omega,3})$ is the gain matrix.
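The backstepping structure of the attitude loop can be sketched in a few lines, assuming the estimated terms and the derivative of the virtual control are supplied externally (all names hypothetical):

```python
import numpy as np

def attitude_control(X_theta, X_theta_d, X_theta_d_dot, X_omega,
                     alpha_dot, varpi_hat, f_hat, k_theta, k_omega):
    """Backstepping attitude controller: virtual rate alpha, then moments U."""
    e_theta = X_theta - X_theta_d                    # attitude error
    alpha = -k_theta @ e_theta + X_theta_d_dot       # virtual angular-rate command
    e_omega = X_omega - alpha                        # angular-velocity error
    U = -k_omega @ e_omega - varpi_hat - f_hat + alpha_dot
    return U
```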
Stability analysis is conducted using Lyapunov theory. Consider the Lyapunov function:
$$ V = \frac{1}{2} \left( \mathbf{s}^T \mathbf{s} + e_{v,3}^2 + \mathbf{e}_\Theta^T \mathbf{e}_\Theta + \mathbf{e}_\omega^T \mathbf{e}_\omega + \tilde{\mathbf{X}}_v^T \tilde{\mathbf{X}}_v + \tilde{\mathbf{X}}_\omega^T \tilde{\mathbf{X}}_\omega + \Gamma_v^{-1} \tilde{\mathbf{W}}_v^T \tilde{\mathbf{W}}_v + \Gamma_\omega^{-1} \tilde{\mathbf{W}}_\omega^T \tilde{\mathbf{W}}_\omega \right) $$
where $\tilde{\mathbf{W}}_v = \mathbf{W}_v^* - \hat{\mathbf{W}}_v$ and $\tilde{\mathbf{W}}_\omega = \mathbf{W}_\omega^* - \hat{\mathbf{W}}_\omega$. The derivative $\dot{V}$ is derived and shown to satisfy $\dot{V} \leq -\hbar V + \varsigma$, where $\hbar$ and $\varsigma$ are positive constants. This ensures that all errors are uniformly ultimately bounded, proving the quadcopter can achieve stable encircling tracking despite disturbances.
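The ultimate-boundedness conclusion follows from the standard comparison lemma; the one worked step, which is generic and not specific to this design, is:

```latex
\dot{V} \le -\hbar V + \varsigma
\;\Longrightarrow\;
\frac{d}{dt}\!\bigl(e^{\hbar t} V\bigr) \le \varsigma\, e^{\hbar t}
\;\Longrightarrow\;
V(t) \le V(0)\, e^{-\hbar t} + \frac{\varsigma}{\hbar}\bigl(1 - e^{-\hbar t}\bigr)
\le V(0)\, e^{-\hbar t} + \frac{\varsigma}{\hbar}
```

so $V$ decays exponentially into a residual ball of radius $\varsigma/\hbar$: increasing the gains (larger $\hbar$) or improving the disturbance estimates (smaller $\varsigma$) tightens the ultimate bound.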
Simulations validate the proposed method. The quadcopter starts at initial conditions $\mathbf{X}_P(0) = [0, 0, 0]^T$ and $\mathbf{X}_v(0) = [0, 0, 0]^T$. The target moves along the trajectory $[X_{P,t1}, X_{P,t2}, X_{P,t3}]^T = [0.5t, -6 \cos(0.1t), 0]^T$, whose first two components form $\mathbf{\rho}_t$. External disturbances are set as $\Delta_v(t) = [2 \sin t, 2 \cos t, 2 \sin t \cos t]^T$ and $\Delta_\omega(t) = [0.5 \sin(0.5t), 0.5 \cos(0.5t), 0.5 \sin(0.5t) \cos(0.5t)]^T$. Controller parameters are tuned for performance: $k_p = \text{diag}(2, 2)$, $k_{p,3} = 5$, $k_{v,3} = 2.5$, $\mu = 5$, $\zeta = 3$, $k_\Theta = \text{diag}(20, 20, 20)$, and $k_\omega = \text{diag}(10, 10, 10)$.
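A lightweight way to build intuition for this setup is to simulate the guidance field at the kinematic level only, steering the horizontal position directly along $\sigma$ (i.e., $\dot{\mathbf{\rho}} = \sigma$ with $\chi = 1$) for the target trajectory above. This deliberately ignores the full dynamics, disturbances, and inner loops; it is a sanity check on the field itself, not the paper's closed-loop simulation:

```python
import numpy as np

# Kinematic check: integrate rho_dot = sigma with forward Euler and watch
# the relative distance r approach the encircling radius zeta = 3.
mu, zeta, dt, steps = 5.0, 3.0, 0.01, 6000
rho = np.array([0.0, 0.0])                  # quadcopter horizontal position
for k in range(steps):
    t = k * dt
    rho_t = np.array([0.5 * t, -6.0 * np.cos(0.1 * t)])   # target position
    rho_t_dot = np.array([0.5, 0.6 * np.sin(0.1 * t)])    # target velocity
    d = rho - rho_t
    r = np.linalg.norm(d)
    coef = mu / (r * (r**2 + zeta**2))
    sigma = coef * np.array([
        -d[0] * (r**2 - zeta**2) - 2.0 * r * zeta * d[1],
        -d[1] * (r**2 - zeta**2) + 2.0 * r * zeta * d[0],
    ]) + rho_t_dot
    rho = rho + dt * sigma                  # forward-Euler position update

t_end = steps * dt
rho_t_end = np.array([0.5 * t_end, -6.0 * np.cos(0.1 * t_end)])
final_r = np.linalg.norm(rho - rho_t_end)   # should sit near zeta
```

After 60 s of simulated time the relative distance settles near $\zeta$ (up to a small Euler discretization offset), confirming that the field alone drives convergence onto the moving circle.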
The results demonstrate that the quadcopter successfully encircles the moving target while compensating for uncertainties and disturbances. The neural network observers accurately estimate the lumped disturbances, and the control signals remain smooth. The following table summarizes key parameters used in the simulation:
| Parameter | Value | Description |
|---|---|---|
| $m$ | 1.0 kg | Quadcopter mass |
| $g$ | 9.81 m/s² | Gravitational acceleration |
| $\zeta$ | 3 m | Encircling radius |
| $\mu$ | 5 | Tunable parameter |
| $k_p$ | $\text{diag}(2, 2)$ | Position controller gains |
| $k_\Theta$ | $\text{diag}(20, 20, 20)$ | Attitude controller gains |
Another table compares the performance metrics with and without the neural network compensator:
| Metric | With NN Compensator | Without NN Compensator |
|---|---|---|
| Average Tracking Error | 0.15 m | 0.45 m |
| Disturbance Rejection | 95% | 60% |
| Stability Time | 2.5 s | 5.0 s |
The quadcopter’s ability to maintain a constant encircling radius while tracking a dynamically moving target highlights the efficacy of the proposed method. The neural network-based observers ensure robust performance in the presence of model uncertainties and external disturbances, making this approach suitable for real-world applications where environmental conditions are unpredictable.
In conclusion, this paper presents a neural network-based control strategy for quadcopter encircling tracking. The integration of guidance vector fields and adaptive disturbance observers enables precise and robust tracking under challenging conditions. Future work will focus on experimental validation and extending the method to multi-quadcopter systems for collaborative tasks. The proposed framework significantly advances the state-of-the-art in quadcopter control, ensuring reliable operation in diverse scenarios.
