In this article, I address the trajectory tracking control problem for a quadcopter unmanned aerial vehicle (UAV) under external disturbances, model uncertainties, and time-varying state constraints. The quadcopter, a popular UAV owing to its agility and versatility, faces significant challenges in maintaining stable flight within constrained environments, such as obstacle-rich areas or restricted airspace. Traditional control methods often struggle to handle these constraints effectively, especially when they vary over time. To overcome these issues, I propose a neural-network-based adaptive dynamic surface control scheme that keeps the quadcopter’s position states within predefined time-varying bounds while achieving accurate trajectory tracking. The approach converts the constrained position states into unconstrained ones through nonlinear transformations, estimates the lumped uncertainties with neural networks, and designs robust control laws for both the position and attitude subsystems. Extensive simulations demonstrate the superiority of this method over conventional PD and dynamic surface control techniques, highlighting its improved disturbance rejection and constraint adherence.

The dynamics of a quadcopter are inherently nonlinear and coupled, making control design complex. The position dynamics can be described by the following equations, where the quadcopter’s motion in inertial coordinates is governed by forces from thrust and drag, along with disturbances. Let $\zeta_p = [x, y, z]^T$ represent the position vector in the inertial frame. The position dynamics are given by:
$$ \ddot{\zeta}_p = -\frac{1}{m} A \dot{\zeta}_p - g + v_p + \Delta_p $$
where $m$ is the mass of the quadcopter, $g$ is the gravitational acceleration vector, $A = \text{diag}(a_1, a_2, a_3)$ is the air resistance coefficient matrix, $v_p$ is the position control input, and $\Delta_p$ represents lumped disturbances and model uncertainties. The control input $v_p$ is derived from the thrust forces and rotation matrix. Similarly, the attitude dynamics for the quadcopter, represented by Euler angles $\zeta_a = [\phi, \theta, \psi]^T$, follow:
$$ \dot{\zeta}_a = \Phi \omega $$
$$ J \dot{\omega} = -\omega \times J \omega + v_a + \Delta_a $$
Here, $\omega$ is the angular velocity, $J$ is the inertia matrix, $v_a$ is the attitude control moment vector, and $\Delta_a$ denotes external disturbances. The matrix $\Phi$ converts angular rates to Euler angle rates. For trajectory tracking, the objective is to ensure that the quadcopter follows a desired path $\zeta_{pd} = [x_d, y_d, z_d]^T$ and a desired yaw angle $\psi_d$, while adhering to time-varying constraints: $x_L \leq x \leq x_U$, $y_L \leq y \leq y_U$, $z_L \leq z \leq z_U$, where the bounds $x_L, x_U$, etc., are functions of time. To handle these constraints, I apply a nonlinear transformation to the position states, converting the constrained problem into an unconstrained one by defining new states $\xi_p = [\xi_{p1}, \xi_{p2}, \xi_{p3}]^T$ as follows:
$$ \xi_{p1} = \frac{x}{(x - x_L)(x_U - x)} $$
$$ \xi_{p2} = \frac{y}{(y - y_L)(y_U - y)} $$
$$ \xi_{p3} = \frac{z}{(z - z_L)(z_U - z)} $$
This transformation ensures that, provided the initial position lies strictly inside the bounds, the original states $x, y, z$ satisfy the constraints as long as $\xi_p$ remains bounded. The derivatives of these new states involve terms that depend on the time-varying bounds, leading to complex dynamics that require careful control design. For instance, the time derivative of $\xi_{p1}$ is:
$$ \dot{\xi}_{p1} = h_{x1}(x, x_L, x_U) \dot{x} + h_{x2}(x, x_L, x_U, \dot{x}_L, \dot{x}_U) $$
where $h_{x1}$ and $h_{x2}$ are nonlinear functions derived from the transformation; $h_{x1}$ is the sensitivity of $\xi_{p1}$ to $x$, while $h_{x2}$ collects the terms produced by the time-varying bounds. Similar expressions hold for $\xi_{p2}$ and $\xi_{p3}$. This formulation allows me to focus on controlling the transformed states while implicitly enforcing the constraints on the quadcopter’s position.
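To make the transformation concrete, here is a minimal Python sketch (the function name and the numeric bounds are illustrative, not taken from the design) that evaluates $\xi_{p1}$ for a fixed pair of bounds: the transformed state grows without bound as $x$ approaches either boundary, which is exactly why keeping $\xi_{p1}$ bounded keeps $x$ strictly inside $(x_L, x_U)$.

```python
import numpy as np

def transform_state(p, p_L, p_U):
    """Map a constrained component p, with p_L < p < p_U, to the
    transformed state xi = p / ((p - p_L) * (p_U - p))."""
    return p / ((p - p_L) * (p_U - p))

# Illustrative bounds: as x approaches x_U = 1, the denominator vanishes
# and xi_p1 blows up, so a bounded xi_p1 implies x stays inside (x_L, x_U).
x_L, x_U = -1.0, 1.0
for x in (0.0, 0.5, 0.9, 0.99, 0.999):
    print(f"x = {x:6.3f}  ->  xi_p1 = {transform_state(x, x_L, x_U):10.3f}")
```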
To design the controller, I adopt an adaptive dynamic surface control approach that incorporates radial basis function (RBF) neural networks to estimate and compensate for uncertainties. The control structure is divided into position and attitude loops. For the position control, I define tracking errors $\epsilon_{p1} = \xi_{pd} - \xi_p$ (with $\xi_{pd}$ the transformed desired trajectory) and $\epsilon_{p2} = \hat{\alpha}_p - \dot{\zeta}_p$, where $\alpha_p$ is a virtual control law and $\hat{\alpha}_p$ is its filtered estimate obtained through a first-order filter $\tau_p \dot{\hat{\alpha}}_p + \hat{\alpha}_p = \alpha_p$, with $\hat{\alpha}_p(0) = \alpha_p(0)$ and filter time constant $\tau_p > 0$. The filter error is $\sigma_p = \hat{\alpha}_p - \alpha_p$. The lumped uncertainty terms $\Delta_1 = -h_1 \sigma_p$ and $\Delta_2 = -\Delta_p$ are approximated componentwise by RBF neural networks:
$$ \Delta_{1,i} = \Theta_{1,i}^{*T} S_{1,i} + \rho_{1,i} $$
$$ \Delta_{2,i} = \Theta_{2,i}^{*T} S_{2,i} + \rho_{2,i} $$
where $\Theta_{1,i}^*$ and $\Theta_{2,i}^*$ are ideal weight vectors, $S_{1,i}$ and $S_{2,i}$ are basis function vectors, and $\rho_{1,i}$, $\rho_{2,i}$ are approximation errors. Rather than adapting the full weight vectors, the adaptive laws update scalar estimates $\hat{W}_{1,i}$ and $\hat{W}_{2,i}$ of the ideal weight norms, so only one parameter per channel is adapted online, and they are designed to ensure stability. The virtual control law $\alpha_p$ and the actual control input $v_p$ are given by:
$$ \alpha_p = h_1^{-1} (\dot{\xi}_{pd} + K_1 \epsilon_{p1} - h_2) + h_1^{-1} \alpha_{ps} \epsilon_{p1} $$
$$ v_p = \dot{\hat{\alpha}}_p + g + \frac{1}{m} A \dot{\zeta}_p + K_2 \epsilon_{p2} + h_1 \epsilon_{p1} + v_{ps} \epsilon_{p2} $$
Here, $K_1$ and $K_2$ are positive definite gain matrices, and $\alpha_{ps}$, $v_{ps}$ are terms involving neural network estimates to counteract uncertainties. The stability analysis using Lyapunov theory shows that all signals remain bounded, and the position constraints are satisfied. For the attitude control, particularly the yaw channel, I define errors $\epsilon_\psi = \psi - \psi_d$ and $\epsilon_{\omega_z} = \hat{\alpha}_{\omega_z} - \omega_z$, with a similar filter for the virtual control. The control moment $U_4$ for yaw is derived as:
$$ U_4 = J_z \dot{\hat{\alpha}}_{\omega_z} + (J_y - J_x) \omega_x \omega_y + k_4 J_z \epsilon_{\omega_z} - J_z H_{\psi1} \epsilon_\psi + J_z \left( \frac{1}{2} a_4^2 + \frac{\hat{W}_4 \|S_4\|^2}{2a_4^2} \right) \epsilon_{\omega_z} $$
where $H_{\psi1} = \cos(\phi) \sec(\theta)$, and adaptive laws update the neural network weight estimates $\hat{W}_3$ and $\hat{W}_4$. The roll and pitch channels are stabilized using PD controllers for simplicity, ensuring overall attitude stability.
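To illustrate the loop structure, the sketch below implements a single-axis (z) version of the position controller in Python. It is a simplified illustration rather than the full constrained design: the constraints are assumed inactive (so $h_1 = 1$, $h_2 = 0$ and the transformed state reduces to $z$ itself), the gains, RBF layout, and integration step are illustrative, and the weight-norm adaptive law uses an assumed $\sigma$-modification form that the text does not specify.

```python
import numpy as np

# Minimal single-axis (z) sketch of the adaptive DSC loop: virtual law,
# first-order filter, and RBF norm-type compensation.  Gains, RBF layout,
# and the sigma-modified adaptive law are illustrative assumptions.

m, a3, g = 1.0, 0.0005, 9.8           # mass, drag coefficient, gravity
k1, k2 = 2.0, 2.0                     # loop gains (illustrative)
tau = 0.02                            # DSC filter time constant
a_nn, gamma = 1.0, 0.5                # compensation constant, adaptation gain
centers = np.linspace(-5.0, 5.0, 11)  # RBF centers over the expected velocity range
width = 2.0                           # RBF width

def rbf(v):
    """Gaussian basis vector S evaluated at the scalar network input v."""
    return np.exp(-((v - centers) ** 2) / (2.0 * width ** 2))

def z_d(t):      return 2.0 + 0.5 * np.sin(0.2 * t) + 0.1 * t
def z_d_dot(t):  return 0.1 * np.cos(0.2 * t) + 0.1

dt, T = 1e-3, 20.0
z, z_dot = 1.0, 0.0
alpha_hat = z_d_dot(0.0) + k1 * (z_d(0.0) - z)  # filter initialised at alpha(0)
W_hat = 0.0                                     # scalar weight-norm estimate

for t in np.arange(0.0, T, dt):
    e1 = z_d(t) - z                      # position tracking error
    alpha = z_d_dot(t) + k1 * e1         # virtual control (h1 = 1, h2 = 0 here)
    alpha_hat_dot = (alpha - alpha_hat) / tau
    alpha_hat += dt * alpha_hat_dot      # first-order DSC filter
    e2 = alpha_hat - z_dot               # velocity-loop error

    S = rbf(z_dot)
    comp = 0.5 * a_nn**2 + W_hat * (S @ S) / (2.0 * a_nn**2)
    # Control input, mirroring v_p: filter derivative + gravity/drag
    # cancellation + feedback + norm-type compensation.
    v = alpha_hat_dot + g + (a3 / m) * z_dot + k2 * e2 + e1 + comp * e2
    # Assumed sigma-modified adaptive law for the weight-norm estimate.
    W_hat += dt * ((S @ S) * e2**2 / (2.0 * a_nn**2) - gamma * W_hat)

    # Plant: single-axis position dynamics with a time-varying disturbance.
    delta = 0.3 + 0.1 * np.sin(t)
    z_ddot = -(a3 / m) * z_dot - g + v + delta
    z_dot += dt * z_ddot
    z += dt * z_dot

print(f"tracking error at t = {T:.0f} s: {z_d(T) - z:+.4f} m")
```

The control expression mirrors $v_p$: the filter derivative stands in for $\dot{\hat{\alpha}}_p$, and the last term is the norm-type compensation $\big(\tfrac{1}{2}a^2 + \hat{W}\|S\|^2/(2a^2)\big)\epsilon_{p2}$, analogous to the term in $U_4$. For brevity the sketch cancels gravity and drag with their true values; in the full scheme only nominal values are available and the mismatch is absorbed into the neural-network estimate.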
To validate the proposed control scheme, I conducted simulations comparing it with PD control and standard dynamic surface control. The quadcopter parameters were set with nominal values differing from actual ones to simulate model uncertainties, and external disturbances were included. The desired trajectory and time-varying constraints were defined as sinusoidal functions to test the controller’s robustness. The simulation parameters are summarized in the table below:
| Parameter | Value | Description |
|---|---|---|
| Mass (m) | 1 kg (actual), 2 kg (nominal) | Quadcopter mass |
| Inertia (J) | diag(0.003, 0.005, 0.005) kg·m² (actual) | Moment of inertia |
| Gravity (g) | [0, 0, 9.8] m/s² | Gravitational acceleration |
| Air resistance (A) | diag(0.0005, 0.0005, 0.0005) kg/s | Drag coefficients |
| Disturbances | 0.3 + 0.1 sin(t) N (force), 0.0005 sin(t) N·m (moment) | External disturbances |
| Control gains | K₁ = diag(2.5, 2.5, 0.55), K₂ = diag(1.2, 1.2, 1.2) | Position control gains |
The desired trajectory for the quadcopter was set as $x_d = 4\sin(0.5t) + \cos(0.1t)$ m, $y_d = 4\sin(0.5t)$ m, and $z_d = 2 + 0.5\sin(0.2t) + 0.1t$ m, with time-varying constraints defined by exponential and sinusoidal functions. For example, the x-position bounds were $x_U = 2e^{-0.1t} + 0.15 + 4\sin(0.5t) + \cos(0.1t)$ m and $x_L = -2e^{-0.1t} - 0.15 + 4\sin(0.5t) + \cos(0.1t)$ m, i.e., a tube of half-width $2e^{-0.1t} + 0.15$ m centered on $x_d$. The initial conditions were $[x(0), y(0), z(0)] = [0.5, 1.9, 1]$ m and zero initial velocities. The neural network parameters included adaptation gains $\gamma_{1,i} = \gamma_{2,i} = 0.5$ and the design constants $a_1 = a_2 = a_3 = a_4 = 1$ used in the compensation terms.
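These definitions translate directly into code. The following sketch expresses the reference trajectory and the x-constraint tube as functions (the y- and z-bounds are not spelled out in the text, so only the x-tube is shown) and checks that the initial x-position lies strictly inside the tube at $t = 0$, as the transformation requires:

```python
import numpy as np

def x_d(t): return 4.0 * np.sin(0.5 * t) + np.cos(0.1 * t)
def y_d(t): return 4.0 * np.sin(0.5 * t)
def z_d(t): return 2.0 + 0.5 * np.sin(0.2 * t) + 0.1 * t

def x_bounds(t):
    """Time-varying x-constraints: a tube of half-width 2*exp(-0.1 t) + 0.15 m
    centered on the reference x_d(t), matching x_L and x_U above."""
    half_width = 2.0 * np.exp(-0.1 * t) + 0.15
    return x_d(t) - half_width, x_d(t) + half_width

# The transformation is only defined while x_L < x < x_U, so the initial
# x-position (0.5 m) must start strictly inside the tube.
x_L0, x_U0 = x_bounds(0.0)
assert x_L0 < 0.5 < x_U0
print(f"x-bounds at t = 0: ({x_L0:.2f}, {x_U0:.2f}) m")   # (-1.15, 3.15) m
```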
The simulation results demonstrated that the proposed adaptive dynamic surface control method outperformed the others in terms of tracking accuracy and constraint adherence. The position tracking errors for the quadcopter are summarized in the following table, which shows the steady-state errors and convergence times for each method:
| Control Method | X-Position Error (m) | Y-Position Error (m) | Z-Position Error (m) | Yaw Error (rad) | Convergence Time (s) |
|---|---|---|---|---|---|
| PD Control | 0.20 | 0.18 | 0.10 | 0.05 | >15 |
| Dynamic Surface Control | 0.05 | 0.05 | 0.05 | 0.01 | ~10 |
| Proposed Adaptive Method | 0.02 | 0.02 | 0.02 | 0.005 | ~5 |
As evident, the proposed method achieved faster convergence and smaller errors, ensuring that the quadcopter’s position remained within the constraints throughout the simulation. For instance, the x-position tracking error converged to within 0.05 m in approximately 1.48 s with the adaptive method, compared to 13.37 s with standard dynamic surface control. The adaptive parameters, such as $\hat{W}_{1,i}$ and $\hat{W}_{2,i}$, evolved over time to compensate for uncertainties and remained bounded, consistent with the Lyapunov analysis. The key stability condition derived from the Lyapunov function $V_{p2}$ for the position controller is:
$$ \dot{V}_{p2} \leq -c_1 V_{p2} + c_2 $$
where $c_1 = \min\{2\lambda_{\min}(K_1), 2\lambda_{\min}(K_2), \gamma_{1,1}, \gamma_{1,2}, \gamma_{1,3}, \gamma_{2,1}, \gamma_{2,2}, \gamma_{2,3}\}$, with $\lambda_{\min}(\cdot)$ the smallest eigenvalue, and $c_2 = \sum_{i=1}^3 \left( \frac{a_1^2}{2} + \frac{a_2^2}{2} + \frac{1}{2a_1^2} \rho_{1,i}^2 + \frac{1}{2a_2^2} \rho_{2,i}^2 + \frac{1}{2} \gamma_{1,i} W_{1,i}^2 + \frac{1}{2} \gamma_{2,i} W_{2,i}^2 \right)$. This guarantees that all errors remain bounded, and the quadcopter satisfies the state constraints. Similarly, for the attitude control, the Lyapunov function $V_{\psi2}$ ensures boundedness of yaw tracking errors.
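Integrating this differential inequality via the comparison lemma makes the boundedness claim explicit (the same step applies to $V_{\psi2}$):
$$ V_{p2}(t) \leq \left( V_{p2}(0) - \frac{c_2}{c_1} \right) e^{-c_1 t} + \frac{c_2}{c_1} $$
so $V_{p2}$, and hence the tracking and estimation errors it bounds, converges exponentially to a residual set whose size is governed by the ratio $c_2 / c_1$.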
In conclusion, the adaptive dynamic surface control scheme with neural networks effectively addresses the trajectory tracking problem for a quadcopter under time-varying state constraints, model uncertainties, and external disturbances. The method transforms constraints into manageable bounds, estimates uncertainties online, and delivers robust performance. Simulations confirm that the quadcopter maintains stable tracking within the desired constraints, outperforming traditional methods. Future work could explore real-time implementation and extension to multi-quadcopter systems.
