Autonomous Landing Attitude Control for Quadcopter Drones Using TLD Algorithm

In complex outdoor environments, quadcopter drones often face challenges during autonomous landing, such as temporary occlusion of the target or the target moving out of the field of view, leading to tracking failure. To address these issues, we propose an autonomous landing attitude control method for quadcopter drones based on the Tracking-Learning-Detection (TLD) algorithm. The method integrates extended Kalman filtering with the TLD framework to detect and track specific targets through multiple median flow trackers. Using the accurately captured target position, a crowd search algorithm augmented with an additional inertia term tunes an active disturbance rejection controller (ADRC) by adapting search step sizes and directional inertia coefficients, thereby optimizing the flight attitude of the quadcopter and enhancing the stability and safety of the landing process. Experimental results demonstrate that our approach achieves an average center offset of at most 1.98 pixels during autonomous landing, with roll, pitch, and yaw angles closely matching their desired values. The deviations stay within 0.02 degrees, indicating superior performance and ensuring a safe landing for the quadcopter.

The core of our method lies in the fusion of TLD algorithm and extended Kalman filter for robust target tracking, followed by advanced control strategies for attitude regulation. The TLD algorithm comprises three modules: tracking, learning, and detection. The tracking module employs a median flow tracker based on Lucas-Kanade optical flow, which initializes a bounding box around the target and uses a 10×10 grid of feature points. Each point is tracked individually, and tracking errors are mitigated by discarding points with high forward-backward errors. The detection module scans the image with sliding windows at various scales, utilizing a cascade classifier (variance classifier, ensemble classifier, and nearest neighbor classifier) to identify candidate regions. The learning module updates the detector online using positive and negative samples generated by P-N experts, ensuring adaptability to target appearance changes. When combined with the extended Kalman filter, the system predicts target positions during occlusions, maintaining continuous tracking. The state prediction and update equations of the extended Kalman filter are as follows:
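The forward-backward error check described above can be sketched in a few lines. This is a minimal numpy illustration (function names are ours, not from the paper): each grid point is tracked from frame $t$ to $t+1$ and back again, points whose round-trip error exceeds the median are discarded, and the median displacement of the survivors moves the bounding box.

```python
import numpy as np

def fb_error_filter(pts0, pts0_back):
    """Forward-backward error check used by the median flow tracker.

    pts0:      (N, 2) grid points in frame t
    pts0_back: (N, 2) the same points after tracking t -> t+1 -> t
    Returns a boolean mask keeping points whose round-trip error is
    at or below the median, as in the TLD tracking module.
    """
    fb_err = np.linalg.norm(pts0 - pts0_back, axis=1)
    return fb_err <= np.median(fb_err)

def median_flow_displacement(pts0, pts1, keep):
    """Median displacement of the surviving points gives the box motion."""
    d = pts1[keep] - pts0[keep]
    return np.median(d, axis=0)
```

In practice the forward and backward point sets would come from a pyramidal Lucas-Kanade tracker (e.g. OpenCV's `cv2.calcOpticalFlowPyrLK`); only the filtering logic is shown here.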

$$X_{k|k-1} = E_{k|k-1} X_{k-1|k-1}$$

$$P_{k|k-1} = E_{k|k-1} P_{k-1|k-1} E_{k|k-1}^T + Q_{k-1}$$

$$K_k = P_{k|k-1} H_k^T (H_k P_{k|k-1} H_k^T + R_k)^{-1}$$

$$X_{k|k} = X_{k|k-1} + K_k (Z_k - H_k X_{k|k-1})$$

$$P_{k|k} = (I - K_k H_k) P_{k|k-1}$$

Here, $X_{k|k-1}$ represents the predicted state estimate, $P_{k|k-1}$ is the predicted error covariance, $E_{k|k-1}$ is the state-transition matrix (the Jacobian of the motion model in the extended filter), $K_k$ is the Kalman gain, $H_k$ is the measurement matrix, $Z_k$ is the measurement, and $Q_{k-1}$ and $R_k$ are the process and measurement noise covariances, respectively. This integration ensures accurate target localization even under dynamic conditions.
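The five equations above map directly onto a predict/update pair. A minimal numpy sketch follows, using a linear constant-velocity target model for illustration (the state layout and matrices are our assumption, not taken from the paper); during an occlusion the update step is simply skipped and the filter keeps predicting.

```python
import numpy as np

def ekf_predict(x, P, E, Q):
    # X_{k|k-1} = E X_{k-1|k-1};  P_{k|k-1} = E P E^T + Q
    x_pred = E @ x
    P_pred = E @ P @ E.T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, H, R):
    # K = P H^T (H P H^T + R)^{-1}, then correct state and covariance
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P
```

For a pixel-space target with state $[p_x, p_y, v_x, v_y]$, $E$ advances position by velocity each frame and $H$ extracts the position that the TLD tracker measures.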

For attitude control, we employ an ADRC strategy optimized by an additional inertia term crowd search algorithm. The ADRC consists of a tracking differentiator (TD), an extended state observer (ESO), and a nonlinear state error feedback (NLSEF). The TD provides a smooth reference signal and its derivative, with its discrete form given by:

$$u_1(t+1) = u_1(t) + h u_2(t)$$

$$u_2(t+1) = u_2(t) + h \cdot \text{fhan}(u_1(t) - u_0(t), u_2(t), r, h_0)$$

where $u_1(t)$ tracks the desired signal $u_0(t)$, $u_2(t)$ is the derivative, $r$ is the tracking factor, $h$ is the step size, and $\text{fhan}$ is a nonlinear function. The ESO estimates system states and disturbances:

$$e(t) = z_1(t) - y(t)$$

$$z_1(t+1) = z_1(t) + h (z_2(t) – \beta_{01} e(t))$$

$$z_2(t+1) = z_2(t) + h (z_3(t) – \beta_{02} \text{fal}(e(t), \alpha_1, \delta) + b u(t))$$

$$z_3(t+1) = z_3(t) + h (-\beta_{03} \text{fal}(e(t), \alpha_2, \delta))$$

Here, $z_1(t)$ and $z_2(t)$ estimate the system states, $z_3(t)$ estimates the total disturbance, $\beta_{01}$, $\beta_{02}$, $\beta_{03}$ are observer gains, and $\text{fal}$ is a nonlinear function. The NLSEF generates the control signal:

$$e_1(t) = u_1(t) - z_1(t)$$

$$e_2(t) = u_2(t) - z_2(t)$$

$$u_0(t) = \beta_1 \text{fal}(e_1(t), \alpha_1, \delta) + \beta_2 \text{fal}(e_2(t), \alpha_2, \delta)$$

$$u(t) = \frac{u_0(t) - z_3(t)}{b}$$
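The text does not spell out the $\text{fal}$ and $\text{fhan}$ functions; the sketch below uses their standard definitions from the ADRC literature (Han's forms), which is an assumption on our part, together with one discrete step of the tracking differentiator.

```python
import numpy as np

def fal(e, alpha, delta):
    """Nonlinear gain: linear inside |e| <= delta, power law outside."""
    if abs(e) <= delta:
        return e / delta**(1.0 - alpha)
    return np.sign(e) * abs(e)**alpha

def fhan(x1, x2, r, h0):
    """Han's time-optimal synthesis function for the tracking differentiator."""
    d = r * h0
    d0 = d * h0
    y = x1 + h0 * x2
    a0 = np.sqrt(d * d + 8.0 * r * abs(y))
    if abs(y) > d0:
        a = x2 + 0.5 * (a0 - d) * np.sign(y)
    else:
        a = x2 + y / h0
    if abs(a) > d:
        return -r * np.sign(a)
    return -r * a / d

def td_step(u1, u2, u0, r, h, h0):
    """One discrete TD step: u1 tracks u0, u2 estimates its derivative."""
    f = fhan(u1 - u0, u2, r, h0)
    return u1 + h * u2, u2 + h * f
```

The same `fal` is reused by the ESO and NLSEF updates; the linear region near zero avoids the chattering a pure power law would cause at small errors.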

To optimize the ADRC parameters ($\beta_1$, $\beta_2$, $\beta_{01}$, $\beta_{02}$, $\beta_{03}$), we use an improved crowd search algorithm with an additional inertia term. The algorithm initializes a population of individuals and updates their positions based on fitness evaluation. The step size and direction are modified using inertia to enhance convergence. The position update is given by:

$$x_{ij}(t+1) = x_{ij}(t) + d_{ij}(t) \cdot N(0,1)$$

$$d_{ij}(t) = p_{ij} \cdot \phi_{ij}$$

where $x_{ij}(t)$ is the position of the $i$-th individual in dimension $j$, $d_{ij}(t)$ is the step size, $p_{ij}$ is a linear membership function value, $\phi_{ij}$ is the fitness value, and $N(0,1)$ is a standard normal random number. This approach ensures rapid and accurate parameter tuning for the quadcopter's attitude control.
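One population update can be sketched as below. This is illustrative only: the paper does not give the exact membership function or how the inertia term enters, so here the previous step is blended in with weight `w` (the 0.3 inertia weight from the experimental setup), and `p` and `phi` are supplied as placeholder arrays.

```python
import numpy as np

rng = np.random.default_rng(0)

def seeker_step(x, d_prev, p, phi, w=0.3):
    """One position update of the crowd (seeker) search with an added
    inertia term.

    x:      (pop, dim) current positions
    d_prev: (pop, dim) previous step sizes (the inertia memory)
    p, phi: membership values and fitness-derived factors (placeholders)
    w:      inertia weight on the previous step
    """
    d = w * d_prev + p * phi                      # step size with inertia
    return x + d * rng.standard_normal(x.shape), d
```

Each candidate parameter vector $(\beta_1, \beta_2, \beta_{01}, \beta_{02}, \beta_{03})$ would be scored by simulating the closed-loop attitude response and measuring a tracking-error cost.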

We conducted experiments using a MATLAB-based simulation model of the quadcopter drone. The key parameters are summarized in Table 1.

Table 1: Quadcopter Drone Parameters

| Parameter | Value |
|---|---|
| Anti-torque coefficient (N·m·s²·rad⁻²) | 1.0 × 10⁻⁶ |
| Mass (kg) | 0.70 |
| Lift coefficient (N·s²·rad⁻²) | 1.0 × 10⁻⁵ |
| Moment of inertia about x-axis (kg·m²) | 7.6 × 10⁻³ |
| Moment of inertia about y-axis (kg·m²) | 7.6 × 10⁻³ |
| Moment of inertia about z-axis (kg·m²) | 1.5 × 10⁻³ |
| Distance from rotor center to body center (m) | 0.25 |
| Gravitational acceleration (m/s²) | 9.8 |

The quadcopter was set to take off from a height of 5 meters and land at a speed of 0.2 m/s. The image acquisition frame rate was 30 fps, with a TLD learning rate of 0.01. The extended Kalman filter noise covariance was set to 0.05, and the attitude control cycle was 0.01 seconds. The additional inertia term weight was 0.3, and the ADRC observer bandwidth was 5 Hz.

During autonomous landing, the TLD algorithm successfully tracked the target even under partial occlusions. The average center offset, calculated as the Euclidean distance between the tracked and actual target positions, was used to evaluate tracking performance. For multiple test datasets, the results are shown in Table 2.

Table 2: Average Center Offset Across Datasets

| Dataset | Average Center Offset (pixels) |
|---|---|
| 01 | 1.98 |
| 02 | 1.45 |
| 03 | 1.12 |
| 04 | 0.99 |
| 05 | 1.23 |
The average center offset was at most 1.98 pixels on every dataset, demonstrating high tracking accuracy. For attitude control, the roll, pitch, and yaw angles were monitored during landing. The desired values and deviations are summarized in Table 3.

Table 3: Attitude Angle Deviations

| Angle | Desired Value (degrees) | Maximum Deviation (degrees) |
|---|---|---|
| Roll | 0.00 | 0.018 |
| Pitch | 0.00 | 0.015 |
| Yaw | 0.00 | 0.020 |

The quadcopter maintained stable attitude with deviations within 0.02 degrees, ensuring a smooth landing. The control performance was further validated by comparing our method with existing approaches, such as adaptive backstepping and integral iterative learning control. Our method showed superior stability and accuracy, with minimal oscillations in attitude angles.

In conclusion, the integration of TLD algorithm and extended Kalman filtering provides robust target tracking for quadcopter drones during autonomous landing. The ADRC optimized by the improved crowd search algorithm ensures precise attitude control. Future work will focus on enhancing the algorithm’s robustness under extreme weather conditions and varying lighting. The proposed method offers a reliable solution for autonomous landing of quadcopter drones in complex environments.
