Trajectory Prediction and Interception Algorithm for Large Maneuvering Multi-rotor UAV

In recent years, the rapid development and widespread adoption of multirotor drones have introduced significant management and security challenges, particularly in restricted areas such as airports. Unauthorized drones operating in these zones pose serious risks, and traditional countermeasures often fail against drones that can fly in navigation-denied conditions, such as those using Visual-Inertial Odometry (VIO) or LiDAR-Inertial Odometry (LIO). Existing methods, including GPS spoofing and electromagnetic jamming, suffer from low interception success rates, short effective ranges, and an inability to handle high-speed, highly maneuverable targets. Moreover, emerging solutions that use multirotor platforms for precise interception remain largely theoretical or have been validated only in low-speed scenarios. This paper addresses these gaps by proposing a trajectory prediction and interception algorithm designed to counter highly maneuverable multirotor drones in denied environments. Our approach integrates low-cost omnidirectional perception through camera and LiDAR fusion with an Extended Kalman Filter (EKF) for accurate target state estimation. We introduce a trajectory prediction method for hit-point estimation and a dynamic control authority allocation technique that enables interception at maximum speed. Experimental results show that our algorithm achieves a target lock success rate exceeding 90% and an interception success rate over 85% against drones moving at speeds up to 12 m/s, demonstrating its effectiveness in real-world scenarios.

The core of our method lies in a robust navigation and control architecture. For navigation, we leverage a combination of visual and LiDAR sensors to achieve cost-effective, omnidirectional perception. By fusing high-frequency, low-accuracy visual data with low-frequency, high-accuracy LiDAR data, we obtain precise target position estimates and tracking. For control, we employ a dynamic authority allocation strategy based on trajectory prediction to effectively counter large maneuvering drones. This integrated approach ensures reliable interception even under aggressive evasion maneuvers. In the following sections, we detail the target position estimation process, the trajectory prediction interception mechanism, experimental validation, and a comparative analysis with existing methods.

Target Position Estimation

Accurate target position estimation is fundamental to successful interception. We assume the system is affected by zero-mean Gaussian white noise with known measurement covariances and use an EKF to estimate the target’s position. A linear constant velocity model with six state variables describes the target’s motion characteristics. Visual and LiDAR sensors provide low-cost omnidirectional perception, and the EKF estimates the target position and corresponding covariance.

Visual and LiDAR Target Detection

We use YOLOv5-n for visual detection because its efficiency and fast inference meet the real-time requirements of our platform. The model outputs bounding boxes in pixel coordinates, which are converted into real-world position estimates. LiDAR emits laser beams and analyzes the reflected signals to obtain precise range measurements in the form of point cloud data. However, point clouds have poor edge characteristics, making direct target identification challenging. We therefore use the visual detection bounding box to determine the target's azimuth, cluster the LiDAR points in that direction, and take the cluster centroid as the target's real-world coordinates, achieving centimeter-level accuracy.
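A minimal sketch of this azimuth-gated clustering step is shown below, assuming the point cloud is already expressed in the sensor frame; the clustering backend (DBSCAN), the gate width, and all function and parameter names are illustrative choices, not the exact implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # assumed clustering backend; the paper does not name one

def target_centroid_from_lidar(points, azimuth, elevation, gate_deg=5.0):
    """Cluster LiDAR points near the bearing given by visual detection and
    return the centroid of the nearest cluster as the target position.

    points    : (N, 3) array of LiDAR returns in the sensor frame [m]
    azimuth   : target azimuth from the YOLO bounding box [rad]
    elevation : target elevation from the bounding box [rad]
    gate_deg  : half-width of the angular gate around the visual bearing
    """
    az = np.arctan2(points[:, 1], points[:, 0])
    el = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
    gate = np.deg2rad(gate_deg)
    mask = (np.abs(az - azimuth) < gate) & (np.abs(el - elevation) < gate)
    gated = points[mask]
    if len(gated) < 5:                      # too few returns inside the gate
        return None

    labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(gated)
    best, best_range = None, np.inf
    for lab in set(labels) - {-1}:          # ignore DBSCAN noise label
        cluster = gated[labels == lab]
        centroid = cluster.mean(axis=0)
        rng = np.linalg.norm(centroid)
        if rng < best_range:                # keep the nearest cluster
            best, best_range = centroid, rng
    return best
```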

The LiDAR perception process is summarized as follows: visual detection provides the azimuth, LiDAR clustering yields the centroid, and this data is fused for estimation. Key sensor parameters are listed in Table 1.

Table 1: Sensor Parameters for Target Detection

| Sensor | Update Rate | Measurement |
|--------|-------------|-------------|
| Camera | 30 Hz       | Position    |
| LiDAR  | 10 Hz       | Position    |

Fused Target Position Estimation

Visual measurements, derived from deep learning inference on bounding boxes, estimate real-world coordinates based on pixel positions and camera parameters. Differentiating these positions provides velocity estimates. Although visual data is high-frequency, it has significant errors; LiDAR data offers high accuracy but at a lower frequency. We combine these using a sequential EKF to achieve high-frequency, high-precision target position estimation. The fused estimation model processes visual and LiDAR measurements sequentially to update the state estimate.

The process model for a rigid-body target defines the state vector as $\mathbf{x} = [\mathbf{p}^T, \mathbf{v}^T]^T = [p_x, p_y, p_z, v_x, v_y, v_z]^T \in \mathbb{R}^6$, where $\mathbf{p} = [p_x, p_y, p_z]^T$ is the target position and $\mathbf{v} = [v_x, v_y, v_z]^T$ its velocity. The continuous-time process model is:

$$ \dot{\mathbf{p}} = \mathbf{v} $$
$$ \dot{\mathbf{v}} = \mathbf{n}_a $$

where $\mathbf{n}_a$ is the process noise acting at the acceleration level. The discretized state equation is:

$$ \mathbf{x}_{k+1} = \mathbf{F} \mathbf{x}_k + \mathbf{n}_k $$
$$ \mathbf{F} = \begin{bmatrix} \mathbf{I} & T_s \mathbf{I} \\ \mathbf{0} & \mathbf{I} \end{bmatrix} $$

Here, $\mathbf{x}_{k+1}$ is the discrete state vector at time $k+1$, $\mathbf{F}$ is the state transition matrix, and $T_s$ is the sampling time.
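As an illustration, the sketch below builds $\mathbf{F}$ and a corresponding process noise covariance $\mathbf{Q}$ for this constant-velocity model; the acceleration noise level $\sigma_a$ is an assumed tuning parameter, not a value from the paper.

```python
import numpy as np

def make_cv_model(Ts, sigma_a=3.0):
    """Discrete constant-velocity model for the state x = [p; v] in R^6.

    Ts      : sampling time [s]
    sigma_a : assumed std-dev of the white acceleration noise n_a [m/s^2]
    Returns the transition matrix F and process noise covariance Q.
    """
    I3 = np.eye(3)
    F = np.block([[I3, Ts * I3],
                  [np.zeros((3, 3)), I3]])
    # Standard discretization of white acceleration noise into Q
    Q = sigma_a**2 * np.block([[Ts**4 / 4 * I3, Ts**3 / 2 * I3],
                               [Ts**3 / 2 * I3, Ts**2 * I3]])
    return F, Q

# Example: propagate a state one step at the 30 Hz camera rate
F, Q = make_cv_model(Ts=1.0 / 30.0)
x = np.array([10.0, 2.0, -1.0, 4.0, 0.0, 0.5])  # [px, py, pz, vx, vy, vz]
x_pred = F @ x
```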

The measurement model handles each sensor separately. For visual measurements, the imaging principle gives:

$$ \mathbf{p}_j = G(u, v, f, \text{Size}_{\text{real}}) $$

where $u$ and $v$ are pixel coordinates, $f$ is the focal length, and $\text{Size}_{\text{real}}$ is the actual target size, yielding the position estimate $\mathbf{p}_j$. The measurement model is:

$$ \mathbf{z}_k = h(\mathbf{x}_k) + \mathbf{n}_m $$

where $\mathbf{z}_k$ is the measurement output, and $\mathbf{n}_m$ is measurement noise with covariance $\mathbf{R}_k$.
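The paper does not spell out the form of $G$, but one common choice is a pinhole-camera, similar-triangles estimate that recovers depth from the known physical target size and the bounding-box size in pixels. The sketch below follows that assumption; all parameter names are illustrative.

```python
import numpy as np

def visual_position_estimate(u, v, bbox_width_px, f_px, cx, cy, size_real):
    """Approximate target position in the camera frame from a bounding box,
    assuming a pinhole camera and a known physical target size (one plausible
    form of the function G(u, v, f, Size_real)).

    u, v          : bounding-box center in pixels
    bbox_width_px : bounding-box width in pixels
    f_px          : focal length in pixels
    cx, cy        : principal point in pixels
    size_real     : physical target width [m]
    """
    depth = f_px * size_real / bbox_width_px          # similar triangles
    x = (u - cx) / f_px * depth
    y = (v - cy) / f_px * depth
    return np.array([x, y, depth])                    # camera-frame position p_j
```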

The EKF involves initialization, prior estimation, and measurement update. The prior state estimate and covariance propagation are:

$$ \hat{\mathbf{x}}_{k|k-1} = \mathbf{F} \hat{\mathbf{x}}_{k-1|k-1} $$
$$ \hat{\mathbf{P}}_{k|k-1} = \mathbf{F} \hat{\mathbf{P}}_{k-1|k-1} \mathbf{F}^T + \mathbf{Q}_{k-1} $$

where $\hat{\mathbf{x}}_{k|k-1}$ is the prior state estimate, $\hat{\mathbf{P}}_{k|k-1}$ is the prior covariance, and $\mathbf{Q}_{k-1}$ is process noise covariance. The measurement update is:

$$ \mathbf{K}_k = \hat{\mathbf{P}}_{k|k-1} \mathbf{H}_k^T (\mathbf{H}_k \hat{\mathbf{P}}_{k|k-1} \mathbf{H}_k^T + \mathbf{R}_k)^{-1} $$
$$ \hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k [\mathbf{z}_k - h(\hat{\mathbf{x}}_{k|k-1})] $$
$$ \hat{\mathbf{P}}_{k|k} = (\mathbf{I} - \mathbf{K}_k \mathbf{H}_k) \hat{\mathbf{P}}_{k|k-1} $$

where $\mathbf{K}_k$ is the Kalman gain, $\mathbf{H}_k$ is the observation matrix, and $\mathbf{R}_k$ is measurement noise covariance.
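A compact sketch of these prediction and update steps follows. Since both sensors ultimately supply position measurements, the observation reduces to a linear selection of $\mathbf{p}$ from the state in this sketch, while the general $h$ and its Jacobian are kept in the interface; this is an illustration, not the flight code.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Prior state estimate and covariance propagation."""
    x_prior = F @ x
    P_prior = F @ P @ F.T + Q
    return x_prior, P_prior

def ekf_update(x_prior, P_prior, z, h, H, R):
    """Measurement update with observation function h and Jacobian H."""
    y = z - h(x_prior)                               # innovation
    S = H @ P_prior @ H.T + R                        # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_post = x_prior + K @ y
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

# Both sensors observe position, so h simply picks out p from x = [p; v]
H_pos = np.hstack([np.eye(3), np.zeros((3, 3))])
h_pos = lambda x: H_pos @ x
```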

For LiDAR sequential measurement updates with delay compensation, we adjust the state estimate to account for sensor latency:

$$ \hat{\mathbf{x}}_{k|k} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{K}_k [\mathbf{z}_k - \hat{\mathbf{z}}_k] $$
$$ \hat{\mathbf{z}}_k = h(\hat{\mathbf{x}}_{k|k-1 – T_{\text{latency}}}) $$

where $T_{\text{latency}}$ is the aiding sensor latency. This fusion approach enhances estimation accuracy by approximately 30%, as demonstrated in simulations.
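One plausible realization of this delayed update keeps a short buffer of prior states and evaluates the predicted measurement at the state closest to the LiDAR timestamp, as sketched below; the buffering scheme and class interface are assumptions, since the paper only gives the update equation.

```python
from collections import deque

import numpy as np

class DelayedLidarUpdater:
    """Sequential LiDAR update that evaluates the predicted measurement
    z_hat = h(x_{k|k-1 - T_latency}) from a short history of prior states.
    This is one plausible realization of the delay compensation."""

    def __init__(self, H, R, Ts, latency):
        self.H, self.R = H, R
        self.delay_steps = int(round(latency / Ts))    # latency in filter steps
        self.history = deque(maxlen=self.delay_steps + 1)

    def push_prior(self, x_prior):
        self.history.append(x_prior.copy())            # store each prior estimate

    def update(self, x_prior, P_prior, z):
        if len(self.history) <= self.delay_steps:
            return x_prior, P_prior                    # not enough history yet
        x_delayed = self.history[0]                    # state ~T_latency ago
        z_hat = self.H @ x_delayed                     # predicted delayed measurement
        S = self.H @ P_prior @ self.H.T + self.R
        K = P_prior @ self.H.T @ np.linalg.inv(S)
        x_post = x_prior + K @ (z - z_hat)
        P_post = (np.eye(len(x_prior)) - K @ self.H) @ P_prior
        return x_post, P_post
```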

Trajectory Prediction Interception

When a multirotor drone faces interception threats, it typically executes maximum acceleration maneuvers in its body frame, such as full roll, pitch, or lift, approximating circular motion. Our trajectory prediction method estimates the hit point and employs dynamic control authority allocation for maximum-speed interception.

Hit-Point Prediction

We discretize the target’s trajectory. Let $k$ denote the time step, $\mathbf{p}_k$ the target position, and $\mathbf{V}_k$ the velocity. The target position at step $k+1$ is:

$$ \mathbf{p}_{k+1} = \mathbf{p}_k + \mathbf{V}_k \cdot \Delta t $$

Accumulating these increments, the position at step $k$ can be written as:

$$ \mathbf{p}_k = \mathbf{p}_0 + \sum_{n=1}^{k} \mathbf{V}_n \cdot \Delta t $$

Interception occurs after $n$ steps when the interceptor reaches the target:

$$ v_{\text{interceptor}} \cdot n \cdot \Delta t = d $$

where $d$ is the distance. In practice, the interceptor accelerates to a preset speed $v_l$ and maintains constant velocity. The target position at step $k$ is $\mathbf{p}_k = (x_k, y_k, z_k)$, and for any step $k$, the distance covered by the interceptor is:

$$ v_l \cdot k \cdot \Delta t = d_k $$

Let the current time be $k \Delta t$ and let $k \Delta t + T$ be the interception time. Requiring the interceptor, flying in a straight line at speed $v_l$, to reach the predicted target position (expressed relative to the interceptor's starting point) yields a quadratic in $T$:

$$ aT^2 + bT + c = 0 $$
$$ a = v_l^2 - v_{kx}^2 - v_{ky}^2 - v_{kz}^2 $$
$$ b = 2 v_l^2 k \Delta t - 2 (x_k v_{kx} + y_k v_{ky} + z_k v_{kz}) $$
$$ c = v_l^2 (k \Delta t)^2 - (x_k^2 + y_k^2 + z_k^2) $$

Solving this quadratic gives the remaining time $T$ to interception. Future target velocities are propagated using a least-squares-fitted velocity prediction matrix $\mathbf{R}$:

$$ \mathbf{V}_{k+1} = \mathbf{R} \mathbf{V}_k $$
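For illustration, the sketch below solves a simplified, relative-position form of the intercept quadratic and then propagates the target forward with $\mathbf{V}_{k+1} = \mathbf{R}\mathbf{V}_k$. The default $\mathbf{R} = \mathbf{I}$ corresponds to a constant-velocity assumption; the paper's least-squares construction of $\mathbf{R}$ is not reproduced here.

```python
import numpy as np

def time_to_intercept(p_rel, v_target, v_l):
    """Solve the intercept quadratic for the remaining time T, using the
    target position relative to the interceptor's current position
    (a simplified form of the equation in the text).

    p_rel    : (3,) target position minus interceptor position [m]
    v_target : (3,) predicted target velocity [m/s]
    v_l      : interceptor cruise speed [m/s]
    Returns the smallest positive root, or None if no intercept exists.
    """
    a = v_l**2 - v_target @ v_target
    if abs(a) < 1e-9:
        return None                          # target as fast as the interceptor
    b = -2.0 * (p_rel @ v_target)
    c = -(p_rel @ p_rel)
    disc = b**2 - 4 * a * c
    if disc < 0:
        return None
    roots = [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else None

def predict_hit_point(p_rel, v_target, v_l, R=np.eye(3), dt=0.05):
    """Propagate the target with V_{k+1} = R V_k until the intercept time.
    R = I is a constant-velocity assumption; in the paper R is fitted to
    recent velocity history by least squares."""
    T = time_to_intercept(p_rel, v_target, v_l)
    if T is None:
        return None
    p, v, t = p_rel.copy(), v_target.copy(), 0.0
    while t < T:
        p = p + v * dt
        v = R @ v
        t += dt
    return p   # predicted hit point relative to the interceptor
```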

Dynamic Control Authority Allocation Method

To eliminate the position error between the interceptor's current position and the predicted hit point while flying at maximum speed, we design a dynamic control authority allocation strategy. The interceptor is commanded to fly toward the predicted hit point at its maximum speed $v_{\text{max}}$. The velocity command is:

$$ \mathbf{v}_{\text{target}} = \mathbf{v}_{\text{predict}} $$
$$ \mathbf{v}_{\text{error}} = \mathbf{v}_{\text{target}} – \mathbf{v}_{\text{interceptor}} $$

For a multirotor drone, the thrust signal for each actuator is:

$$ T_i = \eta_x T_{\text{roll},i} + \eta_y T_{\text{pitch},i} + \eta_z T_{\text{yaw},i} + a_1 T_{\text{tot}} + a_2 T_{\text{hover},i} $$

where $T_i$ is the thrust command for actuator $i$, the $\eta$ coefficients set the priority of the control channels, and $a_1$, $a_2$ are the dynamic allocation parameters. The yaw and vertical channels have slower natural responses and require lower bandwidth, whereas the roll and pitch channels are critical for stability. We therefore optimize $a_1$ and $a_2$ by boundary search: whenever an actuator's thrust command exceeds its limit, we reduce $a_1$ or $a_2$ so that the attitude channels are not starved of authority. The algorithm iteratively adjusts these parameters to keep the control signals symmetric and disturbance-resistant, as outlined in Table 2 and sketched in the code after it.

Table 2: Dynamic Control Authority Allocation Algorithm

| Step | Action |
|------|--------|
| 1 | Initialize $a_1 = 1$, $a_2 = 1$ |
| 2 | For each actuator, check the thrust limits |
| 3 | If an actuator's thrust exceeds its maximum and the total-thrust contribution is positive, compute $\lambda_i$ |
| 4 | Update $a_2 \leftarrow \min_i \lambda_i$ |
| 5 | For each actuator, if its thrust exceeds its maximum and the yaw contribution is positive, compute $\lambda_i$ |
| 6 | Update $a_1 \leftarrow \min_i \lambda_i$ |
| 7 | Repeat until the parameters stabilize |
This approach ensures that control signals remain symmetric under vibrations, enhancing disturbance rejection and stable control.
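A minimal sketch of this boundary search is given below. It assumes per-actuator contribution vectors are already available from the mixer, pairs $a_1$ with $T_{\text{tot}}$ and $a_2$ with $T_{\text{hover},i}$ following the thrust equation above, and, since the exact form of $\lambda_i$ is not given, takes it as the scale that brings an over-limit actuator back to its maximum; all names and limits are illustrative.

```python
import numpy as np

def allocate_thrust(T_roll, T_pitch, T_yaw, T_tot, T_hover,
                    eta=(1.0, 1.0, 1.0), T_max=1.0, max_iter=10):
    """Boundary-search sketch of the allocation in Table 2.

    T_roll, T_pitch, T_yaw : per-actuator attitude contributions (arrays)
    T_tot, T_hover         : per-actuator total-thrust and hover contributions
    eta                    : channel priority coefficients (eta_x, eta_y, eta_z)
    T_max                  : per-actuator thrust limit
    Returns the adjusted allocation parameters (a1, a2).
    """
    a1, a2 = 1.0, 1.0
    ex, ey, ez = eta
    for _ in range(max_iter):
        T = ex * T_roll + ey * T_pitch + ez * T_yaw + a1 * T_tot + a2 * T_hover
        over = T > T_max
        if not np.any(over):
            break                                   # all actuators within limits
        # Steps 3-4: relieve the hover contribution first (shrink a2)
        idx = np.where(over & (T_hover > 0))[0]
        if len(idx) > 0 and a2 > 0:
            a2_new = min((T_max - (T[i] - a2 * T_hover[i])) / T_hover[i] for i in idx)
            a2 = float(np.clip(a2_new, 0.0, a2))
            continue
        # Steps 5-6: then relieve the total-thrust contribution (shrink a1)
        idx = np.where(over & (T_tot > 0))[0]
        if len(idx) > 0 and a1 > 0:
            a1_new = min((T_max - (T[i] - a1 * T_tot[i])) / T_tot[i] for i in idx)
            a1 = float(np.clip(a1_new, 0.0, a1))
    return a1, a2
```

The roll and pitch contributions are never scaled, which preserves attitude authority and keeps the actuator commands symmetric when saturation occurs.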

Experimental Setup and Result Analysis

We conducted experiments to validate the method's feasibility, accuracy, and practicality. First, we simulated the fused position estimation in Python to verify its effectiveness. Second, we performed real-world interception tests using a high-speed UVC camera and a Livox Mid-360 LiDAR for sensing, a Jetson Orin NX as the onboard AI platform, ROS for communication, and PX4 as the flight controller. Data were visualized in RVIZ, and flight data were analyzed in MATLAB. Due to cost constraints, we employed balloon interception, a common practice in drone countermeasure research. The interceptor was a Unionsys-350 drone (350 mm wheelbase, able to carry a payload of at least 300 g at high speed), and the target was a Yunzhou MX450 drone. Commercial drones such as the DJI Mavic 3 and Yunzhou MX450 have maximum speeds of approximately 12 m/s; our interceptor can reach 20 m/s but was limited to 12 m/s for testing.

The experimental architecture mounted the sensors on the drone for omnidirectional perception, and the interceptor ran the proposed algorithm onboard. We tested against a target executing a spiraling maneuver at 12 m/s. The interceptor took off, locked onto the target, predicted its trajectory, and successfully intercepted it. With the detection confidence threshold set to 50%, the target lock success rate was 99.4%. Figure 7 shows target locking during interception, Figure 8 shows the predicted hit point (green) and the velocity control point (purple, 5 m behind the hit point), and Figure 9 shows the interceptor and target trajectories during interception. The process involved three phases: initial takeoff with a rough hit-point prediction, a terminal phase with refined prediction, and the final interception. LiDAR sensing maintained omnidirectional lock after visual acquisition, and the fused estimation provided high precision.

Table 3 compares our method with existing approaches in terms of interception speed, perception angle, and lock success rate. Our algorithm supports higher interception speeds and full 360° perception. Table 4 presents success rates from simulations and real flights. Simulations achieved 100% success, while real flights achieved a first-attempt success rate above 85%, with the failed attempts attributable to balloon instability under wind and propeller wash. Omnidirectional LiDAR perception allowed re-planning and subsequent interception of the missed targets, bringing the overall success rate to 100%.

Table 3: Comparison with Existing Interception Methods

| Interception Method | Interception Speed | Perception Angle | Lock Success Rate |
|---------------------|--------------------|------------------|-------------------|
| Visual Servoing | 2 m/s | 120° | 75.23% |
| Electro-Optical Pod + PN Guidance | <8 m/s | 120° | |
| Fused Perception + Trajectory Prediction (ours) | >12 m/s | 360° | 99.4% |

Table 4: Success Rates in Simulation and Real Experiments

| Scenario | First-Attempt Success Rate | Overall Success Rate |
|----------|----------------------------|----------------------|
| Simulation | 100% | 100% |
| Real Flight | >85% | 100% |

The fused estimation improved accuracy by approximately 30% in simulations, as shown in Figure 6, where LiDAR data compensated for visual errors. The trajectory prediction effectively handled high-speed maneuvers, and dynamic control allocation maintained stability during interception.

Conclusion

This paper presents a trajectory prediction and interception algorithm for highly maneuverable multirotor drones, addressing the limitations of existing methods in high-maneuverability scenarios. Our target detection solution offers compact, low-cost omnidirectional perception compared with traditional electro-optical pods, at lower weight (under 300 g versus 1 kg) and with reduced computational load. By fusing visual and LiDAR data in an EKF, we achieve high-precision compensation beyond visual-only approaches. The trajectory prediction method and dynamic control authority allocation enable effective interception at maximum speed, validated through experiments. Results confirm that our approach reliably counters high-speed drones operating in navigation-denied environments, with a lock success rate over 90% and an interception success rate above 85%. The main innovations are sensor fusion for perception, trajectory prediction for accuracy, and a cost-effective payload configuration, advancing both performance and practicality in drone countermeasures. Future work will focus on scaling the method to faster targets and integrating additional sensors to improve robustness in diverse environments.
