Design of an Autonomous Positioning Control System for Airborne Remote Sensing Operations of Small Multi-Rotor UAVs

The rapid advancement of unmanned aerial vehicle (UAV) technology has profoundly impacted numerous sectors. Among the available platforms, small multi-rotor UAVs stand out for aerial remote sensing due to their low cost, high maneuverability, and rapid deployment. High-quality remote sensing missions hinge on precise and reliable airborne positioning, which directly dictates the accuracy and usability of the acquired image data. However, traditional positioning methods often struggle to remain stable and are susceptible to interference in complex, dynamic operational environments characterized by variable lighting, wind gusts, and electromagnetic noise. As industries demand higher levels of automation, accuracy, and real-time performance from remote sensing data, there is a pressing need for an autonomous, robust, and efficient positioning control system. Such a system would enable small multi-rotor UAVs to perform positioning and navigation tasks independently across diverse scenarios, thereby enhancing both data quality and operational efficiency. This paper addresses this need by presenting the design and implementation of a comprehensive autonomous positioning control system tailored for small multi-rotor UAV platforms engaged in aerial remote sensing.

The proposed system is architected to overcome the limitations of existing approaches: poor adaptability to complex environments, low autonomous positioning accuracy, and inefficient multi-sensor collaborative control. It integrates advanced hardware with innovative algorithmic modules to form a cohesive solution. The system’s performance is validated through field experiments, demonstrating its superiority in dynamic environment suppression, high-precision positioning, and efficient task execution. This work contributes to the technological advancement of intelligent UAV platforms, facilitating their scalable application in critical fields such as disaster monitoring, precision agriculture, topographic mapping, and infrastructure inspection.

1. System Architecture and Hardware Design for Airborne Control

The foundation of the autonomous control system is a meticulously designed airborne operation control module. This module integrates several key hardware components to form a robust computational and sensory platform for the UAV. The overall hardware architecture is modular, ensuring reliability, real-time performance, and ease of integration.

The core computational unit is the main controller, selected for its balance of performance and power efficiency. For this system, the STM32F405RGT6 microcontroller, built around a high-performance ARM Cortex-M4 core with a Floating-Point Unit (FPU) and operating at 168 MHz, serves as the central processing hub. It executes critical flight control algorithms, including those for the dual-loop PID controller discussed later, and manages sensor data fusion. Its substantial memory (1 MB Flash, 192 KB SRAM) and rich peripheral set are crucial for handling complex tasks. The interaction design between the STM32F405RGT6 and other system components is summarized in Table 1.

Table 1: Peripheral Interaction Design of the STM32F405RGT6 Main Controller
| No. | Connected Component | STM32F405RGT6 Peripheral(s) Used | Primary Function |
| --- | --- | --- | --- |
| 1 | Drone motors (via ESCs) | Timer (TIM) PWM channels on the 51 programmable I/O pins; 3× 16-channel 12-bit ADCs and 2 DACs for analog feedback and reference signals | Generate PWM signals for motor speed control and monitor analog feedback. |
| 2 | Sensors, communication hardware, ground station | 3× I2C, 3× SPI, 6× USART/UART interfaces | Facilitate high-speed data exchange and command communication. |

An external 8 MHz crystal oscillator provides a precise clock source. Load capacitors around the oscillator circuit stabilize the clock signal, ensuring accurate synchronization for all digital operations within the main controller and connected peripherals.

For inertial sensing, a high-performance MEMS-based Inertial Measurement Unit (IMU) is integrated. It provides high-frequency measurements of linear acceleration and angular velocity, which are fundamental for state estimation and control. The sensor suite is expanded with complementary devices to ensure robustness and accuracy:

  • RTK-GPS Module: Provides global positioning with centimeter-level accuracy when signals are available, serving as a primary absolute positioning source and a constraint to eliminate long-term drift.
  • 3D LiDAR Sensor: Captures dense point clouds of the surrounding environment, enabling precise local mapping and feature extraction for localization in GPS-denied areas.
  • Monocular or Stereo Vision Sensor: Provides rich visual texture information, used for visual odometry and feature tracking.
  • Barometer: Delivers altitude measurements by sensing atmospheric pressure changes, crucial for height control.

The power subsystem is critical to the endurance and stability of the UAV. A dedicated power management circuit provides clean, stable voltage rails to all components. As illustrated in the schematic, the circuit employs two-stage filtering with CD11 aluminum electrolytic capacitors to smooth the input voltage and suppress noise. A linear voltage regulator (AMS1117-3.3) steps the input down to a stable 3.3 V for the core logic, including the STM32 controller. A high-efficiency switching regulator (LM5017) supplies the power-hungry components, optimizing overall energy consumption and extending flight time, a key advantage in practical operations.

For robust communication with the ground control station (GCS), the compact and cost-effective EC800M wireless communication module is selected. It supports 4G LTE connectivity, ensuring reliable long-range telemetry and command transmission even without direct radio line-of-sight, which is essential for beyond-visual-line-of-sight (BVLOS) operations.

2. Core Algorithmic Modules for Positioning and Control

2.1. Laser-Visual-Inertial Tightly-Coupled Positioning

A primary innovation of this system is the “Laser-Visual-Inertial” tightly-coupled positioning algorithm. It fuses data from the LiDAR, vision sensor, and IMU within a unified optimization framework to overcome the limitations of loosely coupled or single-sensor approaches, and is particularly effective for UAVs operating in complex, feature-varying environments. The process involves three main steps: IMU pre-integration, joint feature extraction, and tightly-coupled optimization.

Step 1: MEMS IMU Pre-integration. To handle the inherent asynchrony between the lower-frequency LiDAR/vision frames and the high-frequency IMU data, motion increments between sensor frames are pre-integrated from the IMU stream. For the interval between times \(i\) and \(j\), the relative position change \(\Delta \mathbf{a}_{ij}\), velocity change \(\Delta \mathbf{u}_{ij}\), and rotation change \(\Delta \mathbf{b}_{ij}\) are calculated as:
$$
\begin{aligned}
\Delta \mathbf{a}_{ij} &= \sum_{l=i}^{j-1} \left[ \Delta \mathbf{u}_{il} \Delta t + \frac{1}{2} \mathbf{G}_l \left( \hat{\boldsymbol{\alpha}}_l - \boldsymbol{\beta}_{c,l} \right) \Delta t^2 \right] \\
\Delta \mathbf{u}_{ij} &= \sum_{l=i}^{j-1} \mathbf{G}_l \left( \hat{\boldsymbol{\alpha}}_l - \boldsymbol{\beta}_{c,l} \right) \Delta t \\
\Delta \mathbf{b}_{ij} &= \prod_{l=i}^{j-1} \text{Exp}\left( \left( \hat{\boldsymbol{\chi}}_l - \boldsymbol{\beta}_{g,l} \right) \Delta t \right)
\end{aligned}
$$
where \(\Delta t\) is the IMU sampling period, \(\mathbf{G}_l\) is the rotation matrix from the IMU frame to the world frame at time \(l\), \(\hat{\boldsymbol{\alpha}}_l\) and \(\hat{\boldsymbol{\chi}}_l\) are the raw accelerometer and gyroscope measurements, \(\boldsymbol{\beta}_{c,l}\) and \(\boldsymbol{\beta}_{g,l}\) are the accelerometer and gyroscope biases, and \(\text{Exp}(\cdot)\) is the exponential map onto SO(3). The true linear acceleration \(\boldsymbol{\alpha}_l\) and angular velocity \(\boldsymbol{\chi}_l\) are recovered from the raw measurements by removing the biases and noise:
$$
\boldsymbol{\alpha}_l = \left( \hat{\boldsymbol{\alpha}}_l - \boldsymbol{\beta}_{c,l} - \mathbf{m}_a \right) + \mathbf{G}_l^T \mathbf{g}, \quad \boldsymbol{\chi}_l = \hat{\boldsymbol{\chi}}_l - \boldsymbol{\beta}_{g,l} - \mathbf{m}_g
$$
where \(\mathbf{g}\) is the gravity vector, and \(\mathbf{m}_a, \mathbf{m}_g\) represent Gaussian white noise.
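
To make the pre-integration step concrete, the following is a minimal NumPy sketch of the accumulation loop, assuming \(\text{Exp}(\cdot)\) is implemented via Rodrigues’ formula and the rotation is accumulated relative to frame \(i\); the function and variable names (preintegrate, accel, gyro, bias_a, bias_g) are illustrative, not the paper’s implementation.

```python
import numpy as np

def so3_exp(phi):
    """Exponential map from a rotation vector onto SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(accel, gyro, bias_a, bias_g, dt):
    """Accumulate the position, velocity, and rotation increments (Δa, Δu, Δb)
    between two sensor frames from (N, 3) arrays of raw IMU samples."""
    delta_p, delta_v, delta_R = np.zeros(3), np.zeros(3), np.eye(3)
    for a_hat, w_hat in zip(accel, gyro):
        a = delta_R @ (a_hat - bias_a)               # bias-corrected, rotated accel
        delta_p += delta_v * dt + 0.5 * a * dt**2    # position increment
        delta_v += a * dt                            # velocity increment
        delta_R = delta_R @ so3_exp((w_hat - bias_g) * dt)  # rotation increment
    return delta_p, delta_v, delta_R
```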

Step 2: Laser-Visual Feature Joint Extraction. Heterogeneous features are extracted from LiDAR point clouds and visual images. For LiDAR, each point \(\mathbf{s}_k\) is analyzed by calculating the curvature \(z_k\) of its local neighborhood \(M_k\):
$$
z_k = \frac{1}{|\mathbf{s}_k| \cdot |M_k|} \left\| \sum_{\mathbf{s}_i \in M_k} (\mathbf{s}_i – \mathbf{s}_k) \right\|
$$
Points are classified as edge features if \(z_k > z_{\text{edge}}\) and as plane features if \(z_k < z_{\text{plane}}\), where \(z_{\text{edge}}\) and \(z_{\text{plane}}\) are predefined thresholds. These features are transformed to the world frame using the current LiDAR pose estimate \(\boldsymbol{\epsilon}_{L \rightarrow W}\) and used to build/update a local map \(\mathcal{N}_t\): \(\mathbf{s}_{k,i}^W = \boldsymbol{\epsilon}_{L \rightarrow W} \cdot \mathbf{s}_{k,i}^L\).
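
As a sketch of this classification rule, the snippet below computes \(z_k\) for each point given precomputed neighborhoods (e.g., from a KD-tree radius search, which the paper does not specify) and applies the two thresholds from Table 2; the names are illustrative.

```python
import numpy as np

def classify_lidar_features(points, neighborhoods, z_edge=0.6, z_plane=0.2):
    """Label each point 'edge', 'plane', or None from the curvature z_k.

    points: (N, 3) scan points; neighborhoods: per-point index arrays M_k.
    """
    labels = []
    for k, idx in enumerate(neighborhoods):
        s_k = points[k]
        # z_k = ||sum of offsets to neighbors|| / (||s_k|| * |M_k|)
        z_k = np.linalg.norm(np.sum(points[idx] - s_k, axis=0)) \
              / (np.linalg.norm(s_k) * len(idx))
        labels.append("edge" if z_k > z_edge else "plane" if z_k < z_plane else None)
    return labels
```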

For vision, Oriented FAST and Rotated BRIEF (ORB) features are detected and tracked across consecutive frames using the Lucas-Kanade optical flow method:
$$
\min_{\Delta X, \Delta Y} \sum_{(X,Y) \in w(\mathbf{g}_{c,j}^t)} \left[ F_{t+1}(X+\Delta X, Y+\Delta Y) - F_t(X, Y) \right]^2
$$
where \(w(\mathbf{g}_{c,j}^t)\) is a local window around feature point \(\mathbf{g}_{c,j}^t\) in frame \(t\), and \(F\) represents image intensity. The 3D positions of matched visual features are recovered via triangulation. Correspondences are established between visual feature points \(\{\boldsymbol{\psi}_{l,i}\}\) and their geometrically associated LiDAR feature points \(\{\boldsymbol{\psi}_{c,j}\}\) in the local map.
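
A minimal OpenCV sketch of this detect-and-track step is shown below; the window size and pyramid depth are illustrative defaults, not values reported by the paper.

```python
import cv2
import numpy as np

def track_orb_features(frame_prev, frame_next, max_features=500):
    """Detect ORB corners in frame t and track them into frame t+1 with
    pyramidal Lucas-Kanade optical flow; returns the matched point pairs."""
    orb = cv2.ORB_create(nfeatures=max_features)
    keypoints = orb.detect(frame_prev, None)
    pts_prev = cv2.KeyPoint_convert(keypoints).reshape(-1, 1, 2).astype(np.float32)

    # LK minimizes the windowed intensity residual given in the text.
    pts_next, status, _err = cv2.calcOpticalFlowPyrLK(
        frame_prev, frame_next, pts_prev, None, winSize=(21, 21), maxLevel=3)

    good = status.ravel() == 1
    return pts_prev[good].reshape(-1, 2), pts_next[good].reshape(-1, 2)
```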

Step 3: Tightly-Coupled Optimization. All sensor data are jointly optimized within a sliding window to estimate the optimal state \(\boldsymbol{\zeta}_k^*\) of the UAV. The optimization minimizes a sum of robustified residuals:
$$
\begin{aligned}
\boldsymbol{\zeta}_k^* = \underset{\boldsymbol{\zeta}}{\arg\min} & \left\{ \frac{1}{2} \sum_{i} \rho \left( \left\| \boldsymbol{\upsilon}_{L,i}(\boldsymbol{\zeta}) \right\|_{\boldsymbol{\Sigma}_L}^2 \right) + \frac{1}{2} \sum_{j} \rho \left( \left\| \boldsymbol{\upsilon}_{C,j}(\boldsymbol{\zeta}) \right\|_{\boldsymbol{\Sigma}_C}^2 \right) \right. \\
& \left. + \frac{1}{2} \sum_{k} \rho \left( \left\| \boldsymbol{\upsilon}_{I,k}(\boldsymbol{\zeta}) \right\|_{\boldsymbol{\Sigma}_I}^2 \right) + \frac{1}{2} \rho \left( \left\| \boldsymbol{\upsilon}_{G,k}(\boldsymbol{\zeta}) \right\|_{\boldsymbol{\Sigma}_G}^2 \right) \right\}
\end{aligned}
$$
where \(\rho(\cdot)\) is the Huber robust kernel, and \(\boldsymbol{\upsilon}_{L,i}, \boldsymbol{\upsilon}_{C,j}, \boldsymbol{\upsilon}_{I,k}, \boldsymbol{\upsilon}_{G,k}\) are the residuals for the LiDAR, visual, IMU pre-integration, and RTK-GPS measurements, respectively, with \(\boldsymbol{\Sigma}\) denoting their covariance matrices. The LiDAR residual for a plane feature is the point-to-plane distance \(\boldsymbol{\upsilon}_{L,i}(\boldsymbol{\zeta}) = \mathbf{m}_i^T \left( \mathbf{R}(\boldsymbol{\zeta}) \boldsymbol{\psi}_{c,j} + \mathbf{t}(\boldsymbol{\zeta}) \right) + d_i\), where \(\mathbf{m}_i\) and \(d_i\) are the plane’s unit normal and offset, and \(\mathbf{R}(\boldsymbol{\zeta}), \mathbf{t}(\boldsymbol{\zeta})\) are the rotation and translation implied by the state. The visual residual is the reprojection error \(\boldsymbol{\upsilon}_{C,j}(\boldsymbol{\zeta}) = \boldsymbol{\sigma}_{c,j}^t - \pi\left( \mathbf{R}(\boldsymbol{\zeta}) \boldsymbol{\psi}_{l,i} + \mathbf{t}(\boldsymbol{\zeta}) \right)\), with \(\pi(\cdot)\) the camera projection function. The IMU residual \(\boldsymbol{\upsilon}_{I,k}(\boldsymbol{\zeta})\) is the difference between the motion increments predicted from the states in the window and the pre-integrated terms \([\Delta \mathbf{a}_{ij}^T, \Delta \mathbf{u}_{ij}^T, \Delta \mathbf{b}_{ij}^T]^T\). When available, the RTK-GPS residual \(\boldsymbol{\upsilon}_{G,k}(\boldsymbol{\zeta}) = \mathbf{p}(\boldsymbol{\zeta}) - \boldsymbol{\rho}_{gps,k}\), the difference between the position components of the state and the RTK fix, provides a strong global constraint that corrects cumulative drift. The state vector \(\boldsymbol{\zeta}\) is defined as:
$$
\boldsymbol{\zeta} = [X, Y, Z, \theta_R, \theta_P, \theta_Y]^T
$$
representing the 3D position \((X, Y, Z)\) and attitude (roll \(\theta_R\), pitch \(\theta_P\), yaw \(\theta_Y\)) of the UAV. The optimization is solved iteratively with the Levenberg-Marquardt algorithm, yielding a high-accuracy, robust pose estimate.
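
The sketch below illustrates the robust least-squares structure for a single pose with LiDAR point-to-plane residuals and an optional RTK-GPS position prior. It uses SciPy’s Huber loss, which requires the “trf” solver rather than the Levenberg-Marquardt solver named in the text, and all names (solve_pose, plane_feats, gps_fix) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_pose(zeta0, plane_feats, gps_fix=None, gps_weight=1.0):
    """Estimate zeta = [x, y, z, roll, pitch, yaw] from plane correspondences
    (point, unit normal, offset) and an optional (3,) RTK-GPS position fix."""
    def euler_to_R(r, p, y):
        cr, sr = np.cos(r), np.sin(r)
        cp, sp = np.cos(p), np.sin(p)
        cy, sy = np.cos(y), np.sin(y)
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        return Rz @ Ry @ Rx

    def residuals(zeta):
        t = zeta[:3]
        R = euler_to_R(*zeta[3:])
        # Point-to-plane residuals: m^T (R p + t) + d
        res = [n @ (R @ pt + t) + d for pt, n, d in plane_feats]
        if gps_fix is not None:
            res.extend(gps_weight * (t - gps_fix))   # global position prior
        return np.asarray(res, dtype=float).ravel()

    sol = least_squares(residuals, zeta0, loss="huber", f_scale=0.1, method="trf")
    return sol.x
```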

2.2. Dual-Loop PID Control for Enhanced Stability

To translate the estimated state into stable and responsive flight, a dual-loop Proportional-Integral-Derivative (PID) controller is designed. This cascade architecture is chosen over a single-loop PID for its stronger rejection of disturbances such as wind gusts and sensor noise, which are common challenges for small multi-rotor UAVs. The controller comprises three independent sub-controllers for attitude, position, and height, each structured as a cascade of two PID loops.

The outer-loop controllers take the error between the desired state (command) and the estimated current state from the positioning module. For height control, the input is \(\varsigma_c = \varsigma_p - \varsigma_h\), where \(\varsigma_p\) is the desired height and \(\varsigma_h\) is the measured height from the barometer. For attitude and position, the inputs are the Euler-angle errors and position errors, respectively. The output of each outer-loop PID at time \(t\) is computed as:
$$
\text{PID}_{OO}^{(n)}(t) = K_P \cdot \boldsymbol{\Theta}_t + K_I \cdot \sum \boldsymbol{\Theta}_t \Delta t + K_D \cdot \frac{\boldsymbol{\Theta}_t - \boldsymbol{\Theta}_{t-1}}{\Delta t}, \quad n=1,2,3
$$
where \(\boldsymbol{\Theta}_t\) represents the input error vector at time \(t\), and \(K_P, K_I, K_D\) are the proportional, integral, and derivative gains tuned for the outer loop.

This outer-loop output then serves as the setpoint for the corresponding inner-loop PID controller, which regulates a faster state (for attitude, the angular rate). Denoting the inner-loop error as \(\boldsymbol{\Theta}'_t = \text{PID}_{OO}^{(n)}(t) - \mathbf{x}_t^{(n)}\), where \(\mathbf{x}_t^{(n)}\) is the measured inner-loop state, the inner loop generates the final control output, which is translated into motor thrust commands:
$$
\text{PID}_{IO}^{(n)}(t) = K_P' \cdot \boldsymbol{\Theta}'_t + K_I' \cdot \sum \boldsymbol{\Theta}'_t \Delta t + K_D' \cdot \frac{\boldsymbol{\Theta}'_t - \boldsymbol{\Theta}'_{t-1}}{\Delta t}
$$
where \(K_P', K_I', K_D'\) are the inner-loop gains. The parameters of both loops are tuned through a combination of experimental trial-and-error and the Ziegler-Nichols method to ensure stable, agile, and precise control of the UAV under various flight conditions.
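
A minimal sketch of this cascade structure is given below. The attitude-channel gains reuse the Table 2 values (\(K_P = 1.5\) outer, \(K_P' = 8.0\) inner); the zero integral and derivative gains are placeholders, not tuned values from the paper.

```python
class PID:
    """Discrete PID with the backward-difference derivative used in the text."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

class CascadeController:
    """Dual-loop cascade: outer state loop feeding an inner rate loop."""
    def __init__(self, outer_gains, inner_gains, dt):
        self.outer = PID(*outer_gains, dt)
        self.inner = PID(*inner_gains, dt)

    def update(self, setpoint, outer_meas, inner_meas):
        # Outer loop converts the state error into a rate setpoint...
        rate_setpoint = self.outer.update(setpoint - outer_meas)
        # ...which the inner loop tracks against the fast rate measurement.
        return self.inner.update(rate_setpoint - inner_meas)

# Example: one attitude channel running at the 2 ms IMU period from Table 2.
ctrl = CascadeController((1.5, 0.0, 0.0), (8.0, 0.0, 0.0), dt=0.002)
thrust_cmd = ctrl.update(setpoint=0.1, outer_meas=0.05, inner_meas=0.0)
```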

2.3. Integrated Global-Local Path Planning for Navigation and Obstacle Avoidance

For autonomous navigation in cluttered environments, a hybrid path planning algorithm is implemented, combining the A* algorithm for global planning with the Dynamic Window Approach (DWA) for local reactive obstacle avoidance. This fusion lets the UAV follow an efficient overall path while dynamically avoiding unforeseen obstacles.

Global Path Planning with A*: Given a prior map or a coarse mission plan, the A* algorithm plans an optimal path from the start node \(n_0\) to the goal \(n_{goal}\). It evaluates nodes with the cost function \(f(n) = g(n) + h(n)\). The actual cost from the start to the current node \(n\) is \(g(n) = \sum_{i=0}^{v-1} D(n_i, n_{i+1})\), where \(D\) is the Euclidean distance along the path so far. The heuristic cost to the goal, \(h(n) = \sqrt{(n_{\eta} - n_{goal,\eta})^2 + (n_{\iota} - n_{goal,\iota})^2}\) with \(\eta, \iota\) the two planar coordinates, guides the search efficiently. Key turning points on this global path are identified and marked as segment boundaries.
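
For reference, a compact grid-based A* sketch follows; the grid representation and helper names (neighbors_fn, blocked) are our assumptions, since the paper does not specify its map discretization.

```python
import heapq
import itertools
import math

def astar(start, goal, neighbors_fn, blocked):
    """Grid A* with Euclidean g- and h-costs, following f(n) = g(n) + h(n).

    start, goal: (x, y) cells; neighbors_fn(cell) yields adjacent cells;
    blocked: set of untraversable cells. Returns the path or None.
    """
    def h(n):
        return math.hypot(n[0] - goal[0], n[1] - goal[1])

    tie = itertools.count()                       # heap tie-breaker
    open_heap = [(h(start), next(tie), 0.0, start, None)]
    came_from = {}
    g_cost = {start: 0.0}
    while open_heap:
        _, _, g, node, parent = heapq.heappop(open_heap)
        if node in came_from:                     # already expanded
            continue
        came_from[node] = parent
        if node == goal:                          # walk parents to rebuild path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nb in neighbors_fn(node):
            if nb in blocked:
                continue
            g_new = g + math.hypot(nb[0] - node[0], nb[1] - node[1])
            if g_new < g_cost.get(nb, math.inf):
                g_cost[nb] = g_new
                heapq.heappush(open_heap, (g_new + h(nb), next(tie), g_new, nb, node))
    return None
```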

Local Obstacle Avoidance with DWA: Within each segment defined by key nodes, the DWA performs real-time local planning. It generates a dynamic window \(U\) of achievable linear and angular velocities \((\lambda, \mu)\) over the next control interval, subject to kinematic constraints and acceleration limits:
$$
U = U_a \cap U_s = \{ (\lambda, \mu) \mid \lambda \in [\lambda_{\text{min}}, \lambda_{\text{max}}],\ \mu \in [\mu_{\text{min}}, \mu_{\text{max}}] \} \cap \{ (\lambda, \mu) \mid |\dot{\lambda}| \leq A_{\text{max}},\ |\dot{\mu}| \leq B_{\text{max}} \}
$$
For each admissible velocity pair, a short-term trajectory is simulated. These trajectories are evaluated using a multi-objective scoring function \(\varrho\) that balances progress towards a local sub-goal (aligned with the global path), speed, and distance from obstacles:
$$
\varrho = \omega_1 \cdot \mathbf{z}_w + \omega_2 \cdot \text{heading}(\mathbf{b}_{\text{align}}) + \omega_3 \cdot \text{dist}(c_{\text{goal}}) + \omega_4 \cdot \text{dist}(c_{\text{obs}})
$$
where \(\omega_1\) through \(\omega_4\) are weighting coefficients, \(\mathbf{z}_w\) is the average speed, \(\mathbf{b}_{\text{align}}\) is the alignment error to the local goal, and \(c_{\text{goal}}, c_{\text{obs}}\) are the distances to the goal and the nearest obstacle, respectively. The trajectory with the highest score \(\varrho\) is executed. The process repeats until the segment endpoint is reached, after which the planner switches to the next global path segment. This approach enables the UAV to navigate complex, partially unknown environments safely and efficiently.
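
The sketch below shows one plausible reading of this DWA step: velocities are sampled inside the clipped window, each arc is forward-simulated, and the weighted score is maximized. The sign conventions (reward speed and obstacle clearance, penalize heading error and goal distance) and the sample counts are our assumptions; the weights default to the Table 2 values.

```python
import numpy as np

def dwa_select(v_now, w_now, pose, goal, obstacles, dt=0.1, horizon=1.0,
               v_lim=(0.0, 5.0), w_lim=(-np.pi / 2, np.pi / 2),
               a_max=2.0, b_max=np.pi / 4, weights=(0.5, 0.3, 0.15, 0.05)):
    """Score sampled (v, w) pairs inside the dynamic window; return the best.

    pose: (x, y, yaw); obstacles: non-empty (M, 2) array of obstacle positions.
    """
    w1, w2, w3, w4 = weights
    # Window = admissible velocities intersected with what is reachable in dt.
    v_range = (max(v_lim[0], v_now - a_max * dt), min(v_lim[1], v_now + a_max * dt))
    w_range = (max(w_lim[0], w_now - b_max * dt), min(w_lim[1], w_now + b_max * dt))
    best, best_score = (v_now, w_now), -np.inf
    for v in np.linspace(*v_range, 8):
        for w in np.linspace(*w_range, 11):
            x, y, yaw = pose                      # forward-simulate a short arc
            for _ in range(int(horizon / dt)):
                x += v * np.cos(yaw) * dt
                y += v * np.sin(yaw) * dt
                yaw += w * dt
            d_goal = np.hypot(goal[0] - x, goal[1] - y)
            heading = abs((np.arctan2(goal[1] - y, goal[0] - x) - yaw + np.pi)
                          % (2 * np.pi) - np.pi)
            d_obs = np.min(np.hypot(obstacles[:, 0] - x, obstacles[:, 1] - y))
            score = w1 * v - w2 * heading - w3 * d_goal + w4 * d_obs
            if score > best_score:
                best, best_score = (v, w), score
    return best
```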

3. Experimental Validation and Performance Analysis

The proposed system was integrated into a commercial quadcopter platform (a representative small multi-rotor UAV) and tested in a challenging natural environment in Hubei Province, China, characterized by complex terrain, variable wind (2–5 m/s, with gusts over 8 m/s), and dense vegetation. The mission involved autonomous remote sensing for vegetation monitoring. Key system parameters during testing are listed in Table 2.

Table 2: Key Experimental Parameters for System Validation
| Category | Parameter | Value | Parameter | Value |
| --- | --- | --- | --- | --- |
| Positioning & Control | IMU \(\Delta t\) | 2 ms | Gravity \(g\) | 9.80665 m/s² |
| Positioning & Control | LiDAR \(z_{\text{edge}}\) | 0.6 | LiDAR \(z_{\text{plane}}\) | 0.2 |
| Positioning & Control | Neighborhood radius | 0.5 m | Outer-loop \(K_P\) (position) | 1.5 |
| Positioning & Control | Inner-loop \(K_P'\) (attitude) | 8.0 | | |
| Path Planning | \(\lambda_{\text{max}}\) | 5 m/s | \(A_{\text{max}}\) | 2 m/s² |
| Path Planning | \(\mu_{\text{max}}\) | π/2 rad/s | \(B_{\text{max}}\) | π/4 rad/s² |
| Path Planning | Local goal threshold | 5 m | Segment switch threshold | 0.5 m |
| Path Planning | \(\omega_1\) (speed) | 0.5 | \(\omega_2\) (heading) | 0.3 |
| Path Planning | \(\omega_3\) (goal) | 0.15 | \(\omega_4\) (obstacle) | 0.05 |

The system’s performance was evaluated against two state-of-the-art methods: a depth-camera-based fully autonomous obstacle avoidance system (Method A) and a machine-vision-based automated inspection and positioning control technique (Method B). Three key metrics were used:

  1. Dynamic Environment Suppression Index (DESI): Measures the system’s ability to maintain stable operation under dynamic disturbances such as wind and electromagnetic noise. It is computed as a weighted combination of a response function of the real-time interference intensity \(I\) and the system response delay \(T_d\): \(\text{DESI} = \varsigma_1 \cdot f(I) + \varsigma_2 \cdot T_d\).
  2. Autonomous Positioning Accuracy: Evaluated by comparing the system’s estimated 3D position \((X, Y, Z)\) and attitude \((\theta_R, \theta_P, \theta_Y)\) against ground truth data.
  3. Task-Cognition Synergy Ratio (TCR): Quantifies the efficiency gain from multi-sensor collaboration by comparing task completion time and energy consumption between the proposed multi-sensor mode (\(T_y, E_y\)) and a baseline single-sensor mode (\(T_s, E_s\)): \(\text{TCR} = \frac{\alpha(T_s - T_y) + \beta(E_s - E_y)}{\alpha + \beta}\), with \(\alpha, \beta\) weighting time against energy. A minimal computation sketch follows this list.
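
As an arithmetic illustration, the sketch below evaluates TCR with times and energies normalized to the single-sensor baseline so the ratio is dimensionless; this normalization is our assumption, as the paper does not state one.

```python
def tcr(t_single, e_single, t_multi, e_multi, alpha=0.5, beta=0.5):
    """Task-Cognition Synergy Ratio from fractional time/energy savings
    relative to the single-sensor baseline (assumed normalization)."""
    time_saving = (t_single - t_multi) / t_single
    energy_saving = (e_single - e_multi) / e_single
    return (alpha * time_saving + beta * energy_saving) / (alpha + beta)

# Example: large savings in both time and energy push TCR above 0.8.
print(tcr(t_single=1000.0, e_single=500.0, t_multi=150.0, e_multi=90.0))  # 0.835
```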

The experimental results, consolidated from multiple test runs, are summarized in Table 3 and the analysis below.

Table 3: Consolidated Experimental Results Comparison
| Performance Metric | Proposed System | Method A (Depth Camera) | Method B (Machine Vision) | Analysis |
| --- | --- | --- | --- | --- |
| Average DESI | > 0.84 | ~0.72 | ~0.65 | The tightly-coupled fusion provides the strongest resilience to dynamic environmental interference, maintaining high stability. |
| Positioning error (RMSE) | < 0.15 m, < 1° | ~0.25 m, ~2° | ~0.8 m, > 5° | Laser-visual-inertial fusion with the RTK-GPS constraint yields the highest positioning accuracy, closest to ground truth. |
| Average TCR | > 0.80 | ~0.45 | ~0.30 | Efficient sensor synergy and optimized path planning significantly reduce task time and energy consumption. |

Analysis of DESI Results: The proposed system consistently achieved a DESI above 0.84, significantly outperforming both comparison methods. This is attributed to the robust, tightly-coupled sensor fusion architecture: when one sensor degrades (e.g., vision in low light or LiDAR in featureless areas), the complementary sensors continue to supply reliable data, allowing the state estimator to maintain accuracy. The dual-loop PID controller further enhances disturbance rejection. The UAV therefore remains highly reliable in gusty winds or areas with intermittent sensor challenges.

Analysis of Positioning Accuracy: As the quantitative comparison shows, the root-mean-square error (RMSE) in both position and attitude was lowest for the proposed system. The hybrid laser-visual-inertial approach mitigates the drift of pure visual odometry and the noise of pure LiDAR odometry, while the RTK-GPS measurements, used as global optimization constraints, bound the accumulated error, a critical factor for long-duration missions. The result is a precise, consistent pose estimate essential for generating high-quality, georeferenced remote sensing products.

Analysis of TCR Results: The proposed system’s TCR remained above 0.8, indicating a substantial efficiency gain from its integrated design. The multi-sensor system allows faster state convergence and more confident decision-making, reducing the need for exploratory or corrective maneuvers, while the hybrid A*-DWA planner keeps the drone on a near-optimal global path as it smoothly avoids local obstacles, minimizing total flight distance and time. Consequently, the UAV completes its remote sensing tasks faster and with lower energy expenditure than systems relying on simpler sensing or planning strategies.

4. Conclusion

This paper presented the design, implementation, and validation of an autonomous positioning control system for small multi-rotor UAVs engaged in aerial remote sensing. The system integrates a carefully selected hardware suite with three core algorithmic innovations: a laser-visual-inertial tightly-coupled positioning module, a dual-loop PID control module, and a hybrid global-local navigation and obstacle avoidance planner. Extensive field testing in complex natural environments demonstrates that the system effectively addresses the key challenges of poor adaptability, low positioning accuracy, and inefficient control. It exhibits superior dynamic environment suppression (DESI > 0.84), achieves high-precision autonomous positioning, and realizes significant gains in operational efficiency through multi-sensor synergy (TCR > 0.8).

The success of this system underscores the importance of a holistic, tightly-integrated approach to UAV autonomy. It moves beyond a single sensing modality or a decoupled control structure, offering a robust solution ready for demanding real-world applications such as precision agriculture, emergency response, and large-scale topographic surveys. Future work will focus on optimizing the algorithms’ computational efficiency for embedded deployment, enabling collaboration within drone swarms, and extending the system to operate seamlessly in entirely GPS-denied, highly dynamic urban environments. The continued development of such intelligent systems is pivotal to establishing small multi-rotor UAVs as a transformative tool in the modern geospatial and remote sensing landscape.
