In modern geospatial applications, high-precision large-scale topographic maps are critical for urban planning, infrastructure development, and environmental monitoring. Traditional surveying methods often struggle with inefficiencies, high labor costs, and limitations in complex terrains. To address these challenges, I have developed an approach leveraging multirotor drone technology integrated with LiDAR systems. This method enhances data acquisition accuracy, reduces planar errors, and adapts to diverse environments, offering a cost-effective solution for detailed topographic mapping. The core of this research involves multi-source data fusion, feature point matching, and advanced algorithms to generate reliable 1:500 scale maps. By utilizing a multirotor drone platform, I ensure flexibility in data collection, especially in densely vegetated or obstructed areas where conventional techniques often fail. This article details the methodology, experimental validation, and results, emphasizing the role of multirotor drones in achieving superior mapping outcomes.
The data acquisition phase employs a multirotor drone equipped with an airborne LiDAR system, which includes a 3D laser scanner, GPS, inertial navigation system (INS), and a high-resolution digital camera. The multirotor drone, specifically a hexacopter model, provides stable flight capabilities, allowing for precise laser measurements. The LiDAR system operates by emitting laser pulses and calculating distances based on the time-of-flight principle. The distance \( L \) between the sensor and the target is computed using the formula: $$L = \frac{V \cdot \Delta t}{2}$$ where \( V \) represents the speed of light in the atmosphere, and \( \Delta t \) is the time interval between emission and reception. Combined with GPS-determined positions and INS-derived attitude parameters, this setup captures detailed point cloud data and imagery. The multirotor drone’s ability to hover and maneuver at low altitudes ensures high-density data collection, even in challenging terrains. Key technical specifications of the multirotor drone system are summarized in Table 1.
| Parameter | Value |
|---|---|
| Drone Weight (kg) | 4.7 |
| Total Weight (kg) | 6.75 |
| Maximum Flight Speed (m/s) | 12 |
| Flight Altitude Range (m) | 0–1500 |
| Wind Resistance (Level) | 6 |
| Laser Scanning Types | Multi-prism, Fiber-optic |
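The time-of-flight relation \( L = V \cdot \Delta t / 2 \) can be sketched in a few lines of Python. The refractive index and the pulse round-trip time below are illustrative assumptions, not measurements from the system in Table 1.

```python
# Time-of-flight ranging: L = V * dt / 2, where V is the propagation
# speed of the laser pulse and dt the emission-to-reception interval.

C_VACUUM = 299_792_458.0   # m/s, speed of light in vacuum
N_AIR = 1.000293           # approximate refractive index of air (assumption)

def lidar_range(delta_t_s: float, v: float = C_VACUUM / N_AIR) -> float:
    """Return the sensor-to-target distance in metres for a pulse
    round-trip time delta_t_s (seconds)."""
    return v * delta_t_s / 2.0

# Hypothetical example: a 2.5 microsecond round trip
distance = lidar_range(2.5e-6)
print(f"{distance:.2f} m")  # about 374.63 m
```

Dividing by two accounts for the pulse travelling to the target and back; using the atmospheric rather than vacuum speed of light removes a small systematic range bias.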
Following data acquisition, the fusion of drone imagery and LiDAR data is crucial for enhancing feature extraction. I apply a low-dimensional polynomial (ALP) based feature extraction method to fit pixel values and detect extremum points in scale space. The scale space representation \( K \) is defined as: $$K = \delta^2 \cdot m \cdot f_i$$ where \( \delta \) is the scale parameter, \( m \) denotes the Laplace operator, and \( f_i \) represents the discrete sub-pixel point. Using a Gaussian pyramid, extremum points \( P(X, Y, \delta) \) are calculated as: $$P(X, Y, \delta) = \chi_0(X, Y) + \chi_1(X, Y) \cdot \delta + \chi_2(X, Y) \cdot \delta^2 + \chi_3(X, Y) \cdot \delta^3$$ Here, \( \chi_0 \) to \( \chi_3 \) are weighted summation coefficients. For sub-pixel accuracy, a Gaussian difference function localizes critical points, and the dominant direction is determined by partitioning the image into 36 sub-regions. The maximum gradient amplitude \( Q(X, Y) \) and direction \( \phi(X, Y) \) are computed as: $$Q(X, Y) = \sqrt{(K(X+1, Y) - K(X, Y))^2 + (K(X, Y+1) - K(X, Y))^2}$$ $$\phi(X, Y) = \arctan\left(\frac{K(X+1, Y) - K(X, Y)}{K(X, Y+1) - K(X, Y)}\right)$$ Feature points are selected based on scale, distance, and orientation similarity. Subsequently, I employ a random sample consensus (RANSAC) method to match the LiDAR data with the processed drone images, ensuring robust alignment. This fusion process generates 3D point cloud data enriched with RGB textures, facilitating accurate vector editing in later stages. The transformation for data fusion is expressed as: $$(X', Y', 1) = \alpha(X, Y, \delta) = (X, Y, \delta) \cdot \begin{bmatrix} q_{11} & q_{12} & q_{13} \\ q_{21} & q_{22} & q_{23} \\ q_{31} & q_{32} & 1 \end{bmatrix}$$ where \( \alpha \) is the feature point set, and \( q_{ij} \) are the transformation matrix parameters.

For topographic map compilation, I extract multiple coordinate points and perform straight-line fitting of structural contours using the least squares principle. The fitting equation is: $$(X'', Y'', 1) = \lambda(X', Y', 1) + h$$ where \( \lambda \) is the fitting coefficient, and \( h \) is the plane residual. This approach minimizes the sum of squared vertical distances from the points to the fitted line, ensuring precise spatial feature point matching. The multirotor drone’s high-resolution data supports this step by providing dense point clouds. I then utilize CASS 10.1 software integrated with an AutoCAD platform to automate map generation. The “one-click mapping” function processes the fused data, thinning redundant points according to preset scale requirements (e.g., 1:500). Manual contour tracing and symbol placement adhere to standard cartographic conventions, resulting in a comprehensive topographic map comprising digital elevation models (DEMs), digital orthophoto maps (DOMs), and digital line graphs (DLGs). The multirotor drone’s adaptability allows for efficient updates and edits, reducing storage overhead by eliminating unused elements.
To validate the method, I conducted experiments in a residential area with varied topography, including buildings and vegetation. The multirotor drone was deployed over a plain at 375 m altitude, capturing 72 images across 5 flight paths with 17 control points. Control measurements adhered to strict standards: plane control networks used GNSS-RTK with adjacent point errors within ±5 cm, and elevation control relied on third-order leveling with errors below ±3 cm in flat areas. Control points were evenly distributed, with at least one point per 100 m × 100 m grid in complex zones. The multirotor drone’s parameters, as listed in Table 1, ensured consistent performance. I compared the proposed method with two existing approaches: one using a V10 platform with DV-LiDAR10 (Reference [1]) and another employing a Zhiheng drone with LiDAR (Reference [2]). The evaluation focused on planar error reduction and feature accuracy. Table 2 summarizes the planar errors across 12 test regions, demonstrating the superiority of the multirotor drone-based method.
| Region | Proposed Method (mm²) | Reference [1] (mm²) | Reference [2] (mm²) |
|---|---|---|---|
| 1 | 90 | 850 | 600 |
| 2 | 110 | 900 | 650 |
| 3 | 150 | 920 | 620 |
| 4 | 95 | 870 | 580 |
| 5 | 120 | 950 | 670 |
| 6 | 130 | 930 | 640 |
| 7 | 100 | 890 | 610 |
| 8 | 140 | 960 | 660 |
| 9 | 160 | 970 | 630 |
| 10 | 105 | 940 | 600 |
| 11 | 125 | 910 | 590 |
| 12 | 120 | 880 | 570 |
The results confirm that the multirotor drone LiDAR method achieves significantly lower planar errors, typically below 160 mm², compared to Reference [1] (up to 970 mm²) and Reference [2] (up to 670 mm²). This improvement stems from the multirotor drone’s ability to capture unobstructed data in vegetated and complex areas, coupled with the ALP-based fusion algorithm that enhances feature matching accuracy by approximately 60%. The generated maps exhibit complete feature representation, with building corners, road networks, and vegetation boundaries accurately rendered. In contrast, Reference [1] showed data gaps in dense foliage, and Reference [2] suffered from boundary distortions due to inadequate fusion techniques. The multirotor drone’s stability in windy conditions (up to level 6 wind resistance) further contributed to consistent data quality, whereas other methods were prone to deviations. Overall, the multirotor drone-based approach demonstrates robust performance in diverse environments, making it ideal for large-scale topographic mapping.
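The summary figures quoted above can be reproduced directly from Table 2; the lists below are copied from the table, and the max/mean summary mirrors the values cited in the discussion.

```python
# Planar errors (mm^2) for the 12 test regions in Table 2.
proposed = [90, 110, 150, 95, 120, 130, 100, 140, 160, 105, 125, 120]
ref1 = [850, 900, 920, 870, 950, 930, 890, 960, 970, 940, 910, 880]
ref2 = [600, 650, 620, 580, 670, 640, 610, 660, 630, 600, 590, 570]

for name, errs in [("Proposed", proposed),
                   ("Reference [1]", ref1),
                   ("Reference [2]", ref2)]:
    # Maximum matches the worst-case figures quoted in the text;
    # the mean gives a sense of typical per-region performance.
    print(f"{name}: max = {max(errs)} mm^2, "
          f"mean = {sum(errs) / len(errs):.1f} mm^2")
```

The maxima (160, 970, and 670 mm² respectively) are the worst-case values cited in the comparison above.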
In conclusion, the integration of multirotor drone technology with LiDAR systems offers a transformative solution for large-scale topographic mapping. By addressing limitations of traditional methods, such as poor vegetation penetration and high error rates, this approach ensures high precision and efficiency. The multirotor drone’s versatility in data acquisition, combined with advanced fusion and fitting algorithms, enables detailed map compilation with minimal planar errors. Future work will focus on enhancing the multirotor drone’s resilience in extreme weather, optimizing algorithms for complex structures like skyscrapers, and expanding multi-sensor integration. This research underscores the potential of multirotor drones to revolutionize geospatial surveying, providing a scalable and adaptable tool for modern mapping needs.
