Facing the increasing demands of modern urban construction, traditional land surveying methods often reveal significant limitations, particularly in efficiency and coverage across complex terrain or large project areas. The quest for rapid, accurate acquisition of fundamental geographic information has driven the adoption of innovative technologies. Among these, low-altitude Unmanned Aerial Vehicle (UAV) photogrammetry has emerged as a transformative tool, prized for its flexibility, high spatial resolution, and operational efficiency. This article, drawing from firsthand project experience, examines the technical essentials and practical application of UAV remote sensing in engineering surveying. It outlines a replicable technical pathway through optimized flight planning, ground control, and data processing workflows, aiming to provide a comprehensive reference for similar engineering endeavors.

In a recent large-scale infrastructure project focused on landscape optimization along an urban airport expressway, the challenges were multifaceted: the need for detailed topographic data, accurate mapping for land acquisition, and volumetric calculations, all across an extensive and variably complex corridor. The project area was strategically partitioned into several primary zones (A through E), with the most topographically demanding zones further subdivided to facilitate phased and manageable data acquisition campaigns. The deployment of a fixed-wing UAV system was central to meeting these objectives, enabling the efficient generation of high-precision Digital Line Graphs (DLG), Digital Orthophoto Maps (DOM), and Digital Elevation Models (DEM).
The success of any UAV survey hinges on a robust, well-integrated system. The operational framework typically comprises three core subsystems: the Aerial Platform System, the Ground Control Station (GCS), and the Data Processing System.
- Aerial Platform System: This is the UAV airframe and its onboard payload. For this project, a fixed-wing platform was selected for its endurance and efficiency over large, linear areas. The platform integrates an autopilot featuring high-precision Global Navigation Satellite System (GNSS) positioning and an Inertial Measurement Unit (IMU). The primary task payload was a professional-grade full-frame digital camera. Critical platform and operational parameters are summarized in the table below.
| Platform Parameter | Specification | Operational Parameter | Specification |
|---|---|---|---|
| Wingspan | 2.6 m | Maximum Flight Altitude | 3500 m AMSL |
| Payload Bay Volume | 0.6 m³ | Cruise Speed | 33 m/s |
| Maximum Fuel Capacity | 2.2 L | Cruise Wind Resistance | 13 m/s |
| Maximum Payload | 2.5 kg | Camera Model | Canon 5D Mark II |
| Endurance | >2 hours | Ground Sampling Distance (GSD) | ~6.5 cm |
- Ground Control Station (GCS): The GCS is the mission command center. It consists of a handheld remote controller, a data link transceiver, and specialized software. Its functions include pre-flight mission planning (defining flight altitude, speed, and autonomous flight paths), real-time monitoring of the UAV’s telemetry (position, attitude, battery status), and providing emergency override controls.
- Data Processing System: This subsystem involves high-performance computing workstations and specialized photogrammetric software suites (e.g., Pixel-Grid, Agisoft Metashape, Pix4Dmapper). It is responsible for the computationally intensive tasks of aligning images, building dense point clouds, generating textured 3D models, and producing the final topographic outputs like DEMs and orthomosaics.
A critical pre-flight procedure is camera calibration. The interior orientation parameters—principal point coordinates $(x_0, y_0)$, calibrated focal length $(f)$, and lens distortion coefficients $(k_1, k_2, p_1, p_2, k_3)$—must be precisely determined. These parameters correct systematic errors inherent to the camera-lens system, ensuring geometric fidelity in the resulting 3D models. Calibration is performed using a checkerboard pattern in a controlled lab setting and validated under field conditions. The distortion model is often expressed as:
$$x_{distorted} = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + [2p_1xy + p_2(r^2+2x^2)]$$
$$y_{distorted} = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + [p_1(r^2+2y^2) + 2p_2xy]$$
where $(x, y)$ are ideal image coordinates, $r^2 = x^2 + y^2$, and $k_i$, $p_i$ are radial and tangential distortion coefficients, respectively.
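Once the coefficients are known, the model above can be applied directly. A minimal Python sketch (the function name and sample coefficient values are illustrative, not the project's calibration results):

```python
def apply_brown_distortion(x, y, k1, k2, k3, p1, p2):
    """Map ideal (principal-point-centred) image coordinates to their
    distorted positions using the radial + tangential model above."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Illustrative: pure radial distortion (k1 only) shifts a point outward.
x_d, y_d = apply_brown_distortion(0.1, 0.0, k1=0.1, k2=0.0, k3=0.0, p1=0.0, p2=0.0)
```

In production, the inverse mapping (undistortion) is solved iteratively or handled internally by the photogrammetric software.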
The technical implementation of a UAV survey follows a meticulous sequence from planning to final product delivery. For the expressway project, the workflow was as follows.
3.1 Flight Planning and Mission Parameters
Detailed flight planning is paramount for achieving complete coverage and the desired accuracy. Using the GCS software, parallel flight lines were designed to cover the long, linear project area with sufficient overlap. Key parameters were calculated based on the desired Ground Sampling Distance (GSD), which is a function of sensor pixel size, focal length, and flying height $(H)$:
$$GSD = \frac{H \times \text{pixel size}}{f}$$
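As a quick sanity check, the formula can be evaluated numerically. The 6.4 µm pixel pitch and 400 m flying height below are assumed illustrative values, not project specifications:

```python
def ground_sampling_distance(height_m, pixel_size_um, focal_length_mm):
    """GSD = H * pixel_size / f, with units converted to metres."""
    return height_m * (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3)

# ~6.4 um pixel, 40 mm lens, 400 m above ground -> GSD of roughly 6.4 cm,
# in line with the ~6.5 cm figure quoted for the platform
gsd = ground_sampling_distance(400, 6.4, 40)
```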
The flight parameters for different mission blocks are summarized below.
| Focal Length (mm) | Image Size (pixels) | Flight Altitude (m AMSL) | Longitudinal Overlap (%) | Lateral Overlap (%) |
|---|---|---|---|---|
| 40 | 5615 × 3745 | 2700 | 70 | 40 |
| 40 | 5615 × 3745 | 2700 | 70 | 40 |
| 40 | 5615 × 3745 | 1900 | 80 | 50 |
Higher overlap rates (80/50) were used in areas with complex vertical features (like buildings in settlement zones) to ensure robust 3D reconstruction, while standard rates (70/40) sufficed for open terrain.
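The overlap rates translate directly into flight-line spacing and the along-track photo interval. A sketch, assuming the long image side is oriented across track (helper names are illustrative):

```python
def line_spacing_and_photo_base(img_w_px, img_h_px, gsd_m, lon_overlap, lat_overlap):
    """Cross-track line spacing and along-track photo base (both in metres)
    from the image footprint on the ground and the chosen overlap rates."""
    footprint_across = img_w_px * gsd_m   # footprint width across track
    footprint_along = img_h_px * gsd_m    # footprint length along track
    spacing = footprint_across * (1 - lat_overlap)
    base = footprint_along * (1 - lon_overlap)
    return spacing, base

# 5615 x 3745 px images at ~6.5 cm GSD with 70/40 overlap
spacing, base = line_spacing_and_photo_base(5615, 3745, 0.065, 0.70, 0.40)
```

Raising the overlap to 80/50 shrinks both values, which is why the complex blocks required denser flight lines and a shorter camera trigger interval.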
3.2 Ground Control Point (GCP) Network Design and Surveying
The accuracy of a UAV photogrammetric project is fundamentally tied to a well-distributed network of high-accuracy Ground Control Points. We employed a regional network layout strategy. GCPs were strategically placed to be visible in multiple overlapping images and were evenly distributed across the project area, including the peripheries and centers of each flight block. A density of approximately 5 GCPs per 0.3 km² was targeted.
GCPs were physically marked using high-contrast targets (e.g., painted cross markers on stable pavement). Their coordinates were surveyed using GNSS Real-Time Kinematic (RTK) techniques, achieving centimeter-level accuracy. The raw WGS-84 coordinates were transformed to the local project coordinate system using a 4-parameter transformation (scale, rotation, and two translations) derived from known local control points. Each point was observed twice in fixed solution mode, with discrepancies between observations required to be less than 4 cm in both horizontal and vertical components.
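The 4-parameter (planar Helmert) transformation solves for one scale, one rotation, and two translations from matched control points. A least-squares sketch with NumPy (function names are illustrative; in practice the survey software's datum tools perform this step):

```python
import numpy as np

def fit_helmert_2d(src, dst):
    """Fit a 4-parameter (2D Helmert) transform: scale, rotation, and two
    translations. src, dst: (n, 2) arrays of matching planar coordinates.
    Parameterized as a = s*cos(t), b = s*sin(t), so the model is linear."""
    x, y = src[:, 0], src[:, 1]
    A = np.zeros((2 * len(src), 4))
    A[0::2] = np.column_stack([x, -y, np.ones_like(x), np.zeros_like(x)])
    A[1::2] = np.column_stack([y,  x, np.zeros_like(x), np.ones_like(x)])
    L = dst.reshape(-1)  # interleaved [E1, N1, E2, N2, ...]
    a, b, tx, ty = np.linalg.lstsq(A, L, rcond=None)[0]
    return a, b, tx, ty

def apply_helmert_2d(src, a, b, tx, ty):
    """Transform (n, 2) source coordinates with the fitted parameters."""
    x, y = src[:, 0], src[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])
```

With at least three well-spread common points the redundancy also yields residuals that flag blunders in the local control.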
3.3 Aerial Data Quality Inspection
Immediately after each UAV flight, a preliminary quality check was conducted. Using fast-mosaic software (e.g., Pix4D’s rapid check), the acquired images were quickly processed to generate a preliminary overview. This allowed for the visual inspection of:
- Image Quality: Checking for blur, excessive motion, or areas obscured by cloud shadows.
- Coverage and Overlap: Verifying that the planned longitudinal (minimum 60%) and lateral (minimum 30%) overlaps were achieved and that no gaps (“data voids”) existed between flight lines.
- Flight Stability: Assessing parameters like image rotation (kappa angle), which should generally be less than 5°, and route curvature.
Any deficiencies identified at this stage necessitated a re-flight of the affected area before proceeding, ensuring the integrity of the primary data.
3.4 Aerial Triangulation and Bundle Block Adjustment
This is the computational core of photogrammetry, where the geometric relationship between all images, the camera model, and the ground is solved simultaneously. Using software like Pixel-Grid, the process involves feature matching across thousands of images, followed by a rigorous bundle adjustment. The adjustment incorporates the measured GCP coordinates as constraints to orient the entire block of imagery accurately in the chosen coordinate system.
The theoretical precision of the adjusted parameters (image orientations and 3D point coordinates) is derived from the covariance matrix $\mathbf{Q}_{\mathbf{xx}}$ of the unknowns. The standard error $\sigma_i$ of an unknown $i$ is calculated as:
$$\sigma_i = \sigma_0 \sqrt{Q_{ii}}$$
where $\sigma_0$ is the a posteriori standard deviation of unit weight:
$$\sigma_0 = \sqrt{\frac{\mathbf{v}^T\mathbf{P}\mathbf{v}}{r}}$$
Here, $\mathbf{v}$ is the vector of residuals, $\mathbf{P}$ is the weight matrix, and $r$ is the degrees of freedom (number of redundant observations).
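These quantities are straightforward to compute from the adjustment residuals. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def posteriori_sigma0(v, P, r):
    """A posteriori standard deviation of unit weight: sqrt(v^T P v / r)."""
    v = np.asarray(v, dtype=float)
    return float(np.sqrt(v @ P @ v / r))

def standard_error(sigma0, Qxx, i):
    """Standard error of unknown i from the diagonal of the cofactor matrix."""
    return sigma0 * float(np.sqrt(Qxx[i, i]))

# Three residuals with unit weights and two redundant observations:
s0 = posteriori_sigma0([1.0, -1.0, 2.0], np.eye(3), r=2)  # sqrt(6/2)
```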
Practical accuracy is validated using independent Check Points (CPs), which were surveyed but withheld from the adjustment. The discrepancies between their known coordinates and the coordinates derived from the adjusted model provide a direct measure of the final product’s accuracy. For a 1:1000-scale topographic product, common accuracy standards are shown below. In areas with dense buildings, which introduce occlusions and shadows, we applied more stringent tolerances (e.g., 0.75× the standard elevation tolerance). The results from the UAV survey comfortably met these refined criteria.
| Terrain Type | Horizontal RMSE, Standard (m) | Horizontal RMSE, Project Target (m) | Vertical RMSE, Standard (m) | Vertical RMSE, Project Target (m) |
|---|---|---|---|---|
| Flat Terrain | 0.30 | 0.15 | 0.20 | 0.15 |
| Hilly Terrain | 0.50 | 0.25 | 0.40 | 0.30 |
| Mountainous Terrain | 0.70 | 0.35 | 0.60 | 0.45 |
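The check-point validation reduces to computing horizontal and vertical RMSE over the independent points and comparing them against the tolerances above. A sketch, assuming coordinates are stored as rows of (E, N, H):

```python
import numpy as np

def checkpoint_rmse(known, derived):
    """RMSE at independent check points; rows are (E, N, H) coordinates."""
    d = np.asarray(derived, dtype=float) - np.asarray(known, dtype=float)
    rmse_h = float(np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2)))  # planar
    rmse_v = float(np.sqrt(np.mean(d[:, 2] ** 2)))                 # elevation
    return rmse_h, rmse_v
```

A block passes when both values fall below the terrain-appropriate tolerance, with the tightened elevation limit applied in dense built-up areas.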
3.5 Digital Product Generation (DEM, DOM, DLG)
Following successful aerial triangulation, the dense 3D point cloud of the terrain is generated. From this cloud, a Digital Surface Model (DSM) representing the top of all features (ground, vegetation, buildings) is created. Advanced filtering and classification algorithms are then applied to separate ground points from non-ground points (buildings, trees), yielding a bare-earth Digital Terrain Model (DTM). This DTM is interpolated into a regular grid to form the final DEM.
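Ground/non-ground separation is normally performed with robust algorithms such as progressive TIN densification or cloth simulation. Purely as a conceptual illustration of the filtering idea, a crude grid-minimum filter can be sketched as follows (cell size and height tolerance are illustrative parameters):

```python
import numpy as np

def simple_ground_filter(points, cell_size=5.0, height_tol=0.3):
    """Crude ground/non-ground split: a point counts as ground if it lies
    within height_tol of the lowest point in its grid cell. Illustrative
    only; production workflows use far more robust classifiers."""
    pts = np.asarray(points, dtype=float)
    ij = np.floor(pts[:, :2] / cell_size).astype(int)
    cell_min = {}
    for k, key in enumerate(map(tuple, ij)):
        z = pts[k, 2]
        if key not in cell_min or z < cell_min[key]:
            cell_min[key] = z
    ground = np.array([pts[k, 2] - cell_min[tuple(ij[k])] <= height_tol
                       for k in range(len(pts))])
    return ground  # boolean mask: True = ground point
```

Real filters must also handle sloped terrain within a cell, which is where TIN-based densification earns its keep.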
The DOM is produced by orthorectifying each aerial image (mathematically correcting for topographic displacement and camera tilt) using the high-resolution DEM, and then mosaicking the results into a seamless, map-accurate image. The DLG, containing vector features such as roads, building footprints, and contour lines, is typically extracted through a combination of automated algorithms applied to the DOM and 3D model, followed by meticulous manual editing and attribution. The integrated workflow for creating a realistic 3D model and its derivatives from multi-source UAV data is illustrated below.
1. Data Preprocessing: Refine POS data, optimize camera parameters, perform image color balancing.
2. Aerial Triangulation: Solve the bundle adjustment.
3. Accuracy Check: Does the AT meet specifications? If not, revisit flight planning or GCPs.
4. 3D Model Production: Generate the dense point cloud and the textured mesh.
5. Derivative Product Generation: Extract the DEM (via ground-point filtering), produce the DOM (via orthorectification), and delineate DLG features.
The application of low-altitude UAV remote sensing in the expressway landscape project yielded substantial benefits. It dramatically accelerated the data acquisition phase, covering in days what would have taken weeks with traditional ground surveys, thereby significantly reducing labor costs and field risk exposure. The high-precision outputs (DEM, DOM, DLG) provided an indispensable digital foundation for all subsequent engineering activities: detailed landscape design, accurate calculation of land acquisition areas and volumes, and precise earthwork planning. This case underscores the UAV’s capability as a powerful, efficient, and highly adaptable tool for modern engineering surveying.
In conclusion, the successful implementation of low-altitude UAV photogrammetry in this complex linear project validates its technical maturity and operational advantages. Through disciplined flight planning, rigorous ground control, and robust data processing, the technology delivered products of certified accuracy that directly supported critical engineering decisions. The UAV has proven to be more than a data collector; it is a platform for generating rich, actionable geospatial intelligence. As sensor technology advances (LiDAR, hyperspectral cameras) and processing algorithms become more automated, the role of the UAV in engineering surveying is poised to expand further, driving the industry’s transition toward fully integrated, intelligent, and digitalized workflows.
