In recent years, the integration of hyperspectral imaging with multirotor drone technology has revolutionized remote sensing applications, particularly in agriculture, environmental monitoring, and resource management. The ability of multirotor drones to capture high-resolution spatial and spectral data efficiently makes them indispensable for detailed earth observation. However, the IspecHyper hyperspectral imaging system for multirotor drones, while advanced, faces significant data-processing challenges: the absence of dedicated processing software, substantial errors in multi-strip data acquisition, missing coordinates, and the inability to perform automatic stitching. This study addresses these issues by developing a comprehensive processing methodology that leverages existing software tools to produce accurate hyperspectral reflectance products.
The research was conducted in a citrus plantation area, where the IspecHyper-VM200 imaging system, mounted on a multirotor drone, was employed to acquire high-definition photographs and multi-strip hyperspectral data. The multirotor drone platform offers flexibility in flight planning and data collection, enabling precise coverage of the study area. The hyperspectral sensor, with a spectral resolution of 2.3 nm, captures detailed spectral information across 256 bands (Table 1), but the raw data often exhibit geometric distortions and lack georeferencing. To overcome these limitations, we implemented a multi-step processing chain involving PIX4D Mapper, ENVI, and ArcGIS software. This approach ensures the production of high-quality orthophotos and geometrically corrected hyperspectral images, facilitating subsequent analysis such as crop health assessment and spectral reflectance modeling.

The core of our methodology revolves around the geometric correction and mosaicking of hyperspectral data derived from the multirotor drone. Initially, high-definition photos captured by the multirotor drone are processed to generate an orthophoto, which serves as a reference for correcting the hyperspectral strips. The multirotor drone’s ability to hover and capture overlapping images is crucial for this step, as it enhances the accuracy of the photogrammetric outputs. We then apply clipping techniques to remove distorted edges from the hyperspectral strips, followed by geometric correction using the orthophoto as a baseline. The corrected strips are mosaicked to form a seamless hyperspectral image, and spectral conversion calculations are performed to derive reflectance values. This process not only mitigates the issues associated with the IspecHyper system but also provides a scalable framework for other multirotor drone-based hyperspectral applications.
In the following sections, we detail the data acquisition procedures, the step-by-step processing techniques, and the results of our experiments. We emphasize the role of the multirotor drone in ensuring data quality and discuss how our method can be adapted for various remote sensing tasks. Tables and mathematical formulations are used extensively to summarize parameters, errors, and spectral relationships, providing a clear and quantitative evaluation of the proposed approach.
Data Acquisition Using Multirotor Drones
The data acquisition phase is critical for ensuring the quality of hyperspectral imagery. We utilized a multirotor drone equipped with the IspecHyper-VM200 imaging system, which includes a high-resolution camera and a hyperspectral sensor. The multirotor drone platform was selected for its stability, maneuverability, and ability to carry heavy payloads, making it ideal for precise hyperspectral data collection. Flight planning involved determining optimal parameters such as altitude, speed, and overlap to achieve the desired spatial resolution and coverage. The multirotor drone was flown over the citrus plantation, capturing multiple strips of hyperspectral data and concurrent high-definition photos.
Key parameters for the multirotor drone data acquisition are summarized in Table 1. These parameters were optimized based on the sensor characteristics and the study area’s topography. The multirotor drone’s GPS and RTK systems were calibrated to enhance positional accuracy, although the hyperspectral data lacked inherent coordinates. The integration time of the hyperspectral sensor was adjusted dynamically using ground-based radiometric measurements to account for varying illumination conditions. This ensures that the raw data maintain consistent radiometric quality across different strips.
| Parameter | Value | Description |
|---|---|---|
| Flight Altitude | 100 m | Altitude above ground level for optimal resolution |
| Speed | 5 m/s | Ground speed of the multirotor drone |
| Overlap | 80% | Image overlap for photogrammetric processing |
| Spectral Bands | 256 | Number of bands in hyperspectral data |
| Spatial Resolution | 0.1 m | Resolution of hyperspectral imagery |
The mathematical relationship for determining the flight parameters can be expressed using the following equation, which relates the ground sampling distance (GSD) to the multirotor drone’s altitude and sensor characteristics:
$$ GSD = \frac{H \times s}{f} $$
where \( H \) is the flight altitude of the multirotor drone, \( s \) is the sensor pixel size, and \( f \) is the focal length. This equation ensures that the spatial resolution meets the requirements for detailed analysis. For our multirotor drone setup, with \( s = 5.5\,\mu\text{m} \) and \( f = 20\,\text{mm} \), the high-definition camera's GSD at 100 m altitude is \( 100 \times 5.5 \times 10^{-6} / 0.02 \approx 0.03 \) m, matching the 0.03 m orthophoto resolution reported below; the coarser 0.1 m spatial resolution in Table 1 refers to the hyperspectral imagery.
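As a sanity check, the GSD arithmetic can be scripted directly; the following minimal Python sketch uses only the parameters quoted above, converted to metres.

```python
def ground_sampling_distance(altitude_m: float, pixel_size_m: float, focal_length_m: float) -> float:
    """GSD = H * s / f, with all inputs in metres."""
    return altitude_m * pixel_size_m / focal_length_m

# Parameters from the text: H = 100 m, s = 5.5 um, f = 20 mm.
gsd = ground_sampling_distance(100.0, 5.5e-6, 20e-3)
print(f"GSD = {gsd:.4f} m")  # 0.0275 m, i.e. roughly 0.03 m per pixel
```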
Processing Methodology for Hyperspectral Imagery
The processing methodology involves several stages to transform raw multirotor drone data into usable hyperspectral products. Each stage addresses specific challenges, such as geometric distortions and missing coordinates, through software-based solutions. The multirotor drone’s ability to capture overlapping images is leveraged in the initial orthophoto generation, which then guides the correction of hyperspectral strips.
Orthophoto Generation and Geometric Correction
The high-definition photos from the multirotor drone are processed using PIX4D Mapper to create an initial orthophoto. This software performs automated aerial triangulation and point cloud generation, producing a high-resolution orthomosaic. However, due to the absence of ground control points, the initial product may have positional errors. We applied geometric correction in ArcGIS using a reference image (e.g., Google Earth imagery) to align the orthophoto accurately. The correction process uses a Spline transformation, which minimizes local distortions by fitting a smooth surface through control points.
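The exact spline implementation is internal to ArcGIS, but the underlying idea can be illustrated with SciPy's thin-plate-spline interpolator; in this sketch the control-point coordinates are hypothetical placeholders, not values from our survey.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points: pixel coordinates in the uncorrected orthophoto
# and their matching ground coordinates from the reference image.
src = np.array([[120.0, 80.0], [950.0, 110.0], [900.0, 870.0],
                [100.0, 900.0], [500.0, 480.0]])
dst = np.array([[402100.5, 2501200.2], [402125.8, 2501199.0],
                [402124.1, 2501176.3], [402099.9, 2501175.0],
                [402112.4, 2501188.1]])

# A thin-plate spline passes exactly through the control points and
# varies smoothly between them, which is why it suppresses local distortion.
warp_x = RBFInterpolator(src, dst[:, 0], kernel="thin_plate_spline")
warp_y = RBFInterpolator(src, dst[:, 1], kernel="thin_plate_spline")

query = np.array([[500.0, 500.0]])   # any pixel location
print(warp_x(query), warp_y(query))  # its estimated ground coordinates
```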
The geometric correction error is quantified using the root mean square error (RMSE) for control points. The formula for RMSE is given by:
$$ RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left[ (x_i - x_i')^2 + (y_i - y_i')^2 \right]} $$
where \( n \) is the number of control points, \( (x_i, y_i) \) are the coordinates in the reference image, and \( (x_i', y_i') \) are the coordinates in the orthophoto. For our multirotor drone data, the RMSE was calculated to be 0.00068 m, indicating high precision. This corrected orthophoto, with a resolution of 0.03 m, serves as the basis for hyperspectral data alignment.
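The RMSE computation itself is easy to reproduce; the sketch below uses hypothetical control-point pairs with residuals on the order of a millimetre, comparable to the value reported above.

```python
import numpy as np

def rmse(ref: np.ndarray, warped: np.ndarray) -> float:
    """Root mean square positional error between reference coordinates (x, y)
    and the corresponding coordinates in the corrected orthophoto."""
    d = ref - warped
    return float(np.sqrt(np.mean(np.sum(d**2, axis=1))))

# Hypothetical control points with sub-millimetre residuals.
ref    = np.array([[10.0000, 20.0000], [30.0000, 40.0000], [50.0000, 60.0000]])
warped = np.array([[10.0005, 19.9996], [29.9994, 40.0004], [50.0006, 59.9995]])
print(f"RMSE = {rmse(ref, warped):.5f} m")
```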
Hyperspectral Strip Clipping and Correction
The raw hyperspectral strips from the multirotor drone often exhibit edge distortions and lack georeferencing. We used ENVI software to clip the valid regions of each strip, removing areas with significant noise. The clipping process involves defining regions of interest (ROIs) based on visual inspection and spectral consistency. After clipping, each strip is geometrically corrected using the orthophoto as a reference. This step is essential for aligning the multirotor drone-acquired hyperspectral data with the spatial context.
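ENVI performs the clipping interactively, but the operation amounts to cropping the data cube to its valid-pixel window; a minimal sketch follows, with the margin widths chosen as hypothetical examples.

```python
import numpy as np

def clip_strip(cube: np.ndarray, left: int, right: int, top: int, bottom: int) -> np.ndarray:
    """Crop a (rows, cols, bands) hyperspectral strip, discarding distorted margins."""
    rows, cols, _ = cube.shape
    return cube[top:rows - bottom, left:cols - right, :]

strip = np.random.rand(1000, 640, 256).astype(np.float32)  # synthetic strip
clipped = clip_strip(strip, left=25, right=25, top=10, bottom=10)
print(clipped.shape)  # (980, 590, 256)
```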
The geometric correction for hyperspectral strips employs a similar Spline transformation, with multiple control points selected uniformly across each strip. The error for each strip is computed, and the results are summarized in Table 2. The multirotor drone’s stability during flight ensures that the strips maintain consistent geometry, reducing correction errors. The average error across all strips was 0.0058 m, which is below the spatial resolution of 0.1 m, confirming the effectiveness of our method.
| Strip ID | Total Error (m) | Error Rank (1 = largest) |
|---|---|---|
| 1 | 0.0053 | 5 |
| 2 | 0.0047 | 6 |
| 3 | 0.0082 | 3 |
| 4 | 0.0075 | 4 |
| 5 | 0.0001 | 8 |
| 6 | 0.0030 | 7 |
| 7 | 0.0084 | 2 |
| 8 | 0.0095 | 1 |
The overall correction process can be modeled using an affine transformation equation, which relates the image coordinates \( (u, v) \) to the ground coordinates \( (x, y) \):
$$ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} + \begin{bmatrix} e \\ f \end{bmatrix} $$
where the coefficients \( a, b, c, d, e, f \) are derived from the control points. For the multirotor drone data, this transformation ensures that each hyperspectral pixel is accurately positioned, facilitating subsequent mosaicking.
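The six coefficients can be estimated from the control points by linear least squares; the sketch below illustrates this with hypothetical image-to-ground point pairs.

```python
import numpy as np

def fit_affine(uv: np.ndarray, xy: np.ndarray) -> np.ndarray:
    """Solve for the affine map [a, b, e; c, d, f] from image (u, v) to ground (x, y)."""
    n = uv.shape[0]
    A = np.hstack([uv, np.ones((n, 1))])       # rows: [u_i, v_i, 1]
    coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)
    return coeffs.T                             # shape (2, 3): [[a, b, e], [c, d, f]]

# Hypothetical control points (at least three, not collinear).
uv = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
xy = np.array([[500.0, 800.0], [510.0, 800.5], [499.5, 790.0], [509.5, 790.5]])
M = fit_affine(uv, xy)
print(M @ np.array([50.0, 50.0, 1.0]))          # ground coordinates of pixel (50, 50)
```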
Mosaicking and Spectral Conversion
After geometric correction, the individual hyperspectral strips from the multirotor drone are mosaicked using ENVI’s Seamless Mosaic tool. This process blends the strips along their edges to create a continuous hyperspectral image. The mosaicking algorithm accounts for radiometric differences between strips, ensuring a homogeneous output. The multirotor drone’s overlapping flight patterns are crucial here, as they provide sufficient data for seamless integration.
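ENVI's Seamless Mosaic performs the blending internally; purely as an illustration of the edge-blending idea, the following sketch applies linear feathering across the overlap of two synthetic strips.

```python
import numpy as np

def feather_blend(strip_a: np.ndarray, strip_b: np.ndarray, overlap: int) -> np.ndarray:
    """Blend two (rows, cols, bands) strips that share `overlap` columns,
    ramping weights linearly from strip_a to strip_b across the seam."""
    w = np.linspace(1.0, 0.0, overlap)[None, :, None]  # left strip's weight
    seam = strip_a[:, -overlap:, :] * w + strip_b[:, :overlap, :] * (1.0 - w)
    return np.concatenate([strip_a[:, :-overlap, :], seam, strip_b[:, overlap:, :]], axis=1)

a = np.random.rand(100, 200, 256).astype(np.float32)
b = np.random.rand(100, 200, 256).astype(np.float32)
mosaic = feather_blend(a, b, overlap=40)
print(mosaic.shape)  # (100, 360, 256)
```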
Following mosaicking, spectral conversion is performed to compute reflectance values. The conversion uses the radiometric calibration data collected during the multirotor drone flight, including measurements from a reflectance panel. The fundamental equation for reflectance calculation is:
$$ R = \frac{DN_{\text{target}}}{DN_{\text{panel}}} \times R_{\text{panel}} $$
where \( R \) is the reflectance of the target, \( DN_{\text{target}} \) is the digital number from the hyperspectral image, \( DN_{\text{panel}} \) is the digital number from the reflectance panel, and \( R_{\text{panel}} \) is the known reflectance of the panel. This equation is applied to each pixel in the mosaicked image using ENVI’s Spectral Math functionality, resulting in a hyperspectral reflectance product that can be used for quantitative analysis.
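Applied per pixel and per band, the conversion reduces to a ratio; the sketch below assumes a panel reflectance of 0.95 and a panel region in the image corner, both hypothetical.

```python
import numpy as np

def to_reflectance(dn_cube: np.ndarray, dn_panel: np.ndarray, r_panel: float) -> np.ndarray:
    """R = DN_target / DN_panel * R_panel, band by band.
    dn_cube: (rows, cols, bands); dn_panel: (bands,) mean DN over the panel ROI."""
    return dn_cube / dn_panel[None, None, :] * r_panel

cube = np.random.randint(100, 4000, size=(100, 100, 256)).astype(np.float64)
panel_dn = cube[:10, :10, :].mean(axis=(0, 1))              # hypothetical panel ROI
reflectance = to_reflectance(cube, panel_dn, r_panel=0.95)  # assuming a 95% panel
print(reflectance.shape, reflectance.dtype)
```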
Results and Analysis
The application of our processing method to the multirotor drone-acquired data yielded high-quality hyperspectral imagery with minimal geometric and radiometric errors. The orthophoto produced from the multirotor drone’s high-definition photos served as an accurate base for hyperspectral correction, with an RMSE of 0.00068 m. The hyperspectral strips, after clipping and correction, showed an average geometric error of 0.0058 m, which is significantly lower than the spatial resolution, validating the precision of the multirotor drone-based approach.
The mosaicked hyperspectral image covered the entire study area seamlessly, with no visible seams or misalignments. Spectral reflectance values were extracted for various features, such as citrus canopies, and compared with ground measurements to verify accuracy. The reflectance spectra exhibited consistent patterns across different regions, demonstrating the reliability of the multirotor drone system for hyperspectral remote sensing.
To further illustrate the results, we analyzed the spectral reflectance of citrus canopies at different wavelengths. The data can be summarized using a table of average reflectance values for key bands, as shown in Table 3. These values are derived from the processed hyperspectral image and highlight the multirotor drone’s capability to capture detailed spectral information.
| Wavelength (nm) | Average Reflectance | Standard Deviation |
|---|---|---|
| 450 | 0.12 | 0.02 |
| 550 | 0.25 | 0.03 |
| 650 | 0.18 | 0.02 |
| 750 | 0.35 | 0.04 |
| 850 | 0.40 | 0.05 |
The effectiveness of the multirotor drone in hyperspectral data acquisition is also evident in the ability to detect subtle spectral variations. For instance, the normalized difference vegetation index (NDVI) can be computed from the reflectance data using the formula:
$$ NDVI = \frac{R_{nir} - R_{red}}{R_{nir} + R_{red}} $$
where \( R_{nir} \) is the reflectance in the near-infrared band (e.g., 850 nm) and \( R_{red} \) is the reflectance in the red band (e.g., 650 nm). The multirotor drone’s high spatial resolution allows for detailed NDVI maps, which are useful for assessing plant health.
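Given the reflectance cube, NDVI is a two-band ratio; in the sketch below the band indices are hypothetical, derived by assuming the 256 bands start near 400 nm with the 2.3 nm spacing quoted earlier.

```python
import numpy as np

def ndvi(reflectance: np.ndarray, nir_band: int, red_band: int) -> np.ndarray:
    """NDVI = (R_nir - R_red) / (R_nir + R_red), computed per pixel."""
    nir = reflectance[:, :, nir_band].astype(np.float64)
    red = reflectance[:, :, red_band].astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-12, None)  # guard against division by zero

refl = np.random.rand(100, 100, 256)
# Hypothetical indices: with 256 bands at ~2.3 nm spacing starting near 400 nm,
# 650 nm and 850 nm fall roughly at bands 109 and 196.
ndvi_map = ndvi(refl, nir_band=196, red_band=109)
print(ndvi_map.min(), ndvi_map.max())
```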
Conclusion
This study presents a robust methodology for processing hyperspectral imagery from the IspecHyper multirotor drone system. By integrating software tools like PIX4D Mapper, ENVI, and ArcGIS, we addressed key challenges such as geometric distortions, missing coordinates, and stitching issues. The multirotor drone platform proved instrumental in acquiring high-quality data, and our processing chain ensured the production of accurate hyperspectral reflectance products. The geometric correction errors were consistently low, with an average of 0.0058 m for hyperspectral strips, demonstrating the method’s reliability.
The use of multirotor drones for hyperspectral remote sensing offers significant advantages in terms of flexibility, cost-effectiveness, and resolution. Our approach can be adapted to various applications, including precision agriculture, environmental monitoring, and disaster management. Future work could focus on automating the processing steps further and incorporating machine learning for enhanced analysis. Overall, this research contributes to the advancement of multirotor drone-based hyperspectral imaging and provides a practical framework for researchers and practitioners.
