Extraction of Tea Garden Gaps from Visible UAV Imagery Under Varying Illumination Conditions

As researchers in precision agriculture, our team is constantly seeking efficient and cost-effective methods for detailed crop monitoring. In tea cultivation, accurately distinguishing the productive tea canopy from non-productive gaps (including soil, grass, and paths) is crucial for precise yield estimation, resource management, and operational planning. However, achieving high-precision extraction of these gaps in complex mountainous tea gardens, where terrain and varying solar angles create challenging illumination conditions, remains a significant problem. Traditional methods often rely on expensive multispectral or radar data, while common vegetation indices applied to standard visible-light imagery struggle with spectral confusion under different lighting. This study addresses this gap by proposing a novel spectral-difference enhancement algorithm that uses only standard RGB (Red, Green, Blue) imagery captured by UAVs at different times of the day.

The core challenge lies in the changing spectral signatures of ground objects. At noon, with near-vertical solar radiation, shadows are minimal and reflectance values are generally higher. In the afternoon, slanted sunlight creates pronounced shadows within tea rows, darkening the spectra of gap features. Our objective was to develop a robust method that could reliably separate tea plants from gap features under both conditions, using only the ubiquitous, low-cost visible-light sensors on commercial UAVs.

Our study was conducted in a representative hilly tea garden region. The terrain features significant slopes with terraced planting patterns. We deployed a standard consumer-grade UAV equipped with an RGB camera. Flights were conducted at two distinct times: solar noon and mid-afternoon, ensuring capture of the two primary illumination regimes. The acquired overlapping images were processed using standard photogrammetry software to generate ultra-high-resolution orthomosaics (ground sampling distance ~2 cm) for both time slots. These orthomosaics were then cropped and merged into a single composite image for analysis, visually segmented by a clear illumination boundary.

The foundational step of our method was a detailed spectral profile analysis of key land cover types: tea canopy, bare soil (and paths), and grass. Using profile line tools, we extracted the digital number (DN) values across the R, G, and B bands for each target in both the noon and afternoon images. Let $b_1$, $b_2$, and $b_3$ represent the DN values for the R, G, and B bands, respectively. The analysis revealed distinct and consistent patterns.

| Land Cover Type | Spectral Signature (Noon) | Spectral Signature (Afternoon) | Key Characteristic |
|---|---|---|---|
| Tea canopy | $b_1 \approx b_2$, both $\gg b_3$ | $b_1 \approx b_2$, both $\gg b_3$ (lower overall DN) | R and G are nearly equal and significantly greater than B. |
| Bare soil / paths | $b_1 > b_2 > b_3$; $(b_1 - b_2) \approx (b_2 - b_3)$ | $b_1 > b_2 > b_3$; $(b_1 - b_2) \approx (b_2 - b_3)$ (lower overall DN) | Monotonic decrease from R to B with roughly equal intervals. |
| Grass | $b_2 > b_1 > b_3$; $(b_2 - b_1) < (b_1 - b_3)$ | $b_2 > b_1 > b_3$; $(b_2 - b_1) \approx (b_1 - b_3)$ (lower overall DN) | G is highest. At noon, R is closer to G; in the afternoon, the intervals are more equal. |

This analysis showed that while absolute reflectance values changed with illumination, the inherent relational patterns between bands were preserved. The most critical finding was the stability of the tea canopy's signature: the proximity of the R and G values ($b_1 \approx b_2$) and their large separation from the B value. In contrast, gap features exhibited a more ordered, stepwise decrease across the bands. However, some grass areas under afternoon shadow could superficially resemble tea due to lowered overall reflectance, though the difference $\min(b_1, b_2) - b_3$ was smaller.
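These qualitative band relations can be checked per pixel. The sketch below encodes them as simple rules; the DN triples and the tolerance `tol` are hypothetical illustrations, not values from the study.

```python
def band_pattern(b1, b2, b3, tol=10):
    """Classify a pixel's qualitative band relation (hypothetical DN tolerance)."""
    if abs(b1 - b2) <= tol and min(b1, b2) - b3 > 2 * tol:
        return "tea-like"      # R ≈ G, both well above B
    if b1 > b2 > b3:
        return "soil-like"     # monotonic decrease R > G > B
    if b2 > b1 > b3:
        return "grass-like"    # G highest
    return "other"

# Hypothetical noon DN triples (R, G, B):
print(band_pattern(120, 118, 60))   # tea-like
print(band_pattern(150, 120, 90))   # soil-like
print(band_pattern(110, 140, 70))   # grass-like
```

The ordering tests mirror the table above; in practice a tolerance would need tuning per scene.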

Based on this, we designed a Spectral Difference Enhancement (SDE) algorithm. The logic was to mathematically amplify the unique signature of tea while suppressing that of gaps. The algorithm is implemented in two sequential band calculation steps.

Step 1: Primary Enhancement (T1)
This step amplifies pixels where R and G are similar and their minimum is much larger than B.
$$ T1 = \frac{\min(b_1, b_2) - b_3}{|b_1 - b_2| + k} $$
where $k$ is a very small constant (e.g., 0.001) that prevents division by zero. For tea pixels, where $b_1 \approx b_2$, the denominator $|b_1 - b_2|$ is small, making $T1$ large. For gap pixels, where $|b_1 - b_2|$ is significant, $T1$ is suppressed.
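A minimal NumPy sketch of this step, evaluated on two hypothetical DN triples (the values are illustrative, not from the study's imagery):

```python
import numpy as np

def t1_index(b1, b2, b3, k=0.001):
    """Primary enhancement: T1 = (min(b1, b2) - b3) / (|b1 - b2| + k)."""
    b1, b2, b3 = (np.asarray(b, dtype=np.float64) for b in (b1, b2, b3))
    return (np.minimum(b1, b2) - b3) / (np.abs(b1 - b2) + k)

# Hypothetical DN values (R, G, B):
tea  = t1_index(120, 118, 60)   # small |R - G|, large min(R, G) - B -> large T1
soil = t1_index(150, 120, 90)   # large |R - G| -> suppressed T1
```

Here the tea-like pixel scores roughly 29 while the soil-like pixel scores about 1, illustrating the amplification effect of the small denominator.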

Step 2: Secondary Separation (T2)
This step specifically targets gap features, such as grass, that may have passed Step 1 with a small $|b_1 - b_2|$ but whose $\min(b_1, b_2) - b_3$ is not as large as tea's.
$$ T2 = \min(b_1, b_2) - b_3 $$
Tea yields a high $T2$ value, whereas the confusable grass yields a moderately lower one.
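A quick numeric illustration of this separation, using hypothetical shadowed-afternoon DN values (not measurements from the study):

```python
# T2 = min(b1, b2) - b3
tea_t2   = min(80, 78) - 30   # large R/G-to-B separation
grass_t2 = min(70, 72) - 45   # smaller separation flags the grass pixel
print(tea_t2, grass_t2)       # 48 25
```

Both pixels have small $|b_1 - b_2|$ and would pass Step 1, but $T2$ still tells them apart.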

Step 3: Final Index (T) and Thresholding
The final index is computed by multiplying $T1$ and $T2$, further widening the separability gap between the two classes.
$$ T = T1 \cdot T2 $$
Empirical analysis on our composite image determined optimal threshold values. First, $T1 > 3.725$ was used to create an initial mask. Pixels below this were set to zero. Then, for pixels within this mask, the final threshold $T > 235.882$ robustly identified tea canopy pixels. All other pixels were classified as “garden gaps”.
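The full two-step index with the study's thresholds can be sketched as follows. This is an illustrative re-implementation, not the authors' code; the helper name `extract_tea_mask` and the sample image are assumptions.

```python
import numpy as np

T1_THRESH = 3.725     # thresholds reported for the study's composite image
T_THRESH  = 235.882

def extract_tea_mask(rgb, k=0.001):
    """Boolean tea-canopy mask from an (H, W, 3) RGB array via the SDE index."""
    b1, b2, b3 = (rgb[..., i].astype(np.float64) for i in range(3))
    t2 = np.minimum(b1, b2) - b3          # Step 2 band
    t1 = t2 / (np.abs(b1 - b2) + k)       # Step 1 band
    t1 = np.where(t1 > T1_THRESH, t1, 0.0)  # initial mask: zero out low-T1 pixels
    t = t1 * t2                           # final index T = T1 * T2
    return t > T_THRESH                   # True = tea canopy, False = garden gap

# Hypothetical 1x2 image: one tea-like pixel, one soil-like pixel
img = np.array([[[120, 118, 60], [150, 120, 90]]], dtype=np.uint8)
mask = extract_tea_mask(img)   # -> [[ True, False]]
```

Note that the two thresholds were tuned empirically on this particular composite image; other scenes would likely require re-tuning or an adaptive scheme.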

The performance of our method was rigorously validated. We generated 100 random sample points across the composite image and assigned ground truth labels via visual interpretation. These were compared against the classification result to build a confusion matrix.

Confusion Matrix for Tea Canopy Extraction

| | Classified as Gap | Classified as Tea | Total (Truth) |
|---|---|---|---|
| Truth: Gap | 31 | 5 | 36 |
| Truth: Tea | 2 | 62 | 64 |
| Total (Classified) | 33 | 67 | 100 |

From this matrix, standard accuracy metrics were calculated:
$$ \text{Overall Accuracy (OA)} = \frac{31 + 62}{100} = 0.93 \text{ or } 93\% $$
$$ \text{Kappa Coefficient } (\kappa) = \frac{N \sum_{i=1}^{2} x_{ii} - \sum_{i=1}^{2} (x_{i+} \cdot x_{+i})}{N^2 - \sum_{i=1}^{2} (x_{i+} \cdot x_{+i})} = \frac{100 \times 93 - (33 \times 36 + 67 \times 64)}{100^2 - (33 \times 36 + 67 \times 64)} \approx 0.8453 $$
where $x_{ii}$ are the diagonal elements, $x_{i+}$ are row totals, and $x_{+i}$ are column totals of the confusion matrix. These results demonstrate a high level of accuracy in separating tea from gaps under varying illumination.
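The two metrics follow directly from the matrix above; a short check of the arithmetic:

```python
# Confusion matrix: rows = truth (gap, tea), columns = classified (gap, tea)
cm = [[31, 5],
      [2, 62]]
n = sum(sum(row) for row in cm)                        # 100 sample points
oa = (cm[0][0] + cm[1][1]) / n                         # overall accuracy
row_tot = [sum(row) for row in cm]                     # truth totals: 36, 64
col_tot = [cm[0][j] + cm[1][j] for j in range(2)]      # classified totals: 33, 67
chance = sum(r * c for r, c in zip(row_tot, col_tot))  # sum of x_{i+} * x_{+i}
kappa = (n * (cm[0][0] + cm[1][1]) - chance) / (n**2 - chance)
print(round(oa, 2), round(kappa, 4))   # 0.93 0.8453
```

This reproduces the reported OA of 93% and Kappa of about 0.8453.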

In conclusion, our research presents a practical and effective solution for a persistent problem in precision tea agriculture. The proposed SDE algorithm capitalizes on the fundamental spectral properties of tea canopies observable in standard visible light. It successfully overcomes the limitations imposed by varying sun angles in mountainous terrain, a common challenge when working with UAV data. The method's reliance on RGB imagery alone makes it highly accessible and cost-effective for widespread adoption, requiring only strategically timed flights. The output directly enables accurate calculation of the actual productive (harvestable) area within a tea garden, which is vital for yield prediction, fertilizer and pesticide application planning, and general farm management efficiency. Future work will focus on testing the robustness of the algorithm across different geographical regions, tea varieties, and seasons, and on integrating adaptive thresholding techniques to further automate the process for large-scale applications managed by UAV fleets.
