Quaternion SRCKF-SLAM Algorithm for Crop Spraying Drone Attitude Estimation

In precision agriculture, crop spraying drones play a critical role in optimizing pesticide distribution and minimizing environmental impact. Accurate attitude estimation is essential for stable flight and effective spraying operations. Traditional nonlinear state models using quaternions often face challenges in numerical stability and computational complexity. This paper proposes a novel quaternion-based square root cubature Kalman filter (QSRCKF) integrated with simultaneous localization and mapping (SLAM) to enhance the attitude estimation of spraying UAVs. The algorithm addresses the normalization issues of conventional quaternion filtering and reduces state dimensionality and computational complexity relative to methods such as the square root unscented Kalman filter (SRUKF). By leveraging SINS/SLAM combined navigation, the approach improves positioning accuracy and robustness in dynamic agricultural environments.

The attitude of a crop spraying drone is typically described using quaternions to avoid singularities associated with Euler angles. Let the local navigation frame be denoted as $ox_n y_n z_n$ (n-frame) and the body frame as $ox_b y_b z_b$ (b-frame). The rotation matrix from the n-frame to the b-frame, based on the rotation sequence $z \rightarrow x \rightarrow y$ with yaw ($\gamma$), roll ($\alpha$), and pitch ($\beta$) angles, is given by:

$$C_b^n = ABC = \begin{bmatrix}
C_{11} & C_{12} & -\cos \alpha \sin \beta \\
-\cos \alpha \sin \gamma & \cos \alpha \cos \gamma & \sin \alpha \\
C_{31} & C_{32} & \cos \alpha \cos \beta
\end{bmatrix},$$

where the submatrices are defined as:

$$A = \begin{bmatrix}
\cos \beta & 0 & -\sin \beta \\
0 & 1 & 0 \\
\sin \beta & 0 & \cos \beta
\end{bmatrix}, \quad B = \begin{bmatrix}
1 & 0 & 0 \\
0 & \cos \alpha & \sin \alpha \\
0 & -\sin \alpha & \cos \alpha
\end{bmatrix}, \quad C = \begin{bmatrix}
\cos \gamma & \sin \gamma & 0 \\
-\sin \gamma & \cos \gamma & 0 \\
0 & 0 & 1
\end{bmatrix},$$

and the elements $C_{11}$, $C_{12}$, $C_{31}$, $C_{32}$ are computed as:

$$C_{11} = \cos \beta \cos \gamma - \sin \alpha \sin \beta \sin \gamma, \quad C_{12} = \cos \beta \sin \gamma + \sin \alpha \sin \beta \cos \gamma,$$
$$C_{31} = \sin \beta \cos \gamma + \sin \alpha \cos \beta \sin \gamma, \quad C_{32} = \sin \beta \sin \gamma - \sin \alpha \cos \beta \cos \gamma.$$
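As a numerical sanity check, the product $ABC$ can be formed directly and compared against the closed-form elements above. A minimal NumPy sketch (the angle values are arbitrary):

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Build C_b^n = A @ B @ C for the z -> x -> y sequence
    (yaw gamma, roll alpha, pitch beta), as defined above."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    A = np.array([[cb, 0, -sb], [0, 1, 0], [sb, 0, cb]])   # pitch about y
    B = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])   # roll about x
    C = np.array([[cg, sg, 0], [-sg, cg, 0], [0, 0, 1]])   # yaw about z
    return A @ B @ C

alpha, beta, gamma = 0.3, -0.2, 0.7
M = rotation_matrix(alpha, beta, gamma)
ca, sa = np.cos(alpha), np.sin(alpha)
cb, sb = np.cos(beta), np.sin(beta)
cg, sg = np.cos(gamma), np.sin(gamma)
# The closed-form elements agree with the matrix product.
assert np.isclose(M[0, 0], cb * cg - sa * sb * sg)   # C11
assert np.isclose(M[0, 1], cb * sg + sa * sb * cg)   # C12
assert np.isclose(M[2, 0], sb * cg + sa * cb * sg)   # C31
assert np.isclose(M[2, 1], sb * sg - sa * cb * cg)   # C32
```

Since each factor is a pure rotation, the product is orthonormal with unit determinant, which makes a convenient check in unit tests.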

The attitude quaternion is defined as $\mathbf{q} = [q_1, q_2, q_3, q_4]^T$, satisfying the normalization condition $\mathbf{q}^T \mathbf{q} = q_1^2 + q_2^2 + q_3^2 + q_4^2 = 1$. The rotation matrix in terms of quaternions is expressed as:

$$C_b^n = \begin{bmatrix}
c_{11} & c_{12} & c_{13} \\
c_{21} & c_{22} & c_{23} \\
c_{31} & c_{32} & c_{33}
\end{bmatrix},$$

where the elements are derived from quaternion components:

$$c_{11} = q_1^2 + q_2^2 - q_3^2 - q_4^2, \quad c_{12} = 2(q_2 q_3 + q_1 q_4), \quad c_{13} = 2(q_2 q_4 - q_1 q_3),$$
$$c_{21} = 2(q_2 q_3 - q_1 q_4), \quad c_{22} = q_1^2 + q_3^2 - q_2^2 - q_4^2, \quad c_{23} = 2(q_1 q_2 + q_3 q_4),$$
$$c_{31} = 2(q_1 q_3 + q_4 q_2), \quad c_{32} = 2(q_3 q_4 - q_1 q_2), \quad c_{33} = q_1^2 + q_4^2 - q_2^2 - q_3^2.$$
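The quaternion-to-DCM mapping above translates directly into code. A short sketch that also verifies the resulting matrix is a proper rotation (the quaternion values are illustrative):

```python
import numpy as np

def quat_to_dcm(q):
    """Direction cosine matrix from a unit quaternion q = [q1, q2, q3, q4]
    (scalar-first), using the elements c_ij given above."""
    q1, q2, q3, q4 = q
    return np.array([
        [q1**2 + q2**2 - q3**2 - q4**2, 2*(q2*q3 + q1*q4), 2*(q2*q4 - q1*q3)],
        [2*(q2*q3 - q1*q4), q1**2 + q3**2 - q2**2 - q4**2, 2*(q1*q2 + q3*q4)],
        [2*(q1*q3 + q4*q2), 2*(q3*q4 - q1*q2), q1**2 + q4**2 - q2**2 - q3**2],
    ])

q = np.array([0.9, 0.2, -0.3, 0.1])
q /= np.linalg.norm(q)                   # enforce the unit-norm constraint
C = quat_to_dcm(q)
assert np.allclose(C @ C.T, np.eye(3))   # orthonormal
assert np.isclose(np.linalg.det(C), 1.0) # proper rotation
```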

The Euler angles are then computed from the quaternion as:

$$\alpha = \operatorname{arctan2}\left[2(q_1 q_2 + q_3 q_4), \alpha_1\right], \quad \beta = \arctan\left[\frac{2(q_2 q_4 - q_1 q_3)}{\beta_1}\right], \quad \gamma = \operatorname{arctan2}\left[2(q_2 q_3 - q_1 q_4), q_1^2 + q_2^2 - q_3^2 - q_4^2\right],$$

with $\alpha_1 = \sqrt{4(q_2 q_4 - q_1 q_3)^2 + (q_1^2 + q_4^2 - q_2^2 - q_3^2)^2}$ and $\beta_1 = \sqrt{4(q_1 q_2 + q_3 q_4)^2 + (q_1^2 + q_4^2 - q_2^2 - q_3^2)^2}$.
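A sketch of the Euler-angle extraction using NumPy's `arctan2`; the identity quaternion should yield zero attitude angles, and a pure-roll quaternion should recover its roll angle:

```python
import numpy as np

def quat_to_euler(q):
    """Roll, pitch, yaw from a scalar-first unit quaternion,
    following the formulas above."""
    q1, q2, q3, q4 = q
    a1 = np.sqrt(4*(q2*q4 - q1*q3)**2 + (q1**2 + q4**2 - q2**2 - q3**2)**2)
    b1 = np.sqrt(4*(q1*q2 + q3*q4)**2 + (q1**2 + q4**2 - q2**2 - q3**2)**2)
    alpha = np.arctan2(2*(q1*q2 + q3*q4), a1)                  # roll
    beta = np.arctan(2*(q2*q4 - q1*q3) / b1)                   # pitch
    gamma = np.arctan2(2*(q2*q3 - q1*q4),
                       q1**2 + q2**2 - q3**2 - q4**2)          # yaw
    return alpha, beta, gamma

# Identity quaternion: all angles zero.
assert np.allclose(quat_to_euler(np.array([1.0, 0.0, 0.0, 0.0])), 0.0)

# Pure roll of 0.2 rad: q = [cos(0.1), sin(0.1), 0, 0].
a, b, g = quat_to_euler(np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0]))
assert np.isclose(a, 0.2) and np.isclose(b, 0.0) and np.isclose(g, 0.0)
```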

The gyroscope model for the crop spraying drone relates the body rate $\boldsymbol{\omega}_{nb}^b$ to the angular velocity $\boldsymbol{\omega}_{in}^b$ of the navigation frame relative to the inertial frame. The measured angular velocity $\boldsymbol{\omega}_{ib}^b$ is given by:

$$\boldsymbol{\omega}_{ib}^b = \boldsymbol{\omega}_{nb}^b + \boldsymbol{\omega}_{in}^b = \boldsymbol{\omega}_{nb}^b + C_b^n \boldsymbol{\omega}_{in}^n,$$

where $\boldsymbol{\omega}_{in}^n$ is that angular velocity expressed in the n-frame. The quaternion differential equation is:

$$\dot{\mathbf{q}} = \frac{1}{2} \boldsymbol{\Omega}(\boldsymbol{\omega}_{nb}^b) \mathbf{q},$$

with the skew-symmetric matrix:

$$\boldsymbol{\Omega}(\boldsymbol{\omega}_{nb}^b) = \begin{bmatrix}
0 & -\omega_x & -\omega_y & -\omega_z \\
\omega_x & 0 & \omega_z & -\omega_y \\
\omega_y & -\omega_z & 0 & \omega_x \\
\omega_z & \omega_y & -\omega_x & 0
\end{bmatrix}.$$

The discrete-time state model for the gyroscope is:

$$\mathbf{q}_k = \mathbf{q}_{k-1} + \frac{\Delta t}{2} \boldsymbol{\Omega}_{k|k-1} \mathbf{q}_{k-1},$$

where $\Delta t$ is the sampling period. The gyro output model includes bias and noise:

$$\boldsymbol{\omega}_g = \hat{\boldsymbol{\omega}} - \mathbf{b} - \boldsymbol{\xi}_a, \quad \dot{\mathbf{b}} = \boldsymbol{\xi}_b,$$

with $\boldsymbol{\xi}_a \sim \mathcal{N}(0, \sigma_a^2)$ and $\boldsymbol{\xi}_b \sim \mathcal{N}(0, \sigma_b^2)$ being Gaussian white noise.
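The discrete propagation step can be sketched as follows; renormalizing after each step keeps the unit-quaternion constraint despite the first-order truncation (the sampling rate and rate vector are illustrative):

```python
import numpy as np

def omega_matrix(w):
    """Skew-symmetric 4x4 matrix Omega(w) for body rate w = [wx, wy, wz]."""
    wx, wy, wz = w
    return np.array([
        [0.0, -wx, -wy, -wz],
        [ wx, 0.0,  wz, -wy],
        [ wy, -wz, 0.0,  wx],
        [ wz,  wy, -wx, 0.0],
    ])

def propagate(q, w, dt):
    """First-order discrete update q_k = (I + dt/2 * Omega) q_{k-1},
    renormalized to maintain the unit-quaternion constraint."""
    q_new = q + 0.5 * dt * omega_matrix(w) @ q
    return q_new / np.linalg.norm(q_new)

# Integrate a constant 0.1 rad/s rotation about the body z-axis for 1 s.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = propagate(q, np.array([0.0, 0.0, 0.1]), 1e-3)
assert np.isclose(np.linalg.norm(q), 1.0)    # norm preserved by design
```

The accumulated rotation angle about z, $2\,\operatorname{arctan2}(q_4, q_1)$, comes out close to the true 0.1 rad, confirming the first-order model is adequate at this step size.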

The star sensor measurement model is:

$$\mathbf{z}_k = H(\mathbf{q}_k) \mathbf{r} + \mathbf{v}_k,$$

where $H(\mathbf{q}_k)$ is the measurement matrix, $\mathbf{r}$ is the reference vector, and $\mathbf{v}_k$ is zero-mean Gaussian noise.
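As an illustration, the measurement can be simulated by rotating the reference vector through the attitude matrix and adding noise. Taking $H(\mathbf{q}_k)$ to be the quaternion DCM is an assumption here, since the exact form of the measurement matrix is not spelled out:

```python
import numpy as np

def quat_to_dcm(q):
    """Rotation matrix from a scalar-first unit quaternion."""
    q1, q2, q3, q4 = q
    return np.array([
        [q1**2 + q2**2 - q3**2 - q4**2, 2*(q2*q3 + q1*q4), 2*(q2*q4 - q1*q3)],
        [2*(q2*q3 - q1*q4), q1**2 + q3**2 - q2**2 - q4**2, 2*(q1*q2 + q3*q4)],
        [2*(q1*q3 + q4*q2), 2*(q3*q4 - q1*q2), q1**2 + q4**2 - q2**2 - q3**2],
    ])

def star_measurement(q, r, sigma, rng):
    """Simulated measurement z = H(q) r + v; H(q) is taken to be the
    attitude matrix (an assumption) and v is zero-mean Gaussian noise."""
    return quat_to_dcm(q) @ r + rng.normal(0.0, sigma, size=3)

rng = np.random.default_rng(0)
r = np.array([0.0, 0.0, 1.0])               # illustrative reference vector
z = star_measurement(np.array([1.0, 0.0, 0.0, 0.0]), r, 0.0, rng)
assert np.allclose(z, r)   # identity attitude, zero noise: z equals r
```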

The proposed QSRCKF-SLAM algorithm integrates the square root cubature Kalman filter with quaternion states. The system state vector includes attitude quaternion errors and gyro drift errors:

$$\delta \mathbf{x}_{k-1} = [\delta \mathbf{x}_{q,k-1}^T, \delta \mathbf{x}_{b,k-1}^T]^T.$$

Let $\mathbf{S}_{k-1}$ denote the square root of the error covariance matrix, so that $\mathbf{P}_{k-1} = \mathbf{S}_{k-1} \mathbf{S}_{k-1}^T$. The cubature points are generated as:

$$\mathbf{x}_{i,k-1} = \mathbf{S}_{k-1} \boldsymbol{\varepsilon}_i + \hat{\mathbf{x}}_{k-1},$$

with $\boldsymbol{\varepsilon}_i = \sqrt{n}\,[\mathbf{1}]_i$, $i = 1, 2, \dots, 2n$, where $[\mathbf{1}]_i$ denotes the $i$-th column of the generator set $[\mathbf{I}_n, -\mathbf{I}_n]$, and equal weights $\tau_i = \frac{1}{2n}$.
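A sketch of cubature-point generation; by construction, the equal-weight sample mean and covariance reproduce $\hat{\mathbf{x}}_{k-1}$ and $\mathbf{S}_{k-1}\mathbf{S}_{k-1}^T$ exactly, which the assertions check (the state dimension and factor are illustrative):

```python
import numpy as np

def cubature_points(x_hat, S):
    """2n cubature points x_i = S @ eps_i + x_hat, where the eps_i are
    the columns of sqrt(n) * [I, -I]; each point has weight 1/(2n)."""
    n = x_hat.size
    eps = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # shape (n, 2n)
    return S @ eps + x_hat[:, None]

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
S = np.linalg.cholesky(A @ A.T + n * np.eye(n))  # any valid square root
x_hat = rng.standard_normal(n)
X = cubature_points(x_hat, S)

tau = 1.0 / (2 * n)                      # equal cubature weights
mean = tau * X.sum(axis=1)
dev = X - x_hat[:, None]
cov = tau * dev @ dev.T
assert np.allclose(mean, x_hat)          # mean recovered exactly
assert np.allclose(cov, S @ S.T)         # covariance recovered exactly
```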

The time update step involves propagating the cubature points:

$$\mathbf{x}_{i,k|k-1}^* = f(\mathbf{x}_{i,k-1}), \quad \hat{\mathbf{x}}_{k|k-1} = \sum_{i=1}^{2n} \tau_i \mathbf{x}_{i,k|k-1}^*,$$

and computing the predicted error covariance square root:

$$\mathbf{S}_{k|k-1} = \text{qr}\left[\boldsymbol{\chi}_{k|k-1}^*, \mathbf{S}_{Q,k-1}\right],$$

where $\boldsymbol{\chi}_{k|k-1}^* = \frac{1}{\sqrt{2n}} [\mathbf{x}_{1,k|k-1}^* – \hat{\mathbf{x}}_{k|k-1}, \dots, \mathbf{x}_{2n,k|k-1}^* – \hat{\mathbf{x}}_{k|k-1}]$ and $\mathbf{S}_{Q,k-1}$ is the Cholesky factor of the process noise covariance.
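The $\text{qr}[\cdot]$ operation denotes triangularization of the stacked matrix. A minimal sketch using NumPy's QR factorization (the dimensions and noise values are illustrative):

```python
import numpy as np

def sqrt_time_update(chi_star, S_Q):
    """Triangularize [chi*, S_Q] so that S S^T = chi* chi*^T + S_Q S_Q^T.
    QR of the transpose gives M^T = Q R, hence S = R^T."""
    M = np.hstack([chi_star, S_Q])       # shape (n, 2n + n)
    R = np.linalg.qr(M.T, mode="r")      # upper-triangular (n, n)
    return R.T                           # lower-triangular factor

rng = np.random.default_rng(2)
n = 3
chi_star = rng.standard_normal((n, 2 * n)) / np.sqrt(2 * n)
S_Q = np.diag([0.02, 0.02, 0.05])        # illustrative process-noise factor
S_pred = sqrt_time_update(chi_star, S_Q)
# The factor reproduces P_{k|k-1} = chi* chi*^T + Q exactly.
assert np.allclose(S_pred @ S_pred.T, chi_star @ chi_star.T + S_Q @ S_Q.T)
```

Propagating the triangular factor this way, rather than the full covariance, is what gives the square root filter its improved numerical stability.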

For the measurement update, the cubature points are:

$$\mathbf{x}_{i,k|k-1} = \mathbf{S}_{k|k-1} \boldsymbol{\varepsilon}_i + \hat{\mathbf{x}}_{k|k-1},$$

and the propagated measurements are:

$$\mathbf{z}_{i,k|k-1}^* = h(\mathbf{x}_{i,k|k-1}), \quad \hat{\mathbf{z}}_{k|k-1} = \sum_{i=1}^{2n} \tau_i \mathbf{z}_{i,k|k-1}^*.$$

The innovation covariance square root is computed as:

$$\mathbf{S}_{zz,k|k-1} = \text{qr}\left[\boldsymbol{\eta}_{k|k-1}, \mathbf{S}_{R,k}\right],$$

with $\boldsymbol{\eta}_{k|k-1} = \frac{1}{\sqrt{2n}} [\mathbf{z}_{1,k|k-1}^* – \hat{\mathbf{z}}_{k|k-1}, \dots, \mathbf{z}_{2n,k|k-1}^* – \hat{\mathbf{z}}_{k|k-1}]$ and $\mathbf{S}_{R,k}$ as the Cholesky factor of the measurement noise covariance. The Kalman gain is:

$$\mathbf{K}_k = (\mathbf{P}_{xz,k|k-1} \mathbf{S}_{zz,k|k-1}^{-T}) \mathbf{S}_{zz,k|k-1}^{-1},$$

where the cross-covariance is $\mathbf{P}_{xz,k|k-1} = \boldsymbol{\chi}_{k|k-1} \boldsymbol{\eta}_{k|k-1}^T$. The state update and covariance square root update are:

$$\delta \hat{\mathbf{x}}_k = \mathbf{K}_k (\mathbf{z}_k - \hat{\mathbf{z}}_{k|k-1}), \quad \mathbf{S}_{k|k} = \text{qr}\left[\boldsymbol{\chi}_{k|k-1} - \mathbf{K}_k \boldsymbol{\eta}_{k|k-1}, \mathbf{K}_k \mathbf{S}_{R,k}\right].$$
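Because only the triangular factor $\mathbf{S}_{zz,k|k-1}$ is available, the gain can be computed with two solves against it rather than by forming and inverting the full innovation covariance. A sketch (using `np.linalg.solve` for brevity, where a dedicated triangular solver would normally be used; dimensions are illustrative):

```python
import numpy as np

def srckf_gain(P_xz, S_zz):
    """Gain K = P_xz S_zz^{-T} S_zz^{-1} via two solves against the
    triangular factor, avoiding explicit inversion of P_zz."""
    Tt = np.linalg.solve(S_zz, P_xz.T)   # = S_zz^{-1} P_xz^T
    Kt = np.linalg.solve(S_zz.T, Tt)     # = S_zz^{-T} S_zz^{-1} P_xz^T
    return Kt.T

rng = np.random.default_rng(3)
n, l = 6, 3
P_xz = rng.standard_normal((n, l))
A = rng.standard_normal((l, l))
S_zz = np.linalg.cholesky(A @ A.T + l * np.eye(l))  # a valid lower factor
K = srckf_gain(P_xz, S_zz)
# K satisfies the standard gain equation K P_zz = P_xz.
assert np.allclose(K @ S_zz @ S_zz.T, P_xz)
```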

The quaternion and bias updates are:

$$\hat{\mathbf{q}}_k = \delta \hat{\mathbf{q}}_k \otimes \hat{\mathbf{q}}_{k|k-1}, \quad \hat{\mathbf{b}}_k = \hat{\mathbf{b}}_{k|k-1} + \delta \hat{\mathbf{b}}_k,$$

with $\delta \hat{\mathbf{q}}_k$ normalized to maintain unit quaternion constraint.
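The multiplicative quaternion update can be sketched as follows (scalar-first Hamilton product; the numeric values are illustrative):

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product p (x) q for scalar-first quaternions."""
    p1, p2, p3, p4 = p
    q1, q2, q3, q4 = q
    return np.array([
        p1*q1 - p2*q2 - p3*q3 - p4*q4,
        p1*q2 + p2*q1 + p3*q4 - p4*q3,
        p1*q3 - p2*q4 + p3*q1 + p4*q2,
        p1*q4 + p2*q3 - p3*q2 + p4*q1,
    ])

def apply_error(dq, q_pred):
    """Multiplicative update: normalize the error quaternion, compose it
    with the prediction, and renormalize the result."""
    q = quat_mult(dq / np.linalg.norm(dq), q_pred)
    return q / np.linalg.norm(q)

q_pred = np.array([0.9, 0.2, -0.3, 0.1])
q_pred /= np.linalg.norm(q_pred)
identity = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(apply_error(identity, q_pred), q_pred)  # zero-error case
dq = np.array([0.99, 0.01, -0.02, 0.005])                  # small correction
assert np.isclose(np.linalg.norm(apply_error(dq, q_pred)), 1.0)
```

Composing a small error quaternion with the prediction, instead of adding attitude errors componentwise, is what preserves the unit-norm constraint that additive filters struggle with.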

The complexity analysis of SRCDKF, SRUKF, and SRCKF algorithms is summarized in the following table, based on flops (floating-point operations) for state dimension $n$ and measurement dimension $l$:

| Operation | SRCDKF | SRUKF | SRCKF |
| --- | --- | --- | --- |
| $\mathbf{x}_{k-1}$ | $3n^2$ | $3n^2$ | $3n^2$ |
| $\mathbf{x}_{k\vert k-1}$ | $4n^3 - n$ | $4n^3 - n$ | $4n^3 - 2n^2$ |
| $\hat{\mathbf{x}}_{k\vert k-1}$ | $2n^2 + 2n$ | $2n^2 + 2n$ | $2n^2$ |
| $\mathbf{P}_{k\vert k-1}$ | $4n^3$ | $4n^3$ | $2n^3$ |
| $\tilde{\mathbf{x}}_{k\vert k-1}$ | $3n^2$ | $3n^2$ | $3n^2$ |
| $\mathbf{r}_{k\vert k-1}$ | $4n^2 l - l$ | $4n^2 l - l$ | $4n^2 l - 2nl$ |
| $\hat{\mathbf{z}}_{k\vert k-1}$ | $2nl + 2l$ | $2nl + 2l$ | $2nl$ |
| $\mathbf{P}_{zz,k}$ | $4nl^2$ | $4nl^2$ | $4nl^2$ |
| $\mathbf{P}_{xz,k}$ | $2n^2 l$ | $4n^2 l + 2n^2 + 2nl + 2n$ | $4n^2 l + 2nl$ |
| $\mathbf{K}_k$ | $l^3 + 4nl^2 - 2nl$ | $l^3 + 4nl^2 - 2nl$ | $l^3 + 4nl^2 - 2nl$ |
| $\hat{\mathbf{x}}_k$ | $2nl + l$ | $2nl + l$ | $2nl + l$ |
| $\mathbf{P}_k$ | $2nl^2 - nl$ | $2nl^2 - nl$ | $2nl^2 - nl$ |

The overall complexities are:

$$\mathcal{O}(n^3)_{\text{SRCDKF}} = 8n^3 + 8n^2 + 6n^2 l + n + 8nl^2 + nl + l^3 + l,$$
$$\mathcal{O}(n^3)_{\text{SRUKF}} = 8n^3 + 10n^2 + 12n^2 l + 3n + 6nl^2 + 3nl + l^3 + 2l,$$
$$\mathcal{O}(n^3)_{\text{SRCKF}} = 6n^3 + 6n^2 + 8n^2 l + 10nl^2 + nl + l^3 + l.$$

This shows that SRCKF has the lowest computational complexity, making it suitable for real-time applications in crop spraying drones.
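Plugging representative dimensions into the three expressions confirms the ordering; $n = 7$ (quaternion plus three gyro bias states) and $l = 3$ are assumptions made here for illustration:

```python
# Per-iteration flop counts from the expressions above; n = 7 (quaternion
# plus three gyro-bias states) and l = 3 are assumed for illustration.
def flops_srcdkf(n, l):
    return 8*n**3 + 8*n**2 + 6*n**2*l + n + 8*n*l**2 + n*l + l**3 + l

def flops_srukf(n, l):
    return 8*n**3 + 10*n**2 + 12*n**2*l + 3*n + 6*n*l**2 + 3*n*l + l**3 + 2*l

def flops_srckf(n, l):
    return 6*n**3 + 6*n**2 + 8*n**2*l + 10*n*l**2 + n*l + l**3 + l

n, l = 7, 3
assert flops_srckf(n, l) < flops_srcdkf(n, l) < flops_srukf(n, l)
```

For these dimensions the counts are 4209 (SRCKF), 4580 (SRCDKF), and 5493 (SRUKF) flops per iteration.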

Simulations were conducted to validate the proposed algorithm for a crop spraying UAV. The environment was a $100 \, \text{m} \times 100 \, \text{m}$ area with 20 landmarks. The spraying UAV started at coordinates $(60, 10)$ and followed a clockwise trajectory. The initial conditions included attitude angle errors of zero, gyro bias variance of $\text{diag}(0.2, 0.2, 0.2)$, initial attitude error variance of $\text{diag}(18, 18, 18)$ arcseconds, process noise $\text{diag}(0.02^2 \mathbf{I}_3, 0.05^2 \mathbf{I}_3)$, and measurement noise variance $\text{diag}(20, 20, 20)$. The simulation time was 600 seconds.

The performance of QSRCKF-SLAM was compared with QSRUKF-SLAM and QSRCDKF-SLAM. The attitude estimation errors for roll, pitch, and yaw are summarized below:

| Algorithm | Roll error mean (arcsec) | Pitch error mean (arcsec) | Yaw error mean (arcsec) | Iteration time (s) |
| --- | --- | --- | --- | --- |
| QSRCDKF-SLAM | 6 | 16 | 24 | 0.024 |
| QSRUKF-SLAM | 2 | 8 | 4 | 0.029 |
| QSRCKF-SLAM | 2 | 3 | 1 | 0.012 |

The results demonstrate that QSRCKF-SLAM achieves the lowest errors and fastest computation, with approximately 30% improvement in accuracy over QSRUKF-SLAM. For large initial attitude errors (e.g., 30° roll, 30° pitch, 60° yaw), the proposed algorithm converged to near-zero errors within 50 seconds, showcasing robust performance for crop spraying drones in challenging conditions.

Additionally, the position estimation errors in the x, y, and z directions were evaluated over 50 Monte Carlo simulations. The QSRCKF-SLAM algorithm exhibited the smallest error ranges: approximately $[-0.1, 0.6] \, \text{m}$ in x, $[-0.2, 0.5] \, \text{m}$ in y, and $[-0.48, 0.25] \, \text{m}$ in z, with mean errors of 0.25 m, 0.05 m, and 0.05 m, respectively. This highlights the stability and precision of the spraying UAV during navigation and spraying tasks.

In conclusion, the quaternion-based SRCKF-SLAM algorithm provides a high-precision and numerically stable solution for attitude estimation in crop spraying drones. It effectively addresses the limitations of traditional methods and offers superior performance in terms of accuracy, computational efficiency, and robustness. Future work will focus on real-time implementation and field testing in diverse agricultural environments.
