The pursuit of robust and precise autonomous flight for Micro Air Vehicles (MAVs), particularly quadrotor drones, in environments devoid of Global Navigation Satellite System (GNSS) signals remains a significant research challenge. Applications ranging from indoor inspection and inventory management to search and rescue in dense urban or subterranean settings demand localization solutions that are independent of external infrastructure. While laser-based systems like LiDAR offer high accuracy, their weight, power consumption, and cost often preclude integration on small, cost-effective quadrotor drone platforms. This article presents a refined control strategy for position holding, focusing on a hybrid sensor fusion approach that combines a downward-facing optical flow sensor with an ultrasonic rangefinder. The core innovation lies in actively using the ultrasonic module to maintain a constant flight altitude, thereby creating optimal and consistent working conditions for the optical flow sensor, which significantly improves the accuracy and reliability of horizontal velocity estimation. We detail the system architecture, the mathematical framework for state estimation, and present experimental results validating the effectiveness of the proposed method for enabling stable hovering in GNSS-denied environments.
Introduction and System Architecture
The fundamental requirement for stabilizing a quadrotor drone is accurate estimation of its state, including attitude, altitude, and horizontal velocity. In outdoor settings, GPS modules provide a convenient solution for position and velocity estimation. However, their susceptibility to signal occlusion and multipath effects renders them ineffective indoors or in cluttered environments. Computer vision, specifically optical flow, has emerged as a promising alternative due to its relatively low computational cost and ability to provide egomotion cues by analyzing the apparent motion of textures in the visual field. Optical flow sensors estimate velocity by calculating the displacement of visual features between consecutive image frames. A critical, often overlooked, factor affecting the fidelity of optical flow measurements is the drone’s altitude. Changes in altitude induce scale changes in the image, which the optical flow algorithm interprets as translational motion, leading to significant drift in position estimates. Our approach directly addresses this issue.
We developed a compact quadrotor drone testbed centered around the Pixhawk flight controller. The key augmentation to the standard platform is the integration of two primary sensors: a specialized optical flow module and an ultrasonic distance sensor. The optical flow module is a self-contained unit featuring a global-shutter CMOS camera with a resolution of 752 × 480 pixels and a dedicated STM32F405 microcontroller. This onboard processor handles the computationally intensive task of calculating optical flow vectors at rates up to 250 Hz, offloading this burden from the main flight controller. The ultrasonic module provides high-frequency (≈50 Hz) measurements of the distance to the ground directly beneath the drone. The system’s operational logic is straightforward yet effective: the ultrasonic sensor provides the primary signal for altitude control, and the optical flow sensor is enabled for horizontal velocity estimation only when the drone is within a predefined altitude window (e.g., below 2 meters). This not only improves optical flow accuracy by minimizing scale-change artifacts but also conserves power. The overall control system block diagram is illustrated below, and a representative image of the platform is provided.

The control loop operates as follows: The flight controller (Pixhawk) runs a cascaded PID control structure. The inner loop stabilizes attitude using data from the Inertial Measurement Unit (IMU). The outer position loop uses feedback from the ultrasonic sensor for altitude control and from the optical flow sensor for horizontal velocity control. The estimated horizontal velocities are integrated to provide position estimates, which are then used by the position controller to generate attitude setpoints for the inner loop. This seamless integration allows the quadrotor drone to maintain a fixed point in space without any GNSS input.
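The cascaded structure described above can be sketched as follows. The gains, output limits, and hover-thrust offset are illustrative placeholders, not the values used on the actual flight controller; only the pitch axis of the outer loop is shown for brevity.

```python
# Minimal sketch of the cascaded PID structure: an outer position/altitude
# loop generating attitude setpoints, and an inner attitude loop.
class PID:
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, out))  # saturate

# Outer loop: position error -> pitch setpoint; altitude error -> thrust.
pos_x_pid = PID(kp=1.0, ki=0.0, kd=0.4, out_limit=0.3)   # output in rad
alt_pid   = PID(kp=2.0, ki=0.5, kd=1.0, out_limit=0.5)   # thrust delta
# Inner loop: attitude error -> body torque command (runs at a higher rate).
pitch_pid = PID(kp=6.0, ki=0.1, kd=0.8, out_limit=1.0)

def control_step(x_est, x_set, h_est, h_set, pitch_est, dt):
    pitch_set = pos_x_pid.update(x_set - x_est, dt)            # position loop
    thrust = 0.5 + alt_pid.update(h_set - h_est, dt)           # ultrasonic altitude loop
    torque_pitch = pitch_pid.update(pitch_set - pitch_est, dt) # attitude loop
    return thrust, torque_pitch
```

In the real system the inner loop runs at a much higher rate than the outer loop; the single `control_step` here collapses both rates into one call purely for readability.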
Mathematical Framework for Motion State Estimation
The accuracy of the hover controller is contingent on the quality of the state estimates. Our framework fuses data from the IMU, optical flow sensor, and ultrasonic rangefinder to obtain a robust estimate of the quadrotor drone’s velocity and position.
Optical Flow Processing and Refinement
The core optical flow calculation is based on the principle of brightness constancy, leading to the classical Horn-Schunck constraint equation. Let $I(x, y, t)$ be the image intensity at pixel $(x, y)$ and time $t$. The constraint assumes that the intensity of a point remains constant as it moves:
$$ I(x + \Delta x, y + \Delta y, t + \Delta t) \approx I(x, y, t) $$
Expanding this using a Taylor series and ignoring higher-order terms yields the fundamental optical flow equation:
$$ I_x u + I_y v + I_t = 0 $$
where $I_x$, $I_y$, and $I_t$ are the spatial and temporal derivatives of the image, and $u = dx/dt$, $v = dy/dt$ are the horizontal and vertical components of the optical flow vector at that point.
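The constraint provides one equation in the two unknowns $(u, v)$ per pixel, so it must be solved over a neighborhood. A common approach, shown here only for illustration (the flow module's onboard method may differ), is to stack the constraint over a small window and solve it by least squares, in the style of Lucas-Kanade:

```python
import numpy as np

# Solve I_x u + I_y v + I_t = 0 over a window by least squares:
# stack one constraint row per pixel and minimize the residual.
def window_flow(Ix, Iy, It):
    A = np.column_stack([Ix.ravel(), Iy.ravel()])  # one row per pixel
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: temporal gradients consistent with a pure shift (1.5, -0.5).
Ix = np.array([[1.0, 0.5], [0.2, 0.8]])
Iy = np.array([[0.3, 1.0], [0.7, 0.1]])
It = -(Ix * 1.5 + Iy * (-0.5))
u, v = window_flow(Ix, Iy, It)
```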
To manage computational load and reduce noise, the raw image is processed using a modified approach. The image is divided into a grid of sub-regions (e.g., 30×30 pixels). For each region, a feature-tracking or gradient-based method is used to compute a dominant flow vector. To prevent outliers arising from distant or unreliable features from corrupting the estimate, a variant of the Liang-Barsky line clipping algorithm is employed as a spatial filter. This algorithm efficiently determines which flow vectors lie within a statistically valid range. Consider a flow vector $Q$ with components $(u, v)$. We define a valid range as $u_{min} \le u \le u_{max}$ and $v_{min} \le v \le v_{max}$. The Liang-Barsky algorithm parameterizes the vector and computes intersection parameters with these bounds, effectively “clipping” outliers. The remaining valid flow vectors from all regions are then averaged to produce a single, robust optical flow measurement $[u_{raw}, v_{raw}]^T$ for the entire frame.
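The clip-and-average step reduces to the following sketch. The rectangular bounds stand in for the Liang-Barsky clipping window, and their values are illustrative:

```python
# Discard per-region flow vectors outside a valid range, then average the
# survivors into one frame-level measurement [u_raw, v_raw].
def robust_mean_flow(vectors, u_bounds=(-5.0, 5.0), v_bounds=(-5.0, 5.0)):
    u_min, u_max = u_bounds
    v_min, v_max = v_bounds
    valid = [(u, v) for (u, v) in vectors
             if u_min <= u <= u_max and v_min <= v <= v_max]
    if not valid:
        return None  # no trustworthy flow this frame
    n = len(valid)
    return (sum(u for u, _ in valid) / n, sum(v for _, v in valid) / n)

# One inlier cluster plus a gross outlier that gets clipped away:
flow = robust_mean_flow([(1.0, 0.5), (1.2, 0.3), (40.0, -30.0)])
```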
This raw measurement is further filtered using a discrete Kalman filter to attenuate high-frequency noise from sensor vibrations and illumination changes. The state vector for this filter is the 2D optical flow velocity. The state transition and measurement models are:
$$ \mathbf{x}_k = \mathbf{A}_{k, k-1} \mathbf{x}_{k-1} + \mathbf{w}_{k-1} $$
$$ \mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k $$
where $\mathbf{x}_k = [u_k, v_k]^T$ is the state, $\mathbf{z}_k$ is the measurement from the optical flow processor, $\mathbf{w}_{k-1}$ and $\mathbf{v}_{k}$ are the process and measurement noise terms (assumed to be zero-mean Gaussian), $\mathbf{A}$ is the state transition matrix (often set to the identity for a simple constant-velocity model), and $\mathbf{H}$ is the measurement matrix. The Kalman filter recursively provides an optimal estimate $\hat{\mathbf{x}}_k$ of the true optical flow.
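A minimal sketch of this filter, taking $\mathbf{A} = \mathbf{H} = \mathbf{I}$ as suggested above; the noise covariances $\mathbf{Q}$ and $\mathbf{R}$ are illustrative, not tuned values:

```python
import numpy as np

# Discrete Kalman filter over the 2-D flow state x = [u, v]^T with A = H = I.
class FlowKalman:
    def __init__(self, q=1e-3, r=1e-1):
        self.x = np.zeros(2)   # state estimate [u, v]
        self.P = np.eye(2)     # estimate covariance
        self.Q = q * np.eye(2) # process noise covariance
        self.R = r * np.eye(2) # measurement noise covariance

    def update(self, z):
        # Predict: A = I, so the state carries over and covariance grows by Q.
        P_pred = self.P + self.Q
        # Correct with the raw flow measurement z (H = I).
        K = P_pred @ np.linalg.inv(P_pred + self.R)  # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(2) - K) @ P_pred
        return self.x

kf = FlowKalman()
for z in [(1.0, 0.2), (1.1, 0.18), (0.9, 0.22)]:
    est = kf.update(z)  # converges toward the noisy measurements
```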
Ultrasonic Altitude Measurement and Filtering
The ultrasonic sensor measures the time-of-flight of a sound wave pulse from emission to reception after reflecting off the ground. The distance $d$ is calculated as:
$$ d = \frac{c \cdot \Delta t}{2} $$
where $c$ is the speed of sound (approximately 343 m/s at 20°C) and $\Delta t$ is the measured time interval. This measurement is highly susceptible to sporadic noise from acoustic interference, uneven ground surfaces, or airflow turbulence around the quadrotor drone.
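The conversion is a one-liner; as an added refinement not in the original system description, the speed of sound can be corrected for air temperature using the common approximation $c \approx 331.3 + 0.606\,T$ m/s:

```python
# Time-of-flight to distance for the ultrasonic rangefinder.
def ultrasonic_distance(dt_s, temp_c=20.0):
    c = 331.3 + 0.606 * temp_c  # speed of sound in air (m/s), ~343 at 20 C
    return c * dt_s / 2.0       # halve: the pulse travels down and back

d = ultrasonic_distance(5.83e-3)  # a ~5.83 ms echo, roughly 1 m altitude
```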
To obtain a stable and reliable altitude estimate $h$, we again employ a Kalman filter. Here, the state vector can be chosen as position and velocity in the vertical axis: $\mathbf{x}^h_k = [h_k, \dot{h}_k]^T$. The filter dynamics model vertical motion, while the ultrasonic sensor provides a direct but noisy measurement of $h_k$. The Kalman filter effectively smooths the erratic ultrasonic data, providing a clean altitude signal $\hat{h}_k$ that is used for two critical purposes:
- Altitude Control: The filtered altitude $\hat{h}_k$ is fed into a PID controller that commands thrust to maintain a user-defined setpoint (e.g., 1.0 m).
- Optical Flow Gate: The optical flow sensor is activated only when $\hat{h}_k$ is within a suitable range (e.g., 0.5 m to 2.0 m), ensuring it operates in its optimal performance envelope and conserving power.
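The gating logic can be sketched as follows; the small hysteresis band is an assumption added here to prevent the sensor from toggling rapidly when hovering near a threshold, and the window limits mirror the example values above:

```python
# Altitude gate: the optical flow sensor is trusted (and powered) only
# inside a fixed altitude window, with hysteresis at the boundaries.
class FlowGate:
    def __init__(self, h_min=0.5, h_max=2.0, hysteresis=0.05):
        self.h_min, self.h_max = h_min, h_max
        self.hyst = hysteresis
        self.enabled = False

    def update(self, h_est):
        if self.enabled:
            # Disable only once clearly outside the window.
            if h_est < self.h_min - self.hyst or h_est > self.h_max + self.hyst:
                self.enabled = False
        else:
            # Enable only once clearly inside the window.
            if self.h_min + self.hyst <= h_est <= self.h_max - self.hyst:
                self.enabled = True
        return self.enabled

gate = FlowGate()
states = [gate.update(h) for h in (0.3, 0.6, 1.0, 2.1, 1.9)]
```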
Data Fusion and Corrected State Estimation
A major source of error in using optical flow on a tilting quadrotor drone is that rotational motion induces apparent translational flow in the image. Even if the drone is perfectly stationary in the horizontal plane, a change in its roll or pitch angle will cause the camera to view a different part of the ground, generating optical flow. If uncorrected, this would be misinterpreted as horizontal drift. To compensate, we use the high-rate attitude estimates ($\phi$ for roll, $\theta$ for pitch) from the IMU to subtract the rotational component from the measured optical flow.
The expected optical flow $(u_e, v_e)$ induced purely by a change in attitude $(\Delta \phi, \Delta \theta)$ over one sample period is:
$$ u_e = \frac{\Delta \theta \cdot R_y}{F_\alpha}, \quad v_e = \frac{\Delta \phi \cdot R_x}{F_\beta} $$
where $R_x$ and $R_y$ are the number of pixels across the camera’s field of view in the x and y directions, and $F_\alpha$ and $F_\beta$ are the horizontal and vertical field-of-view angles in radians. The corrected, body-frame horizontal velocity estimates $(u_c, v_c)$ are then:
$$ u_c = \hat{u}_k - u_e, \quad v_c = \hat{v}_k - v_e $$
where $\hat{u}_k, \hat{v}_k$ are the filtered optical flow measurements. These corrected velocities, $(u_c, v_c)$, now represent the true translational motion of the quadrotor drone relative to the ground, independent of its attitude changes. These velocities are integrated by the flight controller’s navigation filter to estimate horizontal position $(x, y)$.
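The correction and integration steps above can be sketched as follows. The field-of-view angles are assumed values, and the pixel-to-metric scaling of the flow (which depends on the measured altitude) is assumed to have been applied upstream:

```python
import math

# Rotation-compensated flow and dead-reckoned position, following the
# correction equations above. Camera parameters are illustrative.
RX, RY = 752, 480                                      # pixels across the image
F_ALPHA, F_BETA = math.radians(60), math.radians(45)   # FOV in rad (assumed)

def compensate(u_hat, v_hat, d_theta, d_phi):
    u_e = d_theta * RY / F_ALPHA   # flow induced by pitch change
    v_e = d_phi * RX / F_BETA      # flow induced by roll change
    return u_hat - u_e, v_hat - v_e

def integrate(pos, u_c, v_c, dt):
    x, y = pos
    return (x + u_c * dt, y + v_c * dt)  # discrete dead reckoning

# A stationary drone that pitches: the measured flow equals the rotationally
# induced flow, so the corrected velocity is (near) zero.
u_c, v_c = compensate(0.01 * RY / F_ALPHA, 0.0, 0.01, 0.0)
```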
The final, complete state estimate used for control is summarized in the table below:
| State Variable | Sensor Source | Fusion/Processing Method |
|---|---|---|
| Attitude ($\phi, \theta, \psi$) | IMU (Gyroscope, Accelerometer) | Sensor Fusion (e.g., Complementary Filter, EKF) |
| Altitude ($h$) | Ultrasonic Rangefinder | Kalman Filtering |
| Body-Frame Horizontal Velocity ($u_c, v_c$) | Optical Flow Sensor & IMU | Rotational Flow Compensation & Kalman Filtering |
| World-Frame Horizontal Position ($x, y$) | $u_c, v_c$ (from above) | Discrete Integration (Dead Reckoning) |
Experimental Validation and Performance Analysis
The performance of the hybrid optical flow and ultrasonic system was evaluated through a series of autonomous hover tests conducted in an indoor flight arena. The quadrotor drone was commanded to take off, climb to 1.0 meter, and hold its position for an extended period. All GPS functionality was disabled. Flight data, including Euler angles, estimated positions, and raw sensor readings, were logged to an onboard SD card for post-flight analysis.
The primary metric for hover stability is the deviation of the vehicle from its commanded setpoint. The integrated position estimates $(x, y)$ showed significantly lower drift compared to using optical flow without active altitude control. The key to this performance is the stability of the altitude and attitude loops, which provide a stable platform for the optical flow sensor. The following table quantifies the performance under different conditions:
| Test Condition | Avg. Position Error (x,y) | Max Position Error | Altitude Std. Dev. | Roll/Pitch Std. Dev. |
|---|---|---|---|---|
| Optical Flow Only (No Altitude Hold) | > 0.5 m | > 1.5 m | ~ 0.15 m | ~ 2.5° |
| Proposed Hybrid System (Indoor) | < 0.1 m | < 0.3 m | < 0.03 m | < 1.0° |
| Proposed Hybrid System (Outdoor, Calm) | < 0.15 m | < 0.4 m | < 0.04 m | < 1.5° |
The attitude stability is crucial. As shown in the Euler angle data from a representative flight, the roll and pitch angles were maintained within a $\pm 1.5^\circ$ band, with a maximum transient deviation of under $3.0^\circ$. The yaw angle was effectively locked to its initial value. These small attitude angles minimize the magnitude of the rotational flow correction term, reducing the potential for correction errors and contributing to stable velocity estimates. The mathematical relationship between attitude variance $\sigma_{\phi,\theta}^2$ and velocity estimation error variance $\sigma_{v_{est}}^2$ can be approximated for small angles as being proportional, underscoring the importance of tight attitude control:
$$ \sigma_{v_{est}}^2 \propto \left( \frac{R}{F} \right)^2 \sigma_{\Delta\phi,\Delta\theta}^2 $$
where $\sigma_{\Delta\phi,\Delta\theta}^2$ is the variance of the attitude change between frames.
Despite the overall success, limitations were observed. The performance degraded slightly in outdoor environments with even light winds, as the quadrotor drone’s attitude controller had to work harder to reject disturbances, leading to higher-frequency attitude oscillations. Furthermore, the optical flow sensor is inherently sensitive to the visual properties of the ground. Highly uniform surfaces (e.g., a blank white floor) or moving objects in the field of view (e.g., people walking) can degrade or temporarily disrupt the flow calculation. The ultrasonic sensor also has limitations, such as a narrow beam width and potential interference from soft or absorbent ground materials.
Conclusion and Future Work
This work has successfully demonstrated a practical and effective method for achieving precise hover control for a micro quadrotor drone in GNSS-denied environments. The proposed hybrid system, which synergistically combines an ultrasonic rangefinder for altitude hold and an optical flow sensor for horizontal velocity estimation, addresses a key weakness of standalone optical flow navigation. By actively maintaining a constant altitude, the system eliminates scale-induced drift in the optical flow measurements. The implementation of Kalman filtering for both sensor streams and the compensation for rotationally induced flow results in a robust and accurate state estimator suitable for real-time control.
The advantages of this system are multifold for small quadrotor drones: it is lightweight, relatively low-cost, and power-efficient, especially with the gating mechanism for the optical flow sensor. The processing is distributed, with dedicated processors handling sensor-specific algorithms, thus conserving the computational resources of the main flight controller for core control tasks.
Future work will focus on enhancing the robustness and applicability of the system. The inherent sensitivity of optical flow to ground texture and lighting conditions remains a challenge. A promising direction is the fusion of this optical flow-ultrasonic data with other onboard sensors, such as a millimeter-wave radar or a second, forward-facing camera for visual-inertial odometry (VIO), to provide redundancy and enable navigation in more complex environments. Furthermore, replacing the linear PID controllers with more advanced nonlinear or adaptive control laws could improve the quadrotor drone’s ability to reject external disturbances like wind gusts. Investigating the use of machine learning techniques to classify and adapt to different ground textures could also make the optical flow estimation more resilient. Ultimately, the goal is to develop a fully autonomous micro quadrotor drone capable of reliable operation in any GPS-deprived scenario, from warehouses and tunnels to disaster-stricken urban areas.
