In recent years, the rapid advancement of Unmanned Aerial Vehicle technology has propelled its widespread application across numerous fields, such as agriculture, disaster rescue, and environmental monitoring. In these scenarios, planning an optimal path from a starting point to a target point is a fundamental and critical task for Unmanned Aerial Vehicle operations. However, in practical applications, Unmanned Aerial Vehicle path planning must consider multiple factors, including terrain obstacles, flight restrictions, and energy consumption. Addressing these complex environments and multi-objective tasks makes planning a safe and efficient path particularly important.
Current commonly used path planning algorithms fall into three broad categories: traditional methods, reinforcement learning-based approaches, and heuristic algorithms. Traditional algorithms such as A*, RRT, and Dijkstra are typically built on mathematical models and graph theory, offering solid theoretical foundations, determinism, and ease of implementation. They perform well in small-scale environments but degrade in both performance and computational efficiency in complex, high-dimensional environments. For instance, an improved Lazy Theta* path planning algorithm, based on the Theta* variant of A*, reduces the number of search nodes through octree mapping and introduces line-of-sight checks and heuristic weight adjustments to improve search speed and path smoothness; however, octree map construction is time-consuming, which can hurt real-time performance, and the method handles complex obstacles poorly. Similarly, an enhanced kinematically constrained bidirectional RRT algorithm improves guided-tree generation via artificial potential fields, motion-control parameter sampling, and optimization of the dual-tree connection region, achieving efficient path planning and smooth trajectory generation in complex orchard environments; nevertheless, its real-time responsiveness in complex environments is limited, and it struggles to avoid intricate obstacles effectively. A method combining an improved information-enhanced RRT* with dynamic windows has also been proposed for Unmanned Aerial Vehicle path planning in dynamic or unknown static obstacle environments, but the algorithm is complex and demands substantial computational resources.
Reinforcement learning methods, which use deep Q-networks, Q-learning, and related techniques for path planning, offer strong adaptability, real-time planning capability, and autonomous learning, enabling path planning in complex, dynamic, and uncertain environments. However, they require extensive training data and computational resources, and training is often time-consuming. For example, an improved dueling double deep Q-network combined with heuristic action strategies and prioritized experience replay successfully addresses path planning in complex scenarios, but it demands considerable computational resources: the deep neural networks need large amounts of training data and computing power, and model convergence is slow. A Q-learning algorithm that incorporates a shortest-distance-priority strategy during learning slightly reduces the distance the Unmanned Aerial Vehicle travels to reach the target point and effectively handles path planning in both static and dynamic obstacle environments. However, Q-learning must maintain and update a Q-table, so computational complexity grows sharply as the state space expands; moreover, as the environment becomes more complex and changes more frequently, training time also increases substantially.
Heuristic algorithms offer high flexibility, strong adaptability, and prominent multi-objective optimization capabilities in path planning, effectively addressing various types of path planning problems; however, they can struggle with convergence speed and with reaching the global optimum. For instance, a novel hybrid particle swarm optimization algorithm integrates simulated annealing and dimension-learning strategies to enhance optimization capability, avoid local convergence, and improve convergence speed, yet in large-scale search spaces or high-dimensional optimization problems it still suffers from insufficient population diversity. A hybrid grey wolf and differential evolution algorithm has been proposed for Unmanned Aerial Vehicle path planning, extending the wolf pack's position update equation and using a rank-based mutation operator from differential evolution to promote exploration and help escape local optima; however, its convergence lags. Another approach, based on partially observable Markov decision processes and an improved grey wolf optimization algorithm, enables Unmanned Aerial Vehicle path planning and collision avoidance, generating an optimal flight path in static environments while achieving dynamic obstacle avoidance; however, it demands substantial computational resources, as Monte Carlo tree search and information particle filters involve heavy computation in continuous state spaces.
The White Shark Optimizer (WSO) is a novel heuristic algorithm proposed in 2022 for global optimization problems, capable of handling different types of problems simply and quickly. However, it tends to explore the search space insufficiently and easily falls into local optima. To address these issues, this paper proposes a Multi-Strategy Improved White Shark Optimizer (MSIWSO) for Unmanned Aerial Vehicle three-dimensional path planning. It incorporates a spiral search strategy to balance global exploration and local exploitation, enhancing search efficiency; employs a crisscross strategy to optimize the distribution of solutions, improving global search capability and solution accuracy; and introduces a Cauchy mutation strategy to expand the search space, increase population diversity, and avoid local optima. Experiments demonstrate that the algorithm exhibits excellent optimization capability and stability, enabling the planning of safe and efficient flight paths for Unmanned Aerial Vehicles.

This study focuses on scenarios such as single Unmanned Aerial Vehicle monitoring and rescue in complex mountainous environments, addressing how to plan a safe and efficient flight path from a starting point to a target point. As shown in the figure above, path planning must automatically generate an optimal path from a fixed starting point to safely reach the target point, fully considering terrain environment and flight constraints (such as yaw angle, pitch angle, etc.), to ensure efficient task completion. The evaluation of each path must integrate three-dimensional environmental factors and flight physical constraints, which determine the path’s feasibility, safety, and efficiency. The following sections detail the Unmanned Aerial Vehicle path planning model.
The Unmanned Aerial Vehicle path planning involves finding an optimal and collision-free trajectory in the solution space from a specified starting point to a target point, while considering environmental threats (e.g., terrain obstacles) and flight physical constraints. A feasible path P can be represented as an ordered series of path points: $$P = \{p_0, p_1, \ldots, p_D, p_{D+1}\}$$, where $p_0$ is the starting point, $p_{D+1}$ is the target point, and each path point $p_i = (x_i, y_i, z_i)$ must satisfy boundary constraints. The horizontal boundaries are defined by $[X_{\text{min}}, X_{\text{max}}]$ and $[Y_{\text{min}}, Y_{\text{max}}]$, and the vertical boundaries by $[Z_{\text{min}}, Z_{\text{max}}]$. The specific symbol definitions are provided in Table 1.
| Symbol | Definition |
|---|---|
| $D$ | Number of path points |
| $p_0$, $p_{D+1}$ | Starting and target points |
| $\alpha_i$, $\beta_i$ | Yaw and pitch angles |
| $\alpha_{\text{max}}$, $\beta_{\text{max}}$ | Maximum yaw and pitch angles |
| $H_{\text{min}}$, $H_{\text{max}}$ | Minimum and maximum heights above terrain |
| $\lambda_l$ | Length penalty coefficient |
| $\lambda_h$ | Height penalty coefficient |
| $\lambda_\alpha$, $\lambda_\beta$ | Yaw and pitch angle penalty coefficients |
| $K$ | Maximum iterations |
| $N$ | Population size |
| $M$ | Number of crisscross individuals |
| $v_k^i$ | Current white shark velocity |
| $\omega_k^i$ | Current white shark position |
| $\omega_k^{\text{gbest}}$ | Global best white shark position |
| $\hat{v}_k^i$ | Individual best position of white shark |
| $\omega_k^r$ | Random white shark position in population |
| $D_w$ | Distance between best and current white shark |
| $D_r$ | Distance between current and random white shark |
The environmental modeling for Unmanned Aerial Vehicle flight is depicted in the figure above, where mountains are defined as obstacles that severely impact Unmanned Aerial Vehicle flight safety. Environmental modeling in path planning is foundational, describing the conditions under which an Unmanned Aerial Vehicle must avoid obstacles and satisfy constraints while flying in a specific space. The mathematical model of obstacles can be expressed as:
$$z(x, y) = \sum_{i=1}^{m} h(i) \times \exp\left(-\left(\frac{x - x_0(i)}{s_x(i)}\right)^2 - \left(\frac{y - y_0(i)}{s_y(i)}\right)^2\right)$$
This formula constructs a three-dimensional mountainous environment model by superimposing multiple rotationally symmetric Gaussian surfaces to generate continuous terrain. Here, $(x, y)$ is the projection of a point on the horizontal plane and $z$ is the corresponding height at that point; $(x_0(i), y_0(i))$ represents the center coordinates of the $i$-th peak; $s_x(i)$ and $s_y(i)$ control the decay of the $i$-th peak along the x-axis and y-axis, respectively; $m$ is the number of peaks; and $h(i)$ is the height of the $i$-th peak. The exponential function simulates the slope of the peaks, with the height decaying more strongly farther from the peak center. In practical modeling, independent peaks are generated separately and then superimposed to simulate complex natural landscapes with multiple peaks.
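The following minimal Python sketch illustrates this terrain construction; the peak parameters, grid resolution, and dictionary keys are illustrative assumptions rather than values from the paper (whose experiments use MATLAB).

```python
import numpy as np

def terrain_height(x, y, peaks):
    """Superimpose Gaussian peaks to build the 3D mountain terrain z(x, y).

    `peaks` is a list of dicts with keys h, x0, y0, sx, sy (hypothetical
    names) matching h(i), x0(i), y0(i), s_x(i), s_y(i) above.
    """
    z = np.zeros_like(np.asarray(x, dtype=float))
    for p in peaks:
        z += p["h"] * np.exp(-((x - p["x0"]) / p["sx"]) ** 2
                             - ((y - p["y0"]) / p["sy"]) ** 2)
    return z

# Example: a 100 x 100 map with three illustrative peaks.
peaks = [
    {"h": 60, "x0": 30, "y0": 40, "sx": 12, "sy": 10},
    {"h": 45, "x0": 70, "y0": 20, "sx": 8,  "sy": 9},
    {"h": 70, "x0": 60, "y0": 75, "sx": 15, "sy": 14},
]
xs, ys = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
zs = terrain_height(xs, ys, peaks)  # terrain surface used for clearance checks
```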
In Unmanned Aerial Vehicle path planning, to ensure that the Unmanned Aerial Vehicle avoids terrain obstacles during flight while preventing increased energy consumption or exposure to adverse conditions like high-altitude winds due to excessive flight height, a reasonable range for flight altitude is planned as follows:
$$z(x_i, y_i) + H_{\text{min}} \leq H_i \leq z(x_i, y_i) + H_{\text{max}}$$
where $z(x_i, y_i)$ is the terrain height at the $i$-th path point; $H_i$ is the Unmanned Aerial Vehicle height at the $i$-th path point; and $H_{\text{min}}$ and $H_{\text{max}}$ are the minimum and maximum allowed flight heights above the terrain, respectively.
In actual flight, the Unmanned Aerial Vehicle’s trajectory should be as smooth and stable as possible. Excessively large yaw and pitch angles pose unnecessary risks to the Unmanned Aerial Vehicle; therefore, these angles should be minimized.
$$\alpha_i = \arccos\left(\frac{\mathbf{q}_i \cdot \mathbf{q}_{i+1}}{|\mathbf{q}_i| \cdot |\mathbf{q}_{i+1}|}\right) < \alpha_{\text{max}}$$
$$\beta_i = \arctan\left(\frac{z_{i+1} - z_i}{\sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2}}\right) < \beta_{\text{max}}$$
where $\mathbf{q}_i = (x_i - x_{i-1}, y_i - y_{i-1})$ and $\mathbf{q}_{i+1} = (x_{i+1} - x_i, y_{i+1} - y_i)$ are direction vectors at the $i$-th path point.
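As an illustration, a small Python helper (names assumed, not from the paper) that computes the yaw and pitch angles of a discrete path according to the two expressions above:

```python
import numpy as np

def yaw_pitch_angles(path):
    """Compute yaw angles alpha_i and pitch angles beta_i along a path.

    `path` is a (D+2) x 3 array of waypoints [p0, p1, ..., p_{D+1}].
    Returns the yaw angles between consecutive horizontal segment directions
    and the pitch angle of each segment, both in radians.
    """
    path = np.asarray(path, dtype=float)
    seg = np.diff(path, axis=0)            # segment vectors q_i
    horiz = seg[:, :2]                     # horizontal (x, y) projections

    # Yaw: angle between consecutive horizontal direction vectors.
    dots = np.sum(horiz[:-1] * horiz[1:], axis=1)
    norms = np.linalg.norm(horiz[:-1], axis=1) * np.linalg.norm(horiz[1:], axis=1)
    yaw = np.arccos(np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0))

    # Pitch: climb angle of each segment relative to its horizontal length.
    pitch = np.arctan2(seg[:, 2], np.maximum(np.linalg.norm(horiz, axis=1), 1e-12))
    return yaw, pitch
```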
The quality of paths generated by Unmanned Aerial Vehicle path planning is typically evaluated by a fitness function that jointly accounts for path length, path height, and the Unmanned Aerial Vehicle's flight physical constraints. The fitness function $F$ is expressed as:
$$F = \lambda_l F_L + \lambda_h F_H + \lambda_\alpha F_\alpha + \lambda_\beta F_\beta$$
where $F_L$, $F_H$, $F_\alpha$, and $F_\beta$ represent path length penalty, path height penalty, yaw angle penalty, and pitch angle penalty, respectively. $\lambda_l$ is the path length penalty coefficient; $\lambda_h$ is the height penalty coefficient; and $\lambda_\alpha$ and $\lambda_\beta$ are the yaw and pitch angle penalty coefficients.
In Unmanned Aerial Vehicle path planning, to ensure paths meet actual task requirements while minimizing path length under basic constraints, the flight path consists of $D+1$ path segments, with the length of each segment $l_i$ being the Euclidean distance between adjacent nodes. The path length penalty $F_L$ is given by:
$$F_L = \frac{\sum_{i=1}^{D+1} l_i - L_{\text{min}}}{L_{\text{max}} - L_{\text{min}}}$$
where $L_{\text{min}}$ is the Euclidean distance from start to target point, and $L_{\text{max}}$ is the theoretical maximum path length, defined as the Unmanned Aerial Vehicle vertically ascending to the maximum safe flight height, flying straight to above the target point, and then vertically descending.
Considering potential constraint violations, a penalty mechanism is adopted, with specific penalty formulas as follows:
$$F_H = \sum_{i=1}^{D} \begin{cases}
0 & \text{if } H_{\text{min}} \leq H_i – z(x_i, y_i) \leq H_{\text{max}} \\
\frac{(H_i - z(x_i, y_i) - H_{\text{max}})(F_H^{\text{max}} - F_H^{\text{min}})}{H_{\text{max}} - H_{\text{min}}} + F_H^{\text{min}} & \text{if } H_i - z(x_i, y_i) > H_{\text{max}} \\
\frac{(H_{\text{min}} - (H_i - z(x_i, y_i)))(F_H^{\text{max}} - F_H^{\text{min}})}{H_{\text{max}} - H_{\text{min}}} + F_H^{\text{min}} & \text{if } H_i - z(x_i, y_i) < H_{\text{min}}
\end{cases}$$
$$F_\alpha = \sum_{i=1}^{D} \begin{cases}
0 & \text{if } \alpha_i \leq \alpha_{\text{max}} \\
\frac{(\alpha_i - \alpha_{\text{max}})(F_\alpha^{\text{max}} - F_\alpha^{\text{min}})}{\alpha_{\text{max}} - \alpha_{\text{min}}} + F_\alpha^{\text{min}} & \text{if } \alpha_i > \alpha_{\text{max}}
\end{cases}$$
$$F_\beta = \sum_{i=1}^{D} \begin{cases}
0 & \text{if } \beta_i \leq \beta_{\text{max}} \\
\frac{(\beta_i - \beta_{\text{max}})(F_\beta^{\text{max}} - F_\beta^{\text{min}})}{\beta_{\text{max}} - \beta_{\text{min}}} + F_\beta^{\text{min}} & \text{if } \beta_i > \beta_{\text{max}}
\end{cases}$$
where $F_H^{\text{max}}$, $F_H^{\text{min}}$ are the maximum and minimum penalties for violating maximum safe height constraints; $F_\alpha^{\text{max}}$, $F_\alpha^{\text{min}}$ for yaw angle constraints; and $F_\beta^{\text{max}}$, $F_\beta^{\text{min}}$ for pitch angle constraints.
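A sketch of the overall fitness evaluation is given below, under stated assumptions: the container names (`limits`, `coeffs`), the reuse of the `yaw_pitch_angles` helper sketched earlier, and the use of $\alpha_{\text{max}}$ and $\beta_{\text{max}}$ as normalization scales (the text does not define $\alpha_{\text{min}}$ or $\beta_{\text{min}}$) are illustrative choices, not the paper's implementation.

```python
import numpy as np

def fitness(path, terrain, limits, coeffs):
    """Weighted fitness F = lam_l*F_L + lam_h*F_H + lam_a*F_alpha + lam_b*F_beta.

    `terrain(x, y)` returns the terrain height z(x, y); `limits` bundles
    H_min, H_max, alpha_max, beta_max, L_max and the per-term penalty bounds;
    `coeffs` holds the lambda weights. All container names are assumptions.
    """
    path = np.asarray(path, dtype=float)
    lengths = np.linalg.norm(np.diff(path, axis=0), axis=1)

    # Path length penalty F_L, normalized between straight-line and maximum lengths.
    L_min = np.linalg.norm(path[-1] - path[0])
    F_L = (lengths.sum() - L_min) / (limits["L_max"] - L_min)

    def ramp(violation, scale, pen_min, pen_max):
        # Linear penalty between pen_min and pen_max for a constraint violation.
        return violation / scale * (pen_max - pen_min) + pen_min

    # Height penalty F_H over the intermediate waypoints.
    F_H = 0.0
    for x, y, h in path[1:-1]:
        clearance = h - terrain(x, y)
        if clearance > limits["H_max"]:
            F_H += ramp(clearance - limits["H_max"], limits["H_max"] - limits["H_min"],
                        limits["FH_min"], limits["FH_max"])
        elif clearance < limits["H_min"]:
            F_H += ramp(limits["H_min"] - clearance, limits["H_max"] - limits["H_min"],
                        limits["FH_min"], limits["FH_max"])

    # Yaw / pitch penalties, using alpha_max / beta_max as the scale (assumption).
    yaw, pitch = yaw_pitch_angles(path)
    F_a = sum(ramp(a - limits["alpha_max"], limits["alpha_max"],
                   limits["Fa_min"], limits["Fa_max"])
              for a in yaw if a > limits["alpha_max"])
    F_b = sum(ramp(b - limits["beta_max"], limits["beta_max"],
                   limits["Fb_min"], limits["Fb_max"])
              for b in pitch if b > limits["beta_max"])

    return (coeffs["l"] * F_L + coeffs["h"] * F_H
            + coeffs["alpha"] * F_a + coeffs["beta"] * F_b)
```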
The White Shark Optimizer (WSO) is inspired by the hunting behavior of white sharks, which use their keen hearing and smell to accurately locate prey, enabling search, tracking, positioning, and hunting. WSO maps prey positions to the solutions being sought, optimizing solutions by simulating the process of white sharks sensing and tracking prey. The algorithm mimics three core predation phases: (1) moving towards prey, (2) randomly searching for prey, and (3) searching near prey.
In moving towards prey, white sharks track and locate prey based on their senses. They perceive prey positions through waves generated by prey movement and move directly towards the prey. The velocity update is:
$$v_{k+1}^i = \mu \left[ v_k^i + p_1 c_1 (\omega_k^{\text{gbest}} - \omega_k^i) + p_2 c_2 (\hat{v}_k^i - \omega_k^i) \right]$$
where $\mu$ is a contraction factor, $p_1$ and $p_2$ are the forces pulling the shark towards the global best and individual best positions, $c_1$ and $c_2$ are random numbers in $[0, 1]$, and $\hat{v}_k^i$ is the individual best position of the white shark, accessed through the random index vector $\hat{v} = \lfloor N \cdot \text{rand}(1, N) \rfloor + 1$.
In randomly searching for prey, white sharks search based on scent and movement waves left by prey. The position update is:
$$\omega_{k+1}^i = \begin{cases}
\omega_k^i \cdot \neg \mathbf{a} + \mathbf{b} \cdot \mathbf{u} + (\mathbf{l} - \mathbf{u}) \cdot \text{rand} & \text{if } \text{rand} < mv \\
\omega_k^i + v_k^i / f & \text{if } \text{rand} \geq mv
\end{cases}$$
where $\mathbf{a}$ and $\mathbf{b}$ are binary vectors, $\mathbf{u}$ and $\mathbf{l}$ are the upper and lower bounds of the search space, $mv$ is the movement force, and $f$ is the wave frequency of the white shark's motion.
In searching near prey, white sharks update their positions based on the best shark, bringing them closer to prey:
$$\omega_{k+1}^i = \omega_k^{\text{gbest}} + r_1 D_w \, \text{sgn}(r_2 - 0.5), \quad \text{if } r_3 < S_s$$
where $r_1$, $r_2$, $r_3$ are random numbers in $[0, 1]$, $D_w = \text{rand} \cdot (\omega_k^{\text{gbest}} - \omega_k^i)$ is the distance between the best and current white sharks, and $S_s$ denotes the strength of the white shark's senses.
Additionally, positions are updated based on the top two best positions:
$$\omega_{k+1}^i = \frac{\omega_{k+1}^i + \omega_k^i}{2 \cdot \text{rand}}$$
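For illustration, the updates above can be sketched in Python as follows. The iteration schedules for $\mu$, $p_1$, $p_2$, $mv$, and $S_s$ are not reproduced in this section, so they are treated as caller-supplied inputs; this is a sketch of the quoted equations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_update(v, w, w_pbest, w_gbest, mu, p1, p2):
    """WSO velocity update: pull towards the global and individual best positions."""
    c1, c2 = rng.random(w.shape), rng.random(w.shape)
    return mu * (v + p1 * c1 * (w_gbest - w) + p2 * c2 * (w_pbest - w))

def search_near_prey(w, w_gbest, S_s):
    """Exploitation step: move around the global best when r3 < S_s."""
    if rng.random() < S_s:
        D_w = rng.random(w.shape) * (w_gbest - w)
        return w_gbest + np.sign(rng.random(w.shape) - 0.5) * D_w
    return w

def fish_school_behavior(w_old, w_new):
    """Average the previous and candidate positions to mimic schooling behaviour."""
    return (w_new + w_old) / (2.0 * rng.random())
```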
WSO has advantages like few parameters, ease of implementation, and fast convergence, but it suffers from insufficient global search capability, tendency to local optima, and low convergence accuracy. To address these, three collaborative improvement strategies are proposed.
The spiral search strategy is introduced to overcome limited exploration in random search. It dynamically adjusts spiral trajectories: expanding exploration range early for global search and contracting later for local optimization. The improved random search is:
$$\omega_{k+1}^i = \begin{cases}
\omega_k^i + e^{\psi \tau} \cos(2\pi \tau) D_r & \text{if } \text{rand} < mv \\
\omega_k^i + v_k^i / f & \text{if } \text{rand} \geq mv
\end{cases}$$
where $D_r = \text{rand} \cdot (\omega_k^r - \omega_k^i)$, $\psi = 1$, and $\tau$ is a random number in $[-1, 1]$. This enhances the balance between exploration and exploitation.
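A minimal sketch of the spiral-improved random-search branch, assuming `mv` and `f` are supplied by the caller as in the baseline WSO:

```python
import numpy as np

rng = np.random.default_rng(1)

def spiral_random_search(w, w_rand, v, mv, f, psi=1.0):
    """Improved random-search branch with the logarithmic spiral term.

    `w_rand` is the position of a randomly chosen shark; mv and f are the
    movement-force and wave-frequency parameters of the current iteration.
    """
    if rng.random() < mv:
        tau = rng.uniform(-1.0, 1.0)              # spiral parameter in [-1, 1]
        D_r = rng.random(w.shape) * (w_rand - w)  # distance to the random shark
        return w + np.exp(psi * tau) * np.cos(2.0 * np.pi * tau) * D_r
    return w + v / f
```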
The crisscross strategy addresses premature convergence and insufficient search. Horizontal crossover reduces unexplored regions, improving global search, while vertical crossover breaks dimensional stagnation. For $M$ randomly selected sharks, horizontal crossover between parents $\omega_{i1}$ and $\omega_{i2}$ produces offspring:
$$\omega_{i1,j}^{\text{hc}} = r_4 \omega_{i1,j} + (1 - r_4) \omega_{i2,j} + c_3 (\omega_{i1,j} - \omega_{i2,j})$$
$$\omega_{i2,j}^{\text{hc}} = r_5 \omega_{i2,j} + (1 - r_5) \omega_{i1,j} + c_4 (\omega_{i2,j} - \omega_{i1,j})$$
where $r_4, r_5 \in [0,1]$, $c_3, c_4 \in (-1,1)$. Vertical crossover on dimensions $j1$ and $j2$ for individual $\omega_i$:
$$\omega_{i,j1}^{\text{vc}} = r_6 \omega_{i,j1} + (1 - r_6) \omega_{i,j2}$$
with $r_6 \in [0,1]$. Offspring compete with parents, retaining better individuals.
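A possible implementation of the two crossover operators (function names are hypothetical); greedy competition between offspring and parents is left to the caller:

```python
import numpy as np

rng = np.random.default_rng(2)

def horizontal_crossover(w1, w2):
    """Horizontal crossover between two parent sharks, dimension by dimension."""
    r4, r5 = rng.random(w1.shape), rng.random(w1.shape)
    c3 = rng.uniform(-1.0, 1.0, w1.shape)
    c4 = rng.uniform(-1.0, 1.0, w1.shape)
    o1 = r4 * w1 + (1.0 - r4) * w2 + c3 * (w1 - w2)
    o2 = r5 * w2 + (1.0 - r5) * w1 + c4 * (w2 - w1)
    return o1, o2

def vertical_crossover(w):
    """Vertical crossover: blend two randomly chosen dimensions of one shark."""
    j1, j2 = rng.choice(w.size, size=2, replace=False)
    r6 = rng.random()
    child = w.copy()
    child[j1] = r6 * w[j1] + (1.0 - r6) * w[j2]
    return child
```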
The Cauchy mutation strategy uses Cauchy distribution’s heavy tails to generate extreme values, enhancing global search by perturbing the current solution. The global best update is:
$$\omega_{\text{gbest}}' = \omega_{\text{gbest}} + c \cdot \delta \cdot \text{Cauchy}(0,1) \quad \text{if } \text{rand} < \delta$$
where $c = \text{sgn}(\text{rand} - 0.5)$, $\delta = \frac{b_0}{1 + e^{b_1 k / K}}$, with $b_0 = 0.75$ and $b_1 = 0.1$ controlling the initial value and decay. This maintains diversity and avoids local optima.
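A small sketch of the Cauchy mutation step following the formula above; applying $c$ element-wise rather than as a scalar is an implementation assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def cauchy_mutation(w_gbest, k, K, b0=0.75, b1=0.1):
    """Perturb the global best with a heavy-tailed Cauchy step.

    delta decays with the iteration counter k as in the formula above, and
    the mutation is only applied with probability delta.
    """
    delta = b0 / (1.0 + np.exp(b1 * k / K))
    if rng.random() < delta:
        c = np.sign(rng.random(w_gbest.shape) - 0.5)
        step = rng.standard_cauchy(w_gbest.shape)   # Cauchy(0, 1) samples
        return w_gbest + c * delta * step
    return w_gbest
```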
The MSIWSO algorithm is summarized as follows:
Algorithm 1: Multi-Strategy Improved White Shark Optimizer
Input: Population size $N$, iterations $K$
Output: Optimal solution
1. Initialize population $\omega_i$ for $i=1$ to $N$.
2. Compute fitness, set best as $\omega_{\text{gbest}}$.
3. While $k < K$:
a. Update parameters $mv$, $S_s$, $\delta$.
b. For each shark, update position using spiral search (Eq. above).
c. For each shark, if rand $< S_s$, update locally (Eq. above) and via fish behavior (Eq. above).
d. Randomly select $M$ sharks, apply crisscross strategy.
e. Compute fitness, update $\omega_{\text{gbest}}$.
f. If rand $< \delta$, apply Cauchy mutation and greedy selection.
g. $k = k + 1$.
4. Return optimal solution.
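Below is a Python skeleton of Algorithm 1 that ties the earlier sketches together. The `schedule` stub (producing $\mu$, $p_1$, $p_2$, $mv$, $S_s$, $f$ per iteration) uses placeholder constants, and the control flow is a simplification of steps 3a-3g; it is a sketch under these assumptions, not the authors' exact implementation.

```python
import numpy as np

def msiwso(obj, lb, ub, N=60, K=500, M=15, seed=0):
    """Skeleton of the MSIWSO loop, reusing the helper sketches given earlier."""
    def schedule(k, K):
        # Placeholder parameter schedule (assumed constants, not the paper's).
        return {"mu": 0.7, "p1": 1.5, "p2": 1.5,
                "mv": 0.4 + 0.4 * k / K, "S_s": 0.3, "f": 0.5}

    rng = np.random.default_rng(seed)
    dim = lb.size
    pos = lb + rng.random((N, dim)) * (ub - lb)
    vel = np.zeros((N, dim))
    fit = np.array([obj(p) for p in pos])
    pbest, pbest_fit = pos.copy(), fit.copy()
    g = np.argmin(fit)
    gbest, gbest_fit = pos[g].copy(), fit[g]

    for k in range(1, K + 1):
        params = schedule(k, K)
        for i in range(N):
            # Spiral random search and exploitation near the global best.
            vel[i] = velocity_update(vel[i], pos[i], pbest[i], gbest,
                                     params["mu"], params["p1"], params["p2"])
            r = rng.integers(N)
            cand = spiral_random_search(pos[i], pos[r], vel[i],
                                        params["mv"], params["f"])
            cand = search_near_prey(cand, gbest, params["S_s"])
            pos[i] = np.clip(fish_school_behavior(pos[i], cand), lb, ub)

        # Crisscross on M randomly selected sharks, keeping better offspring.
        for i in rng.choice(N, size=M, replace=False):
            j = rng.integers(N)
            o1, _ = horizontal_crossover(pos[i], pos[j])
            o1 = vertical_crossover(o1)
            if obj(o1) < obj(pos[i]):
                pos[i] = o1

        # Greedy update of personal and global bests.
        fit = np.array([obj(p) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        g = np.argmin(pbest_fit)
        if pbest_fit[g] < gbest_fit:
            gbest, gbest_fit = pbest[g].copy(), pbest_fit[g]

        # Cauchy mutation of the global best with greedy acceptance.
        mutant = cauchy_mutation(gbest, k, K)
        if obj(mutant) < gbest_fit:
            gbest, gbest_fit = mutant, obj(mutant)

    return gbest, gbest_fit
```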
In Unmanned Aerial Vehicle path planning, flight paths are typically generated as a series of discrete nodes connected to form a complete path. When MSIWSO is applied, each candidate path is represented by a white shark whose position vector encodes $D$ intermediate path points, each a three-dimensional coordinate. During initialization, $N$ initial paths are randomly generated, each with $D$ three-dimensional points, excluding invalid paths that violate the constraints. MSIWSO then optimizes these paths by evaluating fitness and adjusting node positions through the strategies above. After convergence, cubic spline interpolation smooths the discrete nodes into a continuous trajectory for stable flight.
Steps for path planning:
1. Model 3D mountainous environment, set start and end points.
2. Define multi-objective fitness function (Eq. above).
3. Generate $N$ initial paths, optimize with MSIWSO to converge to an optimal trajectory.
4. Apply cubic spline interpolation for smooth flight path.
For cubic spline interpolation, let the optimal path points be $P = \{p_0, p_1, \ldots, p_D, p_{D+1}\}$ with $p_i = (x_i, y_i, z_i)$. Separate into sequences $A = \{x_0, \ldots, x_{D+1}\}$, $B = \{y_0, \ldots, y_{D+1}\}$, $C = \{z_0, \ldots, z_{D+1}\}$. For each dimension, construct cubic spline functions ensuring continuity in value, first, and second derivatives. Compute smooth points $(x^*, y^*, z^*)$ over parameterized intervals, combine with start and end points to form final smooth path $P^* = \{p_0, p_1^*, \ldots, p_n^*, p_{D+1}\}$.
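A minimal smoothing sketch using SciPy's `CubicSpline` with a chord-length parameterization; the parameterization and sample count are assumptions, since the paper only specifies cubic spline interpolation per coordinate.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(waypoints, samples=200):
    """Smooth the discrete optimal waypoints with cubic spline interpolation.

    Each coordinate sequence (x, y, z) is interpolated separately against a
    chord-length parameter, which keeps the value, first, and second
    derivatives continuous along the trajectory.
    """
    pts = np.asarray(waypoints, dtype=float)
    # Chord-length parameterisation of the waypoints (must be strictly increasing).
    t = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))))
    t /= t[-1]
    splines = [CubicSpline(t, pts[:, d]) for d in range(3)]
    ts = np.linspace(0.0, 1.0, samples)
    return np.column_stack([s(ts) for s in splines])
```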
To validate MSIWSO, experiments are conducted on the CEC2022 test functions and on Unmanned Aerial Vehicle path planning simulations. The experimental environment is Windows 11, 16 GB RAM, an AMD Ryzen 9 7945HX processor, and MATLAB R2021a. CEC2022 contains 12 single-objective test functions: unimodal (F1), multimodal (F2-F5), hybrid (F6-F8), and composition (F9-F12). The compared algorithms are MSIWSO, HGWODE, and WSO, with population size 60, dimension 20, and 20 independent runs per function. The results in Table 2 show that MSIWSO outperforms the others on most functions, with lower means and standard deviations, indicating better optimization capability and stability.
| Function | Metric | WSO | HGWODE | MSIWSO |
|---|---|---|---|---|
| F1 | Mean | 6.6063E+3 | 5.1557E+4 | 0.3693E-1 |
| | Std | 1.2846E+4 | 1.7787E+4 | 1.0519E+0 |
| | Best | 7.6375E+1 | 2.1072E+4 | 5.6101E-3 |
| | Worst | 5.5527E+4 | 9.1184E+4 | 4.7583E+0 |
| F2 | Mean | 7.1111E+1 | 7.3370E+1 | 3.9969E+1 |
| | Std | 4.7849E+1 | 4.5124E+1 | 2.0541E+1 |
| | Best | 6.7036E+0 | 4.4895E+1 | 3.5191E-1 |
| | Worst | 1.9018E+2 | 2.0258E+2 | 6.8497E+1 |
| F3 | Mean | 1.1212E+1 | 3.4376E+0 | 5.8599E-2 |
| | Std | 6.9474E+0 | 2.9394E+0 | 3.6817E-2 |
| | Best | 1.1239E+0 | 1.5057E-1 | 9.7804E-3 |
| | Worst | 2.6268E+1 | 1.2497E+1 | 1.4413E-1 |
| F4 | Mean | 4.6237E+1 | 9.2060E+1 | 3.6512E+1 |
| | Std | 1.8113E+1 | 1.8285E+1 | 1.1712E+1 |
| | Best | 1.9908E+1 | 6.3145E+1 | 1.7909E+1 |
| | Worst | 9.4949E+1 | 1.2211E+2 | 5.7707E+1 |
| F5 | Mean | 2.1281E+2 | 1.9559E+1 | 2.6477E+1 |
| | Std | 1.7308E+2 | 6.5643E+1 | 3.2081E+1 |
| | Best | 2.2926E+1 | 6.0019E-1 | 1.1773E+0 |
| | Worst | 6.5081E+2 | 2.9661E+2 | 1.3115E+2 |
| F6 | Mean | 3.6311E+6 | 4.5594E+7 | 1.5354E+4 |
| | Std | 1.1370E+7 | 6.7329E+7 | 7.0112E+3 |
| | Best | 1.2954E+2 | 1.8659E+4 | 1.2708E+3 |
| | Worst | 4.3117E+7 | 3.2445E+8 | 2.3274E+4 |
| F7 | Mean | 1.2885E+2 | 1.4006E+2 | 4.9540E+1 |
| | Std | 7.7938E+1 | 5.1615E+1 | 3.3091E+1 |
| | Best | 2.8913E+1 | 4.0574E+1 | 2.4611E+1 |
| | Worst | 2.8562E+2 | 2.5550E+2 | 1.6948E+2 |
| F8 | Mean | 1.1242E+2 | 3.1875E+1 | 5.9004E+1 |
| | Std | 7.1440E+1 | 7.5534E+0 | 5.6606E+1 |
| | Best | 2.1882E+1 | 2.3716E+1 | 2.1019E+1 |
| | Worst | 2.8790E+2 | 4.9529E+1 | 1.6554E+2 |
| F9 | Mean | 2.4072E+2 | 1.9963E+2 | 1.8078E+2 |
| | Std | 6.3157E+1 | 2.4360E+1 | 5.3886E-4 |
| | Best | 1.8078E+2 | 1.8078E+2 | 1.8078E+2 |
| | Worst | 3.8076E+2 | 2.8008E+2 | 1.8078E+2 |
| F10 | Mean | 1.8345E+3 | 4.7949E+2 | 2.2791E+2 |
| | Std | 1.2359E+3 | 3.9170E+2 | 1.7296E+2 |
| | Best | 1.0096E+2 | 1.2632E+2 | 4.0477E+1 |
| | Worst | 3.3135E+3 | 1.4766E+3 | 5.7842E+2 |
| F11 | Mean | 4.4462E+2 | 3.0748E+2 | 3.0112E+2 |
| | Std | 1.1483E+2 | 1.0016E+1 | 4.6634E+0 |
| | Best | 3.0678E+2 | 3.0086E+2 | 3.0002E+2 |
| | Worst | 6.5283E+2 | 3.3946E+2 | 3.2093E+2 |
| F12 | Mean | 3.6492E+2 | 2.5682E+2 | 2.5313E+1 |
| | Std | 7.9126E+1 | 1.6898E+1 | 9.5241E+0 |
| | Best | 2.4954E+2 | 2.4449E+2 | 2.4125E+2 |
| | Worst | 5.0071E+2 | 3.0211E+2 | 2.7734E+2 |
Convergence curves on these functions show MSIWSO achieves faster convergence and better solutions, demonstrating superior exploration and exploitation. Thus, MSIWSO exhibits excellent optimization capability and stability for complex problems.
For Unmanned Aerial Vehicle path planning, simulations in 10-peak and 20-peak terrain models compare MSIWSO with WSO and HGWODE. Parameters: population size $N=60$, path points $D=5$, iterations $K=500$, crisscross individuals $M=15$, start point $(0,0,0)$, target $(100,100,80)$, $\alpha_{\text{max}}=30^\circ$, $\beta_{\text{max}}=30^\circ$, $H_{\text{min}}=5$, $H_{\text{max}}=20$. Penalty coefficients are varied in groups G1-G5 to analyze impact. Results in Table 3 show MSIWSO consistently outperforms in fitness and sub-metrics across coefficient sets, adapting well to different needs.
| Group | $\lambda_l$ | $\lambda_h$ | $\lambda_\alpha$ | $\lambda_\beta$ | Algorithm | Fitness | Length Penalty | Height Penalty | Yaw Penalty | Pitch Penalty |
|---|---|---|---|---|---|---|---|---|---|---|
| G1 | 0.25 | 0.25 | 0.25 | 0.25 | WSO | 5.931e-1 | 4.688e-1 | 7.792e-2 | 3.552e-2 | 1.087e-2 |
| | | | | | HGWODE | 1.998e-1 | 1.561e-1 | 2.062e-2 | 2.224e-2 | 8.851e-4 |
| | | | | | MSIWSO | 1.750e-1 | 1.499e-1 | 2.171e-4 | 2.161e-2 | 3.221e-3 |
| G2 | 0.4 | 0.2 | 0.2 | 0.2 | WSO | 3.088e-1 | 2.099e-1 | 6.947e-2 | 2.705e-2 | 2.409e-3 |
| | | | | | HGWODE | 3.653e-1 | 3.119e-1 | 3.656e-2 | 1.573e-2 | 1.171e-3 |
| | | | | | MSIWSO | 1.940e-1 | 1.402e-1 | 2.767e-2 | 2.554e-2 | 5.681e-4 |
| G3 | 0.2 | 0.4 | 0.2 | 0.2 | WSO | 3.543e-1 | 2.877e-1 | 3.694e-2 | 2.844e-2 | 1.180e-3 |
| | | | | | HGWODE | 1.994e-1 | 1.831e-1 | 7.265e-4 | 1.464e-2 | 8.337e-4 |
| | | | | | MSIWSO | 1.227e-1 | 1.053e-1 | 1.332e-5 | 1.508e-2 | 2.299e-3 |
| G4 | 0.2 | 0.2 | 0.4 | 0.2 | WSO | 3.049e-1 | 2.448e-1 | 5.118e-2 | 3.535e-3 | 5.434e-3 |
| | | | | | HGWODE | 1.959e-1 | 1.540e-1 | 2.695e-2 | 1.035e-2 | 4.616e-3 |
| | | | | | MSIWSO | 1.792e-1 | 1.668e-1 | 6.308e-3 | 3.407e-3 | 2.529e-3 |
| G5 | 0.2 | 0.2 | 0.2 | 0.4 | WSO | 3.385e-1 | 2.586e-1 | 5.018e-2 | 2.332e-2 | 6.394e-3 |
| | | | | | HGWODE | 1.194e-1 | 9.725e-2 | 6.958e-3 | 1.495e-2 | 2.061e-4 |
| | | | | | MSIWSO | 1.024e-1 | 8.512e-2 | 2.651e-5 | 1.726e-2 | 4.979e-5 |
Strategy validity is verified by testing variants: MSI-S (spiral only), MSI-C (crisscross only), MSI-M (mutation only). Results show each strategy contributes uniquely: MSI-S balances exploration-exploitation, MSI-C improves global search and accuracy, MSI-M enhances diversity. Combined MSIWSO outperforms all, confirming synergy.
In the path planning simulations, MSIWSO generates paths that avoid obstacles by lateral detours rather than climbing over them, ensuring both safety and efficiency. Convergence curves indicate that MSIWSO stabilizes after about 200 iterations with better fitness. Box plots of fitness and path length in the 10-peak and 20-peak environments (figures not shown) show that MSIWSO has a lower mean and median and shorter boxes and whiskers, indicating superior stability and consistency. In the 10-peak environment, average fitness is reduced by 40.35% versus WSO and 68.64% versus HGWODE; in the 20-peak environment, by 72.2% and 34.7%. Path length in the 10-peak environment is reduced by 1.16% and 2.78%, and in the 20-peak environment by 11.2% and 2.7%. Height, yaw, and pitch penalties are also lower, demonstrating smoother and more stable paths for Unmanned Aerial Vehicles.
In conclusion, the Multi-Strategy Improved White Shark Optimizer effectively solves Unmanned Aerial Vehicle path planning in three-dimensional space. The spiral search balances exploration and exploitation, the crisscross strategy optimizes the solution distribution, and the Cauchy mutation avoids local optima. Experiments confirm MSIWSO's excellent optimization capability and stability, enabling safe and efficient path planning for Unmanned Aerial Vehicles such as the JUYE UAV. Future work will focus on dynamic environments and real-time obstacle avoidance to enhance autonomy in unknown settings.
