In recent years, the rapid advancement of fifth-generation mobile communication technology and the Internet of Things has led to widespread adoption of IoT devices across various industries, significantly enhancing our daily lives. However, terminal devices face inherent limitations in computational power, storage capacity, and battery life, making it challenging to meet user demands for low latency, energy efficiency, and reliability when handling compute-intensive and latency-sensitive tasks. This has driven the exploration of novel computing architectures. Mobile Cloud Computing addresses this by offloading tasks to remote cloud data centers, leveraging powerful computational clusters and elastic storage resources to process massive computational requests. However, this approach suffers from significant drawbacks: terminals must exchange data with distant cloud servers via wide-area networks, introducing additional communication delays and potential packet loss, especially in dynamic network environments where quality of service for latency-sensitive tasks is hard to guarantee.
To overcome these issues, Mobile Edge Computing (MEC) emerged as an innovative architecture. Proposed by the European Telecommunications Standards Institute in 2014, MEC deploys computational nodes at the network edge, positioning servers close to base stations. This distributed framework reduces the spatial distance between computational resources and terminal devices to a single hop, offering dual technical advantages: significantly lower end-to-end transmission delays and reduced backhaul traffic, which improves overall network energy efficiency. Despite these benefits, MEC faces new challenges. Computational resources at base stations are typically pre-configured, while user demands fluctuate in real time. Sudden traffic surges from social events can overload the system, causing task response delays to rise sharply as load approaches capacity. One solution is to migrate computational tasks between edge servers to balance loads, but the dispersed nature of MEC servers and their limited resources make designing effective load balancing strategies complex.

Another promising approach leverages Unmanned Aerial Vehicles (UAVs) for their flexibility and rapid deployment. By equipping UAVs with small MEC servers, they can act as mobile edge computing nodes or aerial relays, dynamically supplementing computational resources. This paper proposes a multi-UAV-assisted multi-edge-server system architecture that integrates static load balancing at base stations with dynamic resource scheduling via UAVs. We define a joint optimization problem for load balancing and UAV access, addressing the limitations of traditional static load balancing methods in extreme high-load scenarios. Our contributions include an aerial-ground collaborative load balancing framework, a hierarchical optimization and game-theoretic model, and experimental validation demonstrating significant performance improvements.
We consider a network architecture comprising numerous base stations equipped with edge servers and multiple UAVs connected as auxiliary edge nodes. Each base station serves as a core edge node, while drones function as supplementary edge nodes. Both base station and drone-mounted edge servers use heterogeneous processors, so their computational capabilities differ. User Equipment (UE) offloads computationally intensive tasks to the nearest core edge node. Additionally, drones, as auxiliary edge nodes, can be strategically deployed to take over computational tasks from overloaded core edge nodes.
The network system includes M base stations and N drones, with edge server sets denoted as $$\mathcal{M} = \{e_1^c, e_2^c, \cdots, e_M^c\}$$ and $$\mathcal{N} = \{e_1^u, e_2^u, \cdots, e_N^u\}$$, respectively. The positions of base station edge $$e_i^c$$ and drone edge $$e_j^u$$ are given by $$p_i^c = (x_i^c, y_i^c, 0)$$ and $$p_j^u = (x_j^u, y_j^u, H)$$, where H is the drone altitude. We assume the computational tasks offloaded by UE to core edge nodes arrive according to a Poisson process; we refer to these as "arrival tasks." The task arrival rates for base station edge servers are represented by the set $$\boldsymbol{\lambda} = \{\lambda_1, \lambda_2, \cdots, \lambda_M\}$$, where $$\lambda_i$$ is the arrival rate at base station edge server $$e_i^c$$.
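To make the model concrete, the following sketch instantiates the sets, positions, and Poisson arrivals in Python. The deployment area, random seed, and rate clipping are our illustrative assumptions, not part of the model:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, H = 30, 10, 100.0   # base stations, drones, drone altitude (values from the experiments)

# Base station edges sit on the ground plane (z = 0); drone edges hover at altitude H.
# The 1000 m x 1000 m area is an assumption for illustration.
p_c = np.column_stack([rng.uniform(0, 1000, (M, 2)), np.zeros(M)])
p_u = np.column_stack([rng.uniform(0, 1000, (N, 2)), np.full(N, H)])

# Poisson arrivals: lambda_i is the mean task arrival rate at base station edge e_i^c.
lam = rng.normal(10, 4, M).clip(min=0.1)   # rate parameters drawn as in the experiment table
arrivals_per_slot = rng.poisson(lam)       # realized task counts in one unit time slot
```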
The fundamental issue in traditional edge computing systems is the mismatch between static resource pre-configuration and dynamic user demands. Existing solutions often rely on overall task migration, which may simply shift overload to other nodes rather than resolving imbalance. Our aerial-ground collaborative framework employs a partial task offloading strategy, where arrival tasks are subdivided based on real-time load conditions and distributed across edge nodes for cooperative load balancing.
For transmission models, base station edge nodes are interconnected via dedicated cable networks, with independent channels supporting serialized data transmission to maintain integrity and order. The transmission time for a unit task between base station edge servers is defined by the matrix:
$$ \mathbf{D}^c = \begin{bmatrix} d_{1,1}^c & \cdots & d_{1,M}^c \\ \vdots & \ddots & \vdots \\ d_{M,1}^c & \cdots & d_{M,M}^c \end{bmatrix} $$
where $$d_{i,j}^c$$ is the time for base station edge server $$e_i^c$$ to offload a unit task to $$e_j^c$$. If no connection exists, $$d_{i,j}^c = 0$$. The set of base station edge servers connected to $$e_i^c$$ is $$\mathbf{L}_i^c = \{e_j^c \mid e_j^c \in \mathcal{M}, d_{i,j}^c \neq 0\}$$.
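A possible in-memory representation of $$\mathbf{D}^c$$ and the neighbor sets $$\mathbf{L}_i^c$$; the randomly generated symmetric topology is a stand-in for the real cable network:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 30

# Random symmetric connectivity as a stand-in for the wired backhaul topology.
conn = np.triu(rng.random((M, M)) < 0.3, k=1)
conn = conn | conn.T

# d_c[i, j]: time to ship one unit task from e_i^c to e_j^c; 0 encodes "no link".
delay = np.triu(rng.uniform(0.05, 0.2, (M, M)), k=1)
d_c = np.where(conn, delay + delay.T, 0.0)

def neighbors(i: int, d_c: np.ndarray) -> list[int]:
    """L_i^c: base station edges directly connected to e_i^c."""
    return [j for j in range(d_c.shape[0]) if d_c[i, j] != 0.0]
```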
For drone-to-base station communication, we assume line-of-sight channels dominate, and orthogonal frequency division multiple access is used, minimizing interference. The uplink channel gain from base station to drone is:
$$ g_{c2u}(e_i^c, e_j^u) = \alpha_0 d(e_i^c, e_j^u)^{-2} $$
where $$\alpha_0$$ is the channel gain constant at 1 m reference distance, and $$d(e_i^c, e_j^u)$$ is the uplink distance. The uplink data rate is:
$$ R_{c2u}(e_i^c, e_j^u) = B \log_2 \left(1 + \frac{g_{c2u}(e_i^c, e_j^u) P_c}{\sigma^2}\right) $$
where $$\sigma^2$$ is the white Gaussian noise variance, B is the bandwidth, and $$P_c$$ is the base station transmit power. Because computation results are much smaller than task inputs, we ignore the downlink transmission time for returning them. The transmission time matrix between base station and drone edges is:
$$ \mathbf{D}^u = \begin{bmatrix} d_{1,1}^u & \cdots & d_{1,N}^u \\ \vdots & \ddots & \vdots \\ d_{M,1}^u & \cdots & d_{M,N}^u \end{bmatrix} $$
where $$d_{i,j}^u = \frac{1}{R_{c2u}(e_i^c, e_j^u)}$$.
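A numeric sketch of the uplink model using the experiment parameters. We read the −100 dBm/Hz figure as a noise power spectral density integrated over B, and treat a unit task as one data unit so that $$d^u = 1/R_{c2u}$$; both interpretations are our assumptions:

```python
import numpy as np

alpha0 = 3e-4                             # channel gain at the 1 m reference distance
B      = 1e6                              # channel bandwidth (Hz)
P_c    = 10.0                             # base station transmit power (W)
sigma2 = 10 ** (-100 / 10) * 1e-3 * B     # -100 dBm/Hz PSD -> noise power over B, in W

def uplink_rate(p_bs: np.ndarray, p_uav: np.ndarray) -> float:
    """R_c2u: LoS uplink rate from a base station edge to a drone edge."""
    dist = np.linalg.norm(p_uav - p_bs)
    gain = alpha0 * dist ** -2            # free-space path loss with exponent 2
    return B * np.log2(1 + gain * P_c / sigma2)

# d^u_{i,j} is the per-unit-task transmission time, i.e. the reciprocal rate.
p_bs  = np.array([0.0, 0.0, 0.0])
p_uav = np.array([30.0, 40.0, 100.0])     # hovering at H = 100 m
print(1.0 / uplink_rate(p_bs, p_uav))     # -> d^u for this base-station/drone pair
```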
For task offloading, let $$\mathbf{X}_i$$ be the offloading vector for base station edge $$e_i^c$$. The offloading matrix across edge servers is:
$$ \mathbf{X} = \begin{bmatrix} \mathbf{X}_1 \\ \vdots \\ \mathbf{X}_M \end{bmatrix}, \qquad \mathbf{X}_i = \left[ \mathbf{X}_i^{c2c} \;\middle|\; \mathbf{X}_i^{c2u} \right] $$
Here, $$\mathbf{X}_i^{c2c} = [x_{i,1}^{c2c} \cdots x_{i,M}^{c2c}]$$ is the base station offloading vector, where $$x_{i,j}^{c2c}$$ is the task amount offloaded from $$e_i^c$$ to $$e_j^c$$. When $$i = j$$, it represents locally executed tasks. $$\mathbf{X}_i^{c2u} = [x_{i,1}^{c2u} \cdots x_{i,N}^{c2u}]$$ is the drone offloading vector. We assume drones assist only one base station edge per time slot, hovering directly above it. If drone edge $$e_j^u$$ is not accessing base station edge $$e_i^c$$, then $$x_{i,j}^{c2u} = 0$$. The set of drone edge servers accessing base station edge $$e_i^c$$ is $$\mathbf{L}_i^u$$.
For computation, the service rates of base station and drone edge servers are $$\mathbf{F}^c = \{f_1^c, \cdots, f_M^c\}$$ and $$\mathbf{F}^u = \{f_1^u, \cdots, f_N^u\}$$, respectively. The load task arrival rate for base station edge $$e_i^c$$ is:
$$ w_i^c = \lambda_i + \sum_{e_j^c \in \mathbf{L}_i^c,\, e_j^c \neq e_i^c} x_{j,i}^{c2c} - \sum_{e_j^c \in \mathbf{L}_i^c,\, e_j^c \neq e_i^c} x_{i,j}^{c2c} - \sum_{e_k^u \in \mathbf{L}_i^u} x_{i,k}^{c2u} $$
If no drone accesses $$e_i^c$$ (i.e., $$\mathbf{L}_i^u = \emptyset$$), the last term is zero. Similarly, the load for drone edge $$e_k^u$$ is $$w_k^u$$. We model each edge server as an M/M/1 queue. The average queueing delay at base station edge $$e_i^c$$ is:
$$ T_i^{wait}(w_i^c) = \frac{w_i^c}{f_i^c (f_i^c – w_i^c)} $$
The computation execution delay for task w is:
$$ T_i^{proc}(w) = \frac{w}{f_i^c} $$
The task response delay includes both queueing and computation delays:
$$ T_i^{resp}(w, w_i^c) = T_i^{wait}(w_i^c) + T_i^{proc}(w) $$
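These three delay terms translate directly into code; a minimal sketch with our own variable names:

```python
def t_wait(w: float, f: float) -> float:
    """Average M/M/1 queueing delay at an edge with load rate w and service rate f."""
    assert w < f, "the queue is unstable unless the load stays below the service rate"
    return w / (f * (f - w))

def t_proc(task: float, f: float) -> float:
    """Execution delay of a task amount `task` at service rate f."""
    return task / f

def t_resp(task: float, w: float, f: float) -> float:
    """Response delay: queueing under the current load plus execution."""
    return t_wait(w, f) + t_proc(task, f)

print(t_resp(task=1.0, w=10.0, f=15.0))   # -> 0.2 time units for this example
```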
The average completion time for tasks migrated from $$e_i^c$$ to $$e_j^c$$ is:
$$ T_{i,j}^{total}(x_{i,j}^{c2c}, w_j^c, f_j^c, d_{i,j}^c) = \frac{1}{\lambda_i} \left[ \frac{w_j^c}{f_j^c (f_j^c – w_j^c)} + \frac{x_{i,j}^{c2c}}{f_j^c} + x_{i,j}^{c2c} d_{i,j}^c \right] $$
Similarly, for tasks offloaded to drones, the completion time is $$T_{i,k}^{total}(x_{i,k}^{c2u}, w_k^u, f_k^u, d_{i,k}^u)$$.
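The migration completion time is a one-line extension; as noted above, the drone case reuses the same expression with $$(w_k^u, f_k^u, d_{i,k}^u)$$. A sketch under our naming:

```python
def t_total(x: float, w_dst: float, f_dst: float, d: float, lam_i: float) -> float:
    """Average completion time of a task amount x migrated from e_i^c to a
    destination edge (load w_dst, service rate f_dst, unit transfer time d),
    normalized by the source arrival rate lam_i as in the model."""
    return (w_dst / (f_dst * (f_dst - w_dst)) + x / f_dst + x * d) / lam_i

print(t_total(x=2.0, w_dst=10.0, f_dst=15.0, d=0.1, lam_i=10.0))
```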
We formulate the problem using game theory, focusing on rational decision-making to minimize average task completion time and achieve system-wide load balancing. Since each drone accesses only one base station per time slot and both base station and drone servers are modeled as M/M/1 queues, we fold drone offloading into the base station queues by augmenting their service rates. Define the utility function for base station edge $$e_i^c$$ as:
$$ Q_i = \frac{1}{\lambda_i} \left\{ \sum_{e_j^c \in \mathbf{L}_i^c} \left[ \frac{w_j^c}{f_j^{c'} (f_j^{c'} - w_j^c)} + \frac{x_{i,j}^{c2c}}{f_j^{c'}} \right] + \sum_{e_j^c \in \mathbf{L}_i^c,\, e_j^c \neq e_i^c} x_{i,j}^{c2c} d_{i,j}^c \right\} $$
where $$f_j^{c'} = f_j^c + \sum_{e_k^u \in \mathbf{L}_j^u} f_k^{u'}$$ is the drone-augmented service rate of $$e_j^c$$, and $$f_k^{u'}$$ is the average unit-task service rate of drone edge $$e_k^u$$. The optimization problem for base station edge $$e_i^c$$ is:
$$ \min_{\mathbf{X}_i, \mathbf{L}_i^u} Q_i(\mathbf{X}_i, \mathbf{X}_{-i}) $$
subject to the constraints C1: $$x_{i,j}^{c2c} \geq 0$$; C2: $$\sum_{e_j^c \in \mathbf{L}_i^c} x_{i,j}^{c2c} = \lambda_i$$; C3: $$w_j^c < f_j^{c'}$$ for all $$e_j^c \in \mathbf{L}_i^c$$ (queue stability); C4: $$\mathbf{L}_i^u \subseteq \mathcal{N}$$, possibly empty; C5: $$\mathbf{L}_i^u \cap \mathbf{L}_j^u = \emptyset$$ for $$i \neq j$$, i.e., each drone serves at most one base station per time slot.
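For reference, a sketch of the utility $$Q_i$$ and a per-player feasibility check for C1-C3 (C4/C5 constrain the drone assignment and live outside this check); the augmented rates $$f_j^{c'}$$ are assumed precomputed, and all names are ours:

```python
import numpy as np

def utility(i, x_i, w, f_aug, d_c, nbrs_i, lam_i):
    """Q_i for base station edge e_i^c: x_i[k] is the load offloaded to the
    k-th neighbor in nbrs_i; w[j] is e_j^c's total load; f_aug[j] = f_j^{c'}."""
    q = 0.0
    for k, j in enumerate(nbrs_i):
        q += w[j] / (f_aug[j] * (f_aug[j] - w[j])) + x_i[k] / f_aug[j]
        if j != i:
            q += x_i[k] * d_c[i, j]          # transfer delay only for remote edges
    return q / lam_i

def feasible(x_i, w, f_aug, nbrs_i, lam_i):
    return (np.all(np.asarray(x_i) >= 0)                 # C1: nonnegative offloads
            and np.isclose(np.sum(x_i), lam_i)           # C2: all arrivals are placed
            and all(w[j] < f_aug[j] for j in nbrs_i))    # C3: queue stability
```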
We model this as a non-cooperative game with incomplete information: $$G = \langle \mathcal{M}, \{\mathbf{X}_i\}_{e_i^c \in \mathcal{M}}, \{Q_i\}_{e_i^c \in \mathcal{M}} \rangle$$. A Nash equilibrium point $$\mathbf{X}^* = \{\mathbf{X}_1^*, \cdots, \mathbf{X}_M^*\}$$ satisfies for all $$e_i^c$$:
$$ Q_i(\mathbf{X}_i^*, \mathbf{X}_{-i}^*) \leq Q_i(\mathbf{X}_i', \mathbf{X}_{-i}^*) $$
for any alternative strategy $$\mathbf{X}_i'$$. We prove the existence of a Nash equilibrium by showing that each player's offloading strategy set is closed and convex, and that the utility function $$Q_i$$ is continuously differentiable and strictly convex in $$\mathbf{X}_i$$ under fixed strategies of the other players. The Hessian matrix of $$Q_i$$ is positive definite, ensuring strict convexity. By converting the game to a variational inequality problem and proving the gradient mapping's strict monotonicity, we guarantee that at least one Nash equilibrium exists.
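The convexity claim can be made concrete: each $$x_{i,j}^{c2c}$$ enters $$Q_i$$ only through the $$j$$-th summand, so the Hessian is diagonal with entries $$\partial^2 Q_i / \partial (x_{i,j}^{c2c})^2 = 2 / [\lambda_i (f_j^{c'} - w_j^c)^3]$$, which are strictly positive whenever the stability constraint $$w_j^c < f_j^{c'}$$ holds. A quick numeric illustration with random values (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4                                      # neighbors of e_i^c in this toy instance
f_aug   = rng.uniform(12, 18, n)           # f_j^{c'}
w_other = rng.uniform(1, 4, n)             # load at e_j^c imposed by the other players
lam_i   = 8.0
x_i     = np.full(n, lam_i / n)            # an interior feasible point (C1, C2 hold)

w = w_other + x_i                          # resulting loads, all below f_aug here
hess_diag = 2.0 / (lam_i * (f_aug - w) ** 3)
print("min Hessian eigenvalue:", hess_diag.min())   # > 0 -> Q_i strictly convex
```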
We propose the UAV Adaptive Distributed Load Balancing Algorithm (UADLBA) to solve this problem. UADLBA decouples the problem into a base station load balancing subproblem and a UAV access strategy subproblem. The algorithm first determines UAV access adaptively, then uses a distributed non-cooperative game for base station load balancing. In phase one, drones access the base stations with the highest load ratios. In phase two, base stations iteratively optimize their offloading strategies using convex optimization until they converge to a Nash equilibrium, as sketched below.
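The following is a compact, illustrative sketch of the two phases on a small random instance, not the paper's exact procedure: we assume full backhaul connectivity, use SciPy's SLSQP as a stand-in for the per-player convex solver, and clip loads slightly below capacity to keep the M/M/1 terms finite:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
M, N = 6, 2                                    # small instance for illustration
lam  = rng.uniform(6, 12, M)                   # arrival rates
f_c  = rng.uniform(14, 20, M)                  # base station service rates
f_u  = rng.uniform(1.5, 3.0, N)                # drone service rates
d_c  = rng.uniform(0.05, 0.2, (M, M))
np.fill_diagonal(d_c, 0.0)                     # no transfer delay for local execution

# Phase 1: each drone adaptively joins the base station with the highest load ratio.
f_aug = f_c.copy()
for k in np.argsort(f_u)[::-1]:                # strongest drones placed first
    i = int(np.argmax(lam / f_aug))
    f_aug[i] += f_u[k]                         # drone capacity augments e_i^c's queue

# Phase 2: distributed best-response iterations until the strategies stop moving.
X = np.diag(lam).copy()                        # start with purely local execution
for it in range(100):
    X_prev = X.copy()
    for i in range(M):
        w_other = X.sum(axis=0) - X[i]         # load imposed by the other players

        def Q(x, i=i, w_other=w_other):        # e_i^c's average completion time
            w = np.minimum(w_other + x, 0.999 * f_aug)   # guard against instability
            return (np.sum(w / (f_aug * (f_aug - w)) + x / f_aug)
                    + x @ d_c[i]) / lam[i]

        res = minimize(Q, X[i], method="SLSQP",
                       bounds=[(0.0, None)] * M,
                       constraints=[{"type": "eq",
                                     "fun": lambda x, i=i: x.sum() - lam[i]}])
        X[i] = res.x
    if np.abs(X - X_prev).max() < 1e-3:        # epsilon from the experiment settings
        break
print(f"stopped after {it + 1} rounds; loads: {np.round(X.sum(axis=0), 2)}")
```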
For experimental validation, we simulate a high-load edge computing environment using a Shanghai base station dataset. Parameters are set as follows:
| Parameter | Meaning | Value |
|---|---|---|
| M | Number of base stations | 30 |
| N | Number of drones | 10 |
| H | Drone altitude | 100 m |
| $$\lambda_i$$ | Task arrival rate | N(10, 4) |
| $$f_i^c$$ | Base station service rate | N(15, 6) |
| $$f_i^u$$ | Drone service rate | N(2, 1) |
| $$\alpha_0$$ | Unit channel gain | $$3 \times 10^{-4}$$ |
| $$\sigma^2$$ | Noise power spectral density | -100 dBm/Hz |
| B | Channel bandwidth | 1 MHz |
| $$P_c$$ | Base station transmit power | 10 W |
| $$\delta$$ | Base station upload task ratio | 0.1 |
| $$\mathbf{D}^c$$ | Inter-base-station unit-task transmission time | [0.05, 0.2] |
| $$\varepsilon$$ | Iteration precision | 0.001 |
We compare UADLBA with baseline methods: Local (no offloading), ULocal (drone access without offloading), SBOA (distributed load balancing without drones), and PSOGA (centralized optimization with drones). Results show that UADLBA converges to a Nash equilibrium within a limited number of iterations, with task loads and completion times stabilizing. In low-, medium-, and high-load scenarios, UADLBA reduces the average utility by 30.1%, 32.8%, and 41.3% relative to Local, and by 3.7%, 8.1%, and 29.0% relative to SBOA, while being only slightly worse than PSOGA by 0.9%, 1.3%, and 1.5%. However, UADLBA achieves significantly faster decision making, making it suitable for real-time edge environments. The average decision times of the three optimization-based methods are:
| Scenario | UADLBA | SBOA | PSOGA |
|---|---|---|---|
| Low | 1.43 | 1.31 | 1234.17 |
| Medium | 1.46 | 1.36 | 1336.32 |
| High | 1.48 | 1.37 | 1387.61 |
In conclusion, we investigated load balancing in drone-assisted edge systems, decomposing the problem into base station load balancing and UAV access subproblems. By modeling base station load balancing as a non-cooperative game with a proven Nash equilibrium, we developed the UADLBA method, which combines adaptive drone access with distributed game-theoretic optimization. Simulations demonstrate that UADLBA outperforms the benchmarks in execution time and average task completion delay, adapting well to extreme high-load scenarios. Future work will explore more sophisticated drone access strategies, refined edge computing models incorporating energy consumption, and multi-dimensional optimization to better reflect real-world system characteristics. The integration of UAVs into edge computing continues to offer promising avenues for enhancing performance.
