1. Introduction
Modern warfare increasingly relies on unmanned aerial vehicle (UAV) swarms for missions requiring distributed intelligence, real-time response, and resilience. However, limited onboard computational resources and dynamic battlefield conditions necessitate innovative network architectures. We propose a cloud-fog-edge integrated network architecture to optimize distributed computing efficiency and robustness for UAV swarm combat systems. This architecture integrates:

- Cloud computing for global data analytics,
- Fog computing for agile resource orchestration,
- Edge computing for latency-sensitive task execution.
2. Hierarchical System Design
2.1. Operational Structure
The UAV swarm combat system adopts a layered command hierarchy:
- Large UAV clusters act as fog nodes, providing signal coverage and resource allocation.
- Small UAV clusters support infantry units via portable multi-functional UAVs.
Table 1: UAV Cluster Composition
Cluster Type | Components | Primary Role |
---|---|---|
Large UAV | Fog server UAVs, relay UAVs, fire-support UAVs | Fog layer resource provisioning |
Small UAV | Infantry-portable multi-functional UAVs | Edge data collection & local compute |
2.2. Cloud-Fog-Edge Network Architecture
The three-tiered topology enables efficient task distribution:
- Cloud Layer: Centralized data centers for predictive analytics.
- Fog Layer: Large UAV clusters processing filtered data.
- Edge Layer: Small UAVs/soldier terminals for real-time preprocessing.
Table 2: Layer-Specific Functions
Layer | Components | Function |
---|---|---|
Cloud | High-performance servers | Strategic decision-making, big data analysis |
Fog | Large UAV clusters | Data aggregation, bandwidth-sensitive task offloading |
Edge | Soldier devices/small UAVs | Data preprocessing, ultra-low-latency responses (e.g., path planning) |
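To make the division of labor concrete, the sketch below shows one possible way a dispatcher could route a task to a layer. The `Task` fields, the 100 ms edge cutoff, and the `select_layer` policy are illustrative assumptions, not part of the proposed architecture.

```python
from dataclasses import dataclass

@dataclass
class Task:
    data_size_mb: float      # payload to process
    deadline_ms: float       # latency budget
    needs_global_data: bool  # requires cloud-side analytics

def select_layer(task: Task) -> str:
    """Route a task to the lowest layer that satisfies its needs (illustrative policy)."""
    if task.needs_global_data:
        return "cloud"            # strategic decision-making, big-data analysis
    if task.deadline_ms <= 100:   # assumed ultra-low-latency cutoff
        return "edge"             # soldier devices / small UAVs
    return "fog"                  # large UAV clusters: aggregation, offloading

print(select_layer(Task(data_size_mb=5, deadline_ms=50, needs_global_data=False)))  # -> edge
```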
3. Distributed Computing Modes
3.1. Edge Computing
For latency-critical tasks (e.g., navigation), small UAV units offload computations to nearby edge servers:

\[ t_{\text{edge}} = \frac{D}{B_{\text{edge}}} + \frac{D \cdot I}{C_{\text{edge}}} \]

where \(D\) = data size, \(B_{\text{edge}}\) = link bandwidth, \(I\) = instructions per byte, and \(C_{\text{edge}}\) = edge server compute power.
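As a quick sanity check of the model, the following sketch evaluates \(t_{\text{edge}}\); the 25 MB task, 100 Mbps link, 10 instructions per byte, and 5000 MIPS server are illustrative assumptions, not values taken from the simulation.

```python
def edge_delay(data_bytes: float, bandwidth_Bps: float,
               instr_per_byte: float, compute_ips: float) -> float:
    """t_edge = D / B_edge + D * I / C_edge  (transmission time + execution time)."""
    return data_bytes / bandwidth_Bps + data_bytes * instr_per_byte / compute_ips

# Illustrative numbers: a 25 MB task over a 100 Mbps link (12.5 MB/s),
# 10 instructions per byte, and a 5000 MIPS edge server.
t = edge_delay(data_bytes=25e6, bandwidth_Bps=12.5e6,
               instr_per_byte=10, compute_ips=5000e6)
print(f"t_edge ≈ {t:.2f} s")
```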
3.2. Fog-Edge Collaborative Computing
Localized analysis avoids cloud round-trip latency. Tasks route through fog-layer UAVs over multi-hop links:

\[ t_{\text{fog-edge}} = \max_{(v_i, v_j) \in E} \left( \frac{D}{B_{ij}} \right) + \max_{j} \left( \frac{L_j}{C_j} \right) \]

where \(B_{ij}\) = bandwidth of link \((v_i, v_j)\), \(L_j\) = load on node \(v_j\), and \(C_j\) = its compute power.
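A minimal sketch of evaluating this bound over a multi-hop route follows; the two-hop link rates, pending loads, and MIPS figures are assumed for illustration.

```python
def fog_edge_delay(data_bytes, link_bandwidths_Bps, node_loads_instr, node_compute_ips):
    """t_fog-edge = max(D / B_ij) + max(L_j / C_j): the slowest link and the
    busiest node along the multi-hop route dominate the total delay."""
    transmit = max(data_bytes / b for b in link_bandwidths_Bps)
    compute = max(l / c for l, c in zip(node_loads_instr, node_compute_ips))
    return transmit + compute

# Illustrative two-hop route through fog UAVs (all values are assumptions).
t = fog_edge_delay(data_bytes=10e6,
                   link_bandwidths_Bps=[12.5e6, 10e6],    # 100 Mbps and 80 Mbps links
                   node_loads_instr=[2e9, 4e9],           # pending instructions per node
                   node_compute_ips=[2000e6, 3000e6])     # 2000 and 3000 MIPS nodes
print(f"t_fog-edge ≈ {t:.2f} s")
```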
3.3. Cloud-Fog-Edge Coordination
Global tasks leverage all layers:
- Edge: Data filtering.
- Fog: Task prioritization.
- Cloud: Heavy computations.
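The list above can be read as a three-stage pipeline, made explicit in the sketch below. The filtering threshold, priority rule, and aggregate statistic are placeholder assumptions used only to show the data flow.

```python
def edge_filter(raw_samples, threshold=0.5):
    """Edge: drop low-value samples before they consume uplink bandwidth."""
    return [s for s in raw_samples if s >= threshold]

def fog_prioritize(samples):
    """Fog: order the remaining work so urgent items are offloaded first."""
    return sorted(samples, reverse=True)

def cloud_compute(samples):
    """Cloud: heavy aggregate analytics over the prioritized data."""
    return sum(samples) / len(samples) if samples else 0.0

raw = [0.9, 0.2, 0.7, 0.4, 0.8]
print(cloud_compute(fog_prioritize(edge_filter(raw))))
```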
4. Generalized Diffusion Load Balancing (GDA)
4.1. Problem Formulation
The UAV network is modeled as an undirected graph \(G = (V, E)\), where \(V\) is the set of nodes (UAVs/servers) and \(E\) is the set of communication links. Total task delay comprises:

\[ t_{\text{total}} = t_p + t_d + t_c + t_b \]

- \(t_p\): load-information exchange delay.
- \(t_d\): load-transfer delay.
- \(t_c\): compute delay.
- \(t_b\): result-return delay.
4.2. Optimization Objective
Minimize \(t_{\text{total}}\) subject to load equilibrium:

\[ \min \left[ \sum_{k=1}^{n} \max_{(v_i, v_j) \in E} \left( \frac{|\Delta_{ij}^k|}{b_{ij}} \right) + \max_{i} \left( \frac{l_i}{C_i} \right) \right] \quad \text{s.t.} \quad \sum_{i=0}^{p} l_i = L \]

where \(\Delta_{ij}^k\) is the load transferred over link \((v_i, v_j)\) in round \(k\), \(b_{ij}\) is the link bandwidth, \(l_i\) is the load assigned to node \(v_i\), \(C_i\) is its compute power, and \(L\) is the total load.
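The sketch below evaluates this objective for a hypothetical balancing plan; the per-round transfer map, link bandwidths, residual loads, and node throughputs are all illustrative assumptions.

```python
def objective(delta_per_round, link_bw, loads, compute):
    """Sum over rounds of the slowest link transfer, plus the busiest node's compute time."""
    transfer = sum(max(abs(d) / link_bw[e] for e, d in round_delta.items())
                   for round_delta in delta_per_round)
    residual = max(l / c for l, c in zip(loads, compute))
    return transfer + residual

delta = [{(0, 1): 5.0, (1, 2): 2.0}]   # MB of load moved per link in round k = 1
bw = {(0, 1): 12.5, (1, 2): 10.0}      # link bandwidth in MB/s
loads = [8.0, 10.0, 7.0]               # residual load per node (MB)
compute = [5.0, 10.0, 50.0]            # node throughput (MB/s, illustrative)
print(f"objective ≈ {objective(delta, bw, loads, compute):.2f}")
```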
4.3. GDA Algorithm
Input: \(G, L, W, C, l\), threshold \(\delta\)
1. Overloaded node \(v_{ol}\) computes diffusion matrix \(M(\epsilon)\)
2. **while** TRUE:
3. **for** each node \(v_i \in V\):
4. Exchange load \(l_i^k\) with neighbors
5. Compute load transfer \(\Delta_{ij} = m_{ji} l_i - m_{ij} l_j\)
6. Update load: \(l_i^{k+1} = l_i^k - \sum_j \Delta_{ij}\)
7. **if** \(|l_i^{k+1} - l_i^k| < \delta\): mark \(v_i\) balanced
8. **if** all nodes balanced: BREAK
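A runnable sketch of the diffusion iteration is given below. It assumes a symmetric, uniform diffusion coefficient \(m = 1/(1 + \text{max degree})\), so that \(\Delta_{ij} = m(l_i - l_j)\); the paper's construction of \(M(\epsilon)\) may differ, and the example graph and loads are illustrative.

```python
def gda(adjacency, loads, delta_threshold=1e-3, max_rounds=1000):
    """Iterative diffusion: each node repeatedly exchanges load with its neighbors."""
    max_deg = max(len(nbrs) for nbrs in adjacency.values())
    m = 1.0 / (1 + max_deg)              # uniform diffusion coefficient (assumption)
    l = dict(loads)
    for _ in range(max_rounds):
        new_l = dict(l)
        for i, neighbors in adjacency.items():
            for j in neighbors:
                new_l[i] -= m * (l[i] - l[j])   # Delta_ij = m * (l_i - l_j)
        if all(abs(new_l[i] - l[i]) < delta_threshold for i in adjacency):
            return new_l                 # every node's load change fell below delta
        l = new_l
    return l

# Example: a four-node ring with unequal initial load; total load is conserved.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(gda(adj, {0: 20.0, 1: 5.0, 2: 5.0, 3: 10.0}))
```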
5. Performance Validation
5.1. Simulation Setup
- Nodes: 5 edge, 6 fog, 1 cloud (Table 3).
- Bandwidth: 80–110 Mbps (heterogeneous).
- Workload: 5–25 MB.
Table 3: Compute Node Capabilities (MIPS)
Node | \(v_0\) | \(v_1\) | … | \(v_{11}\) |
---|---|---|---|---|
MIPS | 102 | 50 | … | 5000 |
5.2. Key Results
- Edge vs. Cloud-Only:
- For 25 MB tasks, edge computing reduces latency by 40% vs. cloud-only (requires ≥109.7 Mbps).
- Fog-Edge vs. Cloud:
- Fog-edge achieves 60% lower delay at 100 Mbps bandwidth.
- Robustness:
- Cloud-fog-edge maintains stable latency (±8%) under bandwidth fluctuations (Fig 6c-d).
5.3. Fault Tolerance
- Edge Mode: Adding 1 edge server cuts latency by 35% (Fig 7a).
- Fog-Edge Mode: Loss of weak compute nodes (low MIPS) has marginal impact (Fig 7b).
- System States: Degrades gracefully under node failures (Fig 7c).
5.4. Load Balancing Efficiency
GDA outperforms alternatives:
- 25% faster than Smooth Weighted Round Robin (SWRR).
- 30% faster than GreedyLB in fog-edge scenarios (Fig 8).
6. Conclusion
Our cloud-fog-edge integrated architecture significantly enhances UAV swarm combat systems by:
- Reducing Latency: Edge/fog layers handle 70% of tasks below 100 ms.
- Improving Robustness: Tolerates 20% node failure with <15% performance drop.
- Optimizing Resource Usage: GDA cuts task delays by 30–40% vs. benchmark algorithms.
This work enables resilient, efficient UAV operations in bandwidth-constrained battlefields. Future extensions will integrate AI-driven task prediction.