The proliferation and tactical deployment of unmanned aerial vehicles (UAVs) have fundamentally reshaped modern combat paradigms. In recent conflicts, from Nagorno-Karabakh to Ukraine, UAVs and UAV swarms have evolved from reconnaissance assets to decisive, lethal components of the battlespace, capable of inflicting significant damage on high-value targets and traditional armor. This evolution has precipitated an urgent and complex challenge: the development and optimization of effective anti-UAV defense systems. An anti-UAV operation is inherently multifaceted, involving a diverse array of countermeasures including kinetic interceptors, directed-energy weapons, electronic warfare systems for jamming and spoofing, and sophisticated command-and-control networks. With so many variables at play—different types of threats, layered defensive assets, cost considerations, and operational risks—selecting and refining the optimal operational scheme becomes a critical yet daunting task for military planners.
Traditional evaluation methods often focus narrowly on a single metric, such as the sheer number of UAVs killed. However, this myopic view fails to capture the holistic picture of mission success. A scheme that achieves a high kill ratio at the expense of exhausting expensive interceptor missiles or exposing critical sensor nodes to destruction may be less desirable than one with a slightly lower kill rate but superior protection of key assets and lower overall cost. Therefore, a comprehensive, multi-criteria decision-making framework is essential. This article proposes and demonstrates such a framework, leveraging the Analytic Hierarchy Process (AHP) to systematically evaluate and compare different anti-UAV operational schemes. By constructing a hierarchical index system that balances mission completion, operational cost, risk, and decisive conditions, and by integrating it with combat simulation outputs, we provide a robust methodology for quantifying scheme effectiveness and guiding optimization efforts.

Constructing the Hierarchical Evaluation Index System for Anti-UAV Operations
The cornerstone of a rigorous evaluation is a well-structured index system. For anti-UAV scheme assessment, we propose a three-tier hierarchy: the Objective Layer, the Criterion Layer, and the Indicator Layer. This structure effectively decomposes the complex problem of “best overall scheme” into manageable, quantifiable components, allowing for both high-level strategic alignment and low-level technical analysis.
The Objective Layer defines the ultimate strategic goal of the anti-UAV operation. This is the top-level directive that shapes all subsequent planning. Objectives can vary based on the tactical situation and commander’s intent. Examples include: “Neutralize 100% of the incoming UAV swarm via hard-kill measures,” “Degrade and divert 70% of the swarm using soft-kill electronic attack to protect a specific zone,” or “Ensure the survival of critical command nodes at all costs.” The chosen objective for evaluation serves as the anchor for the entire hierarchy.
The Criterion Layer operationalizes the objective into broad, universal dimensions of assessment. These criteria represent the fundamental paradigms for judging any military plan. For a comprehensive anti-UAV evaluation, we consider four primary criteria:
- Mission Completion Degree (MCD): Measures the extent to which the tactical goals (e.g., UAV kill count, area denial) are achieved.
- Operational Cost Severity (OCS): Quantifies the resources expended by the defending (Blue) force, including equipment loss and munitions consumption.
- Operational Action Risk (OAR): Assesses the vulnerability of the Blue force and the probability of successful attacks by the Red (adversary) UAVs.
- Decisive Victory Conditions (DVC): Evaluates factors that fundamentally determine the operational outcome, primarily the protection level of Blue’s critical assets (C2 centers, air defense sites).
The Indicator Layer provides specific, measurable metrics that feed into each criterion. These are the data points typically obtained from simulations or real-world operations. The selection of indicators is crucial and must align with the granularity of available data. The table below outlines a potential set of indicators for our anti-UAV evaluation framework.
| Criterion | Indicator Code | Indicator Name | Description & Measurement |
|---|---|---|---|
| Mission Completion (MCD) | IM1 | Red UAVs Destroyed/Disabled | Total count of adversary UAVs neutralized (kinetic kill or mission kill). |
| Mission Completion (MCD) | IM2 | Red UAV Attrition Rate | Percentage of the total engaged Red UAV swarm that was destroyed/disabled: \( I_{M2} = \frac{\text{Neutralized Red UAVs}}{\text{Total Red UAVs in Swarm}} \times 100\% \). |
| Operational Cost (OCS) | IC1 | Blue Entity Losses | Number of Blue defense assets (launchers, radars) destroyed. |
| Operational Cost (OCS) | IC2 | Blue Munitions Consumption Rate | Percentage of available interceptor missiles or directed-energy shots expended. |
| Operational Cost (OCS) | IC3 | Total Engagement Duration | Time from first detection to last engagement (shorter may indicate higher efficiency). |
| Operational Risk (OAR) | IR1 | Blue Entity Loss Ratio | Weighted average ratio of Blue assets destroyed per category (e.g., sensors vs. shooters). |
| Operational Risk (OAR) | IR2 | Red Precision Strike Probability | Estimated probability that a Red UAV or its munition successfully hits its intended Blue target. |
| Decisive Conditions (DVC) | ID1 | Number of Blue Critical Assets | Count of high-value units that must be protected (e.g., command posts, power plants). |
| Decisive Conditions (DVC) | ID2 | Blue Asset Protection Level | A composite score (0-1) based on layered defenses, interception coverage, and hardening. |
| Decisive Conditions (DVC) | ID3 | Red Critical Asset Destruction Probability | Estimated probability that Red forces can destroy a Blue critical asset. |
It is important to note the directionality of these indicators. For MCD and DVC, higher values are generally positive (more UAVs killed, better protection). For OCS and OAR, higher values are negative (more cost, higher risk). This must be accounted for during the final scoring synthesis.
Methodology: Applying the Analytic Hierarchy Process (AHP) to Anti-UAV Scheme Evaluation
With the hierarchical index system established, the next step is to determine the relative importance (weight) of each element within a tier relative to its parent in the tier above. The Analytic Hierarchy Process (AHP) is perfectly suited for this task. AHP uses pairwise comparisons, facilitated by expert judgment, to derive weight vectors, ensuring that the complex relationships between disparate factors like “cost” and “risk” are systematically quantified.
The process for our anti-UAV evaluation involves the following steps:
Step 1: Construct Pairwise Comparison Matrices. For each tier, experts are asked to compare elements two at a time using Saaty’s fundamental scale (e.g., 1=equal importance, 3=moderate importance, 5=strong importance, 7=very strong, 9=extreme importance). This yields a judgment matrix. For example, the matrix for the four Criteria (MCD, OCS, OAR, DVC) relative to the overall Objective might look like this, based on aggregated expert input prioritizing asset protection in anti-UAV defense:
| Criteria | MCD | OCS | OAR | DVC |
|---|---|---|---|---|
| MCD | 1 | 5 | 3 | 1/3 |
| OCS | 1/5 | 1 | 1/2 | 1/5 |
| OAR | 1/3 | 2 | 1 | 1/3 |
| DVC | 3 | 5 | 3 | 1 |
Step 2: Calculate Priority Vectors (Weights) and Check Consistency. The principal eigenvector of each matrix is computed to obtain the local priority weights. Consistency must be verified to ensure that the pairwise judgments are logically transitive; the Consistency Ratio (CR) should be less than 0.1. The weight calculation for a matrix A involves solving:
$$ A w = \lambda_{max} w $$
where \( \lambda_{max} \) is the largest eigenvalue and \( w \) is the corresponding eigenvector, which is then normalized to sum to 1. The Consistency Index (CI) and CR are given by:
$$ CI = \frac{\lambda_{max} - n}{n - 1} $$
$$ CR = \frac{CI}{RI} $$
where \( n \) is the matrix order and RI is the Random Index, a tabulated constant that depends on \( n \) (e.g., RI = 0.90 for \( n = 4 \)).
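Steps 1 and 2 can be sketched in a few lines of NumPy. The snippet below uses the example criterion-level judgment matrix from the table above; the RI lookup is Saaty's standard table, and the resulting weights are only as meaningful as the illustrative expert judgments behind the matrix.

```python
import numpy as np

# Saaty's Random Index (RI) by matrix order n
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(A: np.ndarray) -> tuple[np.ndarray, float]:
    """Return (priority vector, consistency ratio) for a pairwise comparison matrix A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)          # principal (Perron) eigenvalue
    lam_max = eigvals.real[k]
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                      # normalize weights to sum to 1
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)         # Consistency Index
    return w, ci / RI[n]                 # Consistency Ratio

# Criterion-level judgment matrix (rows/cols: MCD, OCS, OAR, DVC) from the text
A = np.array([
    [1,   5, 3,   1/3],
    [1/5, 1, 1/2, 1/5],
    [1/3, 2, 1,   1/3],
    [3,   5, 3,   1  ],
])
w, cr = ahp_weights(A)
print(np.round(w, 3), round(cr, 3))  # CR comes out below 0.1, so judgments are acceptable
```

For this matrix DVC receives the largest weight, consistent with the stated prioritization of asset protection; the exact values differ slightly from the separately surveyed weight table later in the article.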
Step 3: Synthesize Weights to Obtain Global Priorities. The local weights are aggregated upward through the hierarchy. The global weight of a bottom-tier Indicator is the product of its local weight and the local weights of all its parent nodes above it.
$$ W_{I_{global}} = W_{Criterion} \times W_{Indicator|Criterion} $$
For instance, if the criterion “Decisive Victory Conditions” (DVC) has a weight of 0.56, and within DVC the indicator “Blue Asset Protection Level” (ID2) has a local weight of 0.75, then the global weight for ID2 is \( 0.56 \times 0.75 = 0.42 \). This means ID2 alone contributes 42% to the overall evaluation of the anti-UAV scheme’s success relative to the defined objective.
The following table presents a plausible set of final, synthesized weights for the entire hierarchy, based on a simulated expert survey process for a notional anti-UAV defense scenario where protecting critical infrastructure is paramount.
| Criterion (Weight) | Indicator | Local Weight | Global Weight |
|---|---|---|---|
| Mission Completion (0.15) | IM1: UAVs Destroyed | 0.25 | 0.0375 |
| Mission Completion (0.15) | IM2: Attrition Rate | 0.75 | 0.1125 |
| Operational Cost (0.08) | IC1: Blue Entity Losses | 0.64 | 0.0512 |
| Operational Cost (0.08) | IC2: Munitions Consumption | 0.26 | 0.0208 |
| Operational Cost (0.08) | IC3: Engagement Duration | 0.10 | 0.0080 |
| Operational Risk (0.21) | IR1: Blue Loss Ratio | 0.80 | 0.1680 |
| Operational Risk (0.21) | IR2: Red Strike Probability | 0.20 | 0.0420 |
| Decisive Conditions (0.56) | ID1: # Critical Assets | 0.10 | 0.0560 |
| Decisive Conditions (0.56) | ID2: Protection Level | 0.75 | 0.4200 |
| Decisive Conditions (0.56) | ID3: Red Destruction Prob. | 0.15 | 0.0840 |
Note: Weights are illustrative and scenario-dependent.
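The synthesis in Step 3 is a straightforward product down the hierarchy. A minimal sketch, using the illustrative criterion and local weights from the table above, also serves as a sanity check that the global weights sum to 1:

```python
# Criterion weights and local indicator weights from the illustrative table
criterion_w = {"MCD": 0.15, "OCS": 0.08, "OAR": 0.21, "DVC": 0.56}
local_w = {
    "MCD": {"IM1": 0.25, "IM2": 0.75},
    "OCS": {"IC1": 0.64, "IC2": 0.26, "IC3": 0.10},
    "OAR": {"IR1": 0.80, "IR2": 0.20},
    "DVC": {"ID1": 0.10, "ID2": 0.75, "ID3": 0.15},
}

# Global weight of an indicator = its criterion's weight x its local weight
global_w = {
    ind: criterion_w[crit] * lw
    for crit, inds in local_w.items()
    for ind, lw in inds.items()
}

# Because each level's weights sum to 1, the global weights must too
assert abs(sum(global_w.values()) - 1.0) < 1e-9
print(round(global_w["ID2"], 4))  # → 0.42
```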
Application Instance: Evaluating Three Anti-UAV Defense Scenarios
To demonstrate the practical utility of this AHP-based framework, we apply it to evaluate three distinct anti-UAV operational schemes for defending a Blue Force high-value asset (HVA). The schemes are differentiated primarily by the composition and capability of the Blue defensive system. The engagement is simulated using a combat modeling platform capable of tracking the specified indicators.
Scenario Definitions:
- Scenario A (Baseline / Integrated System): Blue employs a layered, integrated anti-UAV system. This includes long-range surveillance radars, medium-range engagement radars, a mix of kinetic effectors (Surface-to-Air Missiles), and modern non-kinetic effectors (High-Power Microwave and Laser Defense Systems).
- Scenario B (Degraded Sensors): Blue employs the same effector suite as Scenario A, but the detection and tracking range of all its radars is reduced by 50%.
- Scenario C (Legacy Kinetics-Only): Blue employs only traditional kinetic effectors (SAMs). The non-kinetic HPM and Laser systems are absent. Sensor ranges are at baseline (Scenario A) levels.
The same Red force threat—a mixed swarm of 50 reconnaissance and loitering munition UAVs—is used in all simulations. Key results from the simulation runs are aggregated below.
| Indicator | Scenario A (Baseline) | Scenario B (Degraded Sensors) | Scenario C (Kinetics-Only) |
|---|---|---|---|
| IM1: UAVs Destroyed | 42 | 35 | 38 |
| IM2: Attrition Rate | 0.84 | 0.70 | 0.76 |
| IC1: Blue Entity Losses | 1 | 3 | 2 |
| IC2: Munitions Cons. Rate | 0.28 | 0.45 | 0.60 |
| IR1: Blue Loss Ratio | 0.05 | 0.15 | 0.10 |
| IR2: Red Strike Prob. | 0.04 | 0.12 | 0.08 |
| ID2: Protection Level (0-1) | 0.92 | 0.65 | 0.78 |
| ID3: Red Dest. Prob. | 0.15 | 0.40 | 0.25 |
Note: ID1 (Number of Critical Assets) and IC3 (Engagement Duration) are held constant across scenarios and are omitted for brevity.
Synthesis and Comparative Analysis of Results
To arrive at a single composite score for each anti-UAV scheme, we must normalize the raw simulation data and combine it with the global weights. First, indicators are normalized to a 0-1 scale relative to the best and worst performance across all scenarios. For positive indicators (like IM2, ID2), higher is better:
$$ I_{norm}^+ = \frac{I_{actual} - I_{min}}{I_{max} - I_{min}} $$
For negative indicators (like IC1, IR1, ID3), lower is better, so we invert the normalization:
$$ I_{norm}^- = \frac{I_{max} - I_{actual}}{I_{max} - I_{min}} $$
The composite score \( S_j \) for Scenario \( j \) is then the weighted sum of all normalized indicators:
$$ S_j = \sum_{i=1}^{n} (W_{I_i}^{global} \times I_{i,j}^{norm}) $$
where \( n \) is the total number of bottom-tier indicators.
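The normalization and weighted-sum formulas above translate directly into code. The sketch below uses a toy two-indicator, two-scheme example with hypothetical values and weights, purely to show the directionality handling; it is not a reproduction of the article's scenario results.

```python
def minmax(values: list[float], positive: bool) -> list[float]:
    """Min-max normalize to [0, 1]; invert when lower raw values are better."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant indicator: no discriminating power
        return [0.0 for _ in values]
    if positive:
        return [(v - lo) / (hi - lo) for v in values]
    return [(hi - v) / (hi - lo) for v in values]

def composite_scores(raw, weights, positive):
    """raw[i][j] is indicator i for scheme j; returns one weighted score per scheme."""
    norm = [minmax(row, pos) for row, pos in zip(raw, positive)]
    n_schemes = len(raw[0])
    return [sum(w * norm[i][j] for i, w in enumerate(weights))
            for j in range(n_schemes)]

# Toy example (hypothetical values): a positive indicator (attrition rate)
# and a negative indicator (munitions consumption rate), for two schemes
raw = [[0.84, 0.70],   # higher is better
       [0.28, 0.45]]   # lower is better
scores = composite_scores(raw, weights=[0.7, 0.3], positive=[True, False])
print([round(s, 3) for s in scores])  # → [1.0, 0.0]
```

Returning 0 for a constant indicator simply drops it from the comparison, mirroring the article's treatment of ID1 and IC3, which are held constant across scenarios.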
Applying this process to our simulation data and the synthesized global weights tabulated above yields the following comparative results:
| Scenario | Composite Score (S) | Normalized Score (S / SA) | Rank |
|---|---|---|---|
| A: Baseline (Integrated) | 0.745 | 1.000 | 1 |
| B: Degraded Sensors | 0.398 | 0.534 | 3 |
| C: Kinetics-Only | 0.572 | 0.768 | 2 |
The analysis reveals clear and actionable insights for anti-UAV force design and scheme optimization:
- The Superiority of Integrated Systems: Scenario A (Baseline) decisively outperforms the others. Its high score is driven not just by a good UAV kill count, but overwhelmingly by its exceptional performance in the high-weight Decisive Conditions criterion—specifically, the very high “Protection Level” (ID2=0.92) and low “Red Destruction Probability” (ID3=0.15). The combination of kinetic and non-kinetic effects, guided by superior sensors, creates a resilient defensive shield that most effectively safeguards the critical asset, which is the primary objective in this weighted evaluation.
- The Critical Role of Sensors: The dramatic drop in performance from Scenario A to Scenario B (score reduction of 47%) underscores a fundamental truth in modern anti-UAV warfare: situational awareness is paramount. Degrading sensor range by 50% crippled the entire defense. It led to later engagements, higher munition expenditure (IC2 spiked to 0.45), increased Blue losses, and a catastrophic drop in the perceived protection of the HVA. This suggests that investing in robust, long-range, resilient sensor networks may yield a higher marginal return on defensive effectiveness than adding incremental shooters.
- The Value of Non-Kinetic Effects: Comparing Scenario C to Scenario A shows that removing non-kinetic systems (HPM, Laser) results in a 23% reduction in composite score. While kinetics-only is better than having degraded sensors, it is inferior to the integrated system. The key differentiators for Scenario C are higher munitions cost (IC2=0.60) and a significantly higher risk to the HVA (ID3=0.25 vs. 0.15). Non-kinetic systems provide cost-effective, deep-magazine capabilities for engaging multiple small UAVs, preserving expensive interceptors for higher-tier threats, and contributing to a denser and more persistent defensive shield.
- Beyond Simple Kill Counts: A commander looking only at “UAVs Destroyed” (IM1) might see a relatively narrow spread (42 vs. 35 vs. 38) and underestimate the vast differences between the schemes. Our AHP-based evaluation, by incorporating cost, risk, and decisive conditions, reveals the full spectrum of operational outcomes. It quantifies how Scheme A achieves its kills more efficiently, at lower risk, and with vastly superior asset protection—factors that are ultimately more strategically significant than the raw kill number.
Conclusion and Future Directions
This research presents a structured, quantitative framework for tackling the complex problem of anti-UAV operational scheme evaluation. By integrating a multi-layered index system with the Analytic Hierarchy Process and combat simulation, we move beyond one-dimensional assessments to a holistic view that balances mission success, resource expenditure, operational risk, and strategic decisive conditions. The applied case study clearly demonstrates the framework’s utility, showing that an integrated anti-UAV system with advanced sensors and mixed kinetic/non-kinetic effectors is profoundly more effective than systems with compromised sensors or a lack of non-kinetic options, especially when the defense of critical assets is prioritized.
The proposed method offers several key advantages for planners of anti-UAV defenses. It makes the rationale for decision-making explicit and traceable through its hierarchical weight structure. It efficiently handles both quantitative data from simulations and qualitative expert judgment. Most importantly, it identifies not just which scheme is better, but *why*—by highlighting which criteria and underlying indicators are the primary drivers of performance gaps. This diagnostic power is invaluable for guiding future investments and tactical adaptations.
Future work can extend this framework in several meaningful ways. The indicator set can be refined to include more granular metrics, such as electromagnetic spectrum occupancy, cyber resilience of the C2 network, or the effects of specific electronic warfare techniques like GPS spoofing. The AHP weighting process can be dynamic, adjusting weights in real-time based on the changing tactical situation (e.g., higher weight for “Cost” if munition stocks are low). Furthermore, integrating this evaluation engine directly into a simulation-based optimization loop could allow for the automated generation and testing of thousands of scheme variants (different weapon placements, ROE, engagement priorities) to find Pareto-optimal solutions for a given anti-UAV defense problem. As UAV threats continue to evolve in scale and sophistication, such rigorous, analytical approaches to scheme evaluation will be indispensable for developing effective and resilient countermeasures.
