The industrial application of Artificial Intelligence (AI) necessitates a fusion of knowledge from multiple disciplines and specialties. Exposing students from diverse academic backgrounds to AI technologies early on is crucial, enabling them to integrate these technologies with their core expertise for “AI-enabled” professional development. However, AI algorithms are often abstract and challenging to grasp, requiring contextualization within concrete applications for effective teaching. Drone technology serves as an excellent, tangible medium to demonstrate these concepts vividly. For instance, the visual intelligence, smart collaboration, and planning involved in a drone light show are successful real-world applications of AI technologies like deep learning and visual recognition. Through project-based learning, students can bridge the gap between theoretical knowledge and practical application, fostering their interest in AI, developing interdisciplinary application skills, and enhancing hands-on practical and innovative thinking.

Our educational module, centered on robotics and AI, is a transdisciplinary course designed to introduce the fundamental ideas, theories, methods, and applications of AI. It incorporates cutting-edge knowledge from both AI and robotics, aiming to equip students with the comprehensive skills needed to participate in AI projects. The instructional strategy employs flipped classrooms and process-based evaluation to enhance student initiative and interaction. To help students from varied disciplines understand abstract algorithms and models, we integrate AI technology into applied systems, deepening their comprehension of methods, procedures, and principles through hands-on experimentation. Student teams undertake projects involving humanoid robots, smart vehicles, and drone swarms, assuming different roles to appreciate the contribution of distinct domain expertise to an AI project, thereby achieving interdisciplinary integration and personalized development.
Multi-agent collaboration is a vital branch of distributed AI and a key teaching component, covering theories like multi-agent planning, negotiation, and interaction mechanisms, with broad applications in military, transportation, and healthcare fields. A drone swarm is a significant carrier and testbed for multi-agent collaboration. In recent years, commercial drone light show performances have captivated public attention and ignited student curiosity. To help students better understand multi-agent path planning technology, we introduced a drone swarm formation experiment. Students collaborate to design a drone light show performance, exploring methods for optimizing drone flight paths and experiencing the development workflow of multi-agent collaborative systems.
Background and Rationale for the Virtual Experiment
Currently, conducting physical drone experiments, especially swarm formation exercises, is prohibitively challenging in an academic setting. The obstacles are multifaceted: stringent airspace regulations for flight vehicles, significant safety concerns during operation, substantial time investment, and complex personnel and equipment management. In traditional pedagogy, this content is often confined to theoretical lectures, resulting in low efficiency, poor experiential learning, and difficulty in stimulating student engagement. This project employs virtual simulation technology to immerse students in the entire process of designing a drone light show. Through stages like formation design, path optimization, and lighting choreography, students experience the AI project design workflow and collaborative dynamics. They master the characteristics of drone swarm flight, comprehend the principles of multi-agent coordination, explore methods to minimize collision probability, and ultimately enhance their practical innovation capabilities.
Experiment Objectives
The primary objectives of this virtual simulation experiment are twofold:
1. Foundational Knowledge and Skill Integration: To help students grasp the fundamental principles of drone swarm formation transformation. By working through a practical case study, students practice the operational processes of formation design, transformation, and lighting control, mastering the basic APIs for drone flight and lighting. Completing a drone swarm array transformation project allows them to appreciate the role of different disciplines within an AI project team, cultivate an AI mindset, and achieve an organic fusion of knowledge, skills, and competency.
2. Advanced Exploration and Capability Cultivation: To guide students in exploring trajectory control and risk mitigation methods, encouraging them to investigate the use of AI techniques for planning optimal flight paths to reduce the probability of collisions within the swarm system. This fosters their comprehensive ability to solve complex problems. The virtual platform breaks the spatiotemporal constraints of physical teaching facilities, enabling students to conduct virtual operations anytime, anywhere. It familiarizes them with drone swarms as a crucial carrier of AI technology, significantly enhancing the intuitiveness and experiential quality of the learning process.
Experiment Principles, Pedagogy, and Methodology
This experiment simulates the dynamic process of drone swarm formation transformation using 3D modeling, animation simulation, 3D human-computer interaction, and visualization techniques. The overall principle is illustrated in the block diagram below. The transformation between different swarm arrays is achieved through centralized control by a central processing platform.
Principles in a Physical Scenario
Trajectory Control Principle: The control system must pre-program each drone's position, lighting, and flight-path plan. This plan is then executed via each drone's onboard flight control, positioning, and communication systems to achieve the swarm performance.
| Subsystem | Component & Technology | Virtual Simulation Focus |
|---|---|---|
| Flight Control System | Simulates programmable quadcopters. Uses virtual sensors (pressure, accelerometer, gyroscope) to provide data on acceleration, velocity, altitude, and tilt to the central platform. | Implements a 3D coordinate system to determine position and velocity. Simulates sensor data streams for student observation. |
| Positioning System | Outdoor: Differential GPS (DGPS) for high precision. A ground-based reference station provides correction data. | Simulates coordinate transformation from latitude/longitude to XYZ. Represents the positioning process without simulating the DGPS protocol in detail. |
| Communication System | Central-to-drone link via mobile network (4G/5G) or radio (e.g., IEEE 802.5 token ring). Token ring ensures orderly, collision-free communication. | System-managed communication between virtual drones and central controller. Simulates the communication process and logic, abstracting the underlying protocol. |
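To make the positioning abstraction concrete, the sketch below converts latitude/longitude/altitude into a local XYZ frame using an equirectangular (flat-Earth) approximation, which is adequate over the few hundred metres a light show spans. The function name, the chosen origin convention (east/north/up), and the approximation itself are illustrative assumptions, not part of the platform's actual implementation.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def geodetic_to_local_xyz(lat_deg, lon_deg, alt_m, origin_lat_deg, origin_lon_deg):
    """Convert latitude/longitude/altitude to a local XYZ frame (metres),
    using an equirectangular approximation valid over short distances."""
    dlat = math.radians(lat_deg - origin_lat_deg)
    dlon = math.radians(lon_deg - origin_lon_deg)
    x = EARTH_RADIUS_M * dlon * math.cos(math.radians(origin_lat_deg))  # east
    y = EARTH_RADIUS_M * dlat                                           # north
    z = alt_m                                                           # up
    return x, y, z
```

Moving one millidegree of latitude north of the origin yields roughly 111 m in the local frame, which is the scale at which the simulated positioning system operates.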
Drone Programmable Control Interface
The central control platform manages the swarm’s transformation through a set of programmable APIs. It first gathers data (position, velocity, acceleration) from each drone via the communication system, then sends flight control commands based on pre-defined trajectories, and finally makes fine adjustments based on updated sensor data. This cycle is enabled by the following core APIs, which are faithfully simulated in the virtual environment:
| API Function | Description | Virtual Implementation |
|---|---|---|
| `getX()`, `getY()`, `getZ()` | Retrieves the drone's 3D coordinates. | Queries the virtual drone's state in the 3D simulation engine. |
| `getAcc()`, `getSpeed()` | Retrieves current acceleration and velocity vectors. | Calculated based on the simulated physics model. |
| `setLight(sequence)` | Sets the LED lighting color/pattern sequence. | Controls the visual skin and lighting effects on the 3D model. |
| `moveX(speed)`, `moveY(speed)`, `moveZ(speed)` | Commands movement along local axes (forward/back, left/right, up/down). | Applies a velocity change to the drone's simulated physics body. |
| `moveTo(targetX, targetY, targetZ, speed)` | Commands movement to an absolute coordinate at a specified speed. | Calculates a path and invokes the lower-level movement functions. |
| `stay()` | Commands the drone to hover in place. | Sets velocity to zero and activates stability control in the simulation. |
Collision Risk Optimization Principle
Orchestrating a safe and efficient drone light show transformation requires meticulous flight path planning. The primary objective is to minimize the probability of mid-air collisions while adhering to the limited battery life, which constrains total flight distance. The core risk metric is the pairwise collision probability between any two drones (i and j) during the entire maneuver from formation A to formation B, discretized into time steps (e.g., 1-second intervals).
The collision probability $p_{i,j}$ between drone $i$ and drone $j$ can be modeled using a normal distribution, where the likelihood increases exponentially as the distance between them decreases:
$$ p_{i,j} = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{(d_{min}(i, j) - \mu)^2}{2\sigma^2}\right) $$
Here, $\mu$ is the mean and $\sigma$ is the standard deviation of the distribution. The critical distance $d_{min}(i, j)$ is the minimum Euclidean distance between the two drones at any time step during the $T$-second transformation:
$$ d_{min}(i, j) = \min_{t=1}^{T} \sqrt{(x^i_t - x^j_t)^2 + (y^i_t - y^j_t)^2 + (z^i_t - z^j_t)^2} $$
For a safe drone light show, the system must ensure that the maximum pairwise collision probability remains below an extremely low threshold (e.g., $10^{-6}$):
$$ p_{i,j} < 10^{-6}, \quad \forall\, i \neq j $$
Simultaneously, to conserve battery power, the total flight distance $d_i$ for each drone $i$ must stay below a maximum allowable distance $d_{max}$. The flight distance is the sum of the distances between consecutive waypoints $(x^i_n, y^i_n, z^i_n)$ along its path, where $N$ is the number of segments and $(x^i_0, y^i_0, z^i_0)$ is the starting position:
$$ d_i = \sum_{n=1}^{N} \sqrt{(x^i_n - x^i_{n-1})^2 + (y^i_n - y^i_{n-1})^2 + (z^i_n - z^i_{n-1})^2} $$
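Both metrics can be computed directly from trajectories sampled at discrete time steps, as the sketch below shows. It is illustrative only: the default `mu` and `sigma` values are placeholders, not parameters prescribed by the platform.

```python
import math

def min_pairwise_distance(traj_i, traj_j):
    """d_min(i, j): minimum Euclidean distance between two drones over all
    time steps; each trajectory is a list of (x, y, z) samples."""
    return min(math.dist(p, q) for p, q in zip(traj_i, traj_j))

def collision_probability(d_min, mu=0.0, sigma=1.0):
    """p_{i,j} modelled with a normal density of the minimum separation
    (mu and sigma here are illustrative defaults)."""
    return math.exp(-((d_min - mu) ** 2) / (2 * sigma ** 2)) \
        / (math.sqrt(2 * math.pi) * sigma)

def flight_distance(waypoints):
    """d_i: total path length over consecutive waypoints, including the
    starting position as waypoint 0."""
    return sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
```

With `mu = 0`, the modelled probability falls off rapidly with separation, so two drones that never come closer than a few metres contribute a negligible collision risk.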
To simplify the planning problem, the transformation is constrained to use a limited number of key waypoints (e.g., 3). The optimization challenge is therefore a multi-objective problem: find paths for all drones that satisfy $p_{i,j} < 10^{-6}$ while minimizing the maximum $d_i$ (or the total energy consumption). This typically has no analytical solution and is addressed using search algorithms like Genetic Algorithms (GA). Students are tasked with exploring this optimization, which serves as an open-ended problem allowing for the application of various AI, mathematical, or physics-based methods.
A simplified GA process for this can be summarized as follows:
| Step | Process in Drone Path Optimization |
|---|---|
| 1. Encoding | A chromosome represents the full set of 3D waypoints for all drones in the swarm. |
| 2. Initialization | Generate a random population of candidate path sets. |
| 3. Fitness Evaluation | Calculate fitness $F = w_1 * (1/P_{max}) + w_2 * (1/D_{max})$, where $P_{max}$ is the maximum $p_{i,j}$ and $D_{max}$ is the maximum $d_i$. Weights $w_1, w_2$ prioritize safety vs. distance. |
| 4. Selection | Select parent chromosomes with probability proportional to their fitness. |
| 5. Crossover & Mutation | Combine parent waypoints (crossover) and randomly perturb coordinates (mutation) to create offspring. |
| 6. Iteration | Repeat evaluation, selection, and reproduction until a satisfactory solution is found or the generation limit is reached. |
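The GA steps above can be sketched as follows for a three-drone swarm, encoding one intermediate waypoint per drone and using a fitness of the form $F = w_1(1/P_{max}) + w_2(1/D_{max})$ from the table. All numeric values (start/goal positions, population size, mutation rate, Gaussian parameters) are illustrative assumptions, not values fixed by the platform.

```python
import math
import random

random.seed(0)

# Start and goal positions for a small swarm (illustrative values).
STARTS = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 0.0, 0.0)]
GOALS  = [(20.0, 20.0, 10.0), (10.0, 20.0, 10.0), (0.0, 20.0, 10.0)]
N_DRONES = len(STARTS)

def sample_path(start, mid, goal, steps=20):
    """Piecewise-linear trajectory start -> mid -> goal, sampled per time step."""
    half = steps // 2
    pts = []
    for a, b, n in ((start, mid, half), (mid, goal, steps - half)):
        for k in range(n):
            t = k / n
            pts.append(tuple(a[d] + t * (b[d] - a[d]) for d in range(3)))
    pts.append(goal)
    return pts

def fitness(chromosome, mu=0.0, sigma=1.0, w1=1.0, w2=1.0):
    """Chromosome = one mid-waypoint per drone. Higher fitness is better."""
    trajs = [sample_path(STARTS[i], chromosome[i], GOALS[i]) for i in range(N_DRONES)]
    def p(i, j):  # pairwise collision probability from the minimum separation
        d_min = min(math.dist(a, b) for a, b in zip(trajs[i], trajs[j]))
        return math.exp(-((d_min - mu) ** 2) / (2 * sigma ** 2)) \
            / (math.sqrt(2 * math.pi) * sigma)
    p_max = max(p(i, j) for i in range(N_DRONES) for j in range(i + 1, N_DRONES))
    d_max = max(math.dist(STARTS[i], chromosome[i]) + math.dist(chromosome[i], GOALS[i])
                for i in range(N_DRONES))
    return w1 / (p_max + 1e-12) + w2 / (d_max + 1e-12)

def random_mid():
    return tuple(random.uniform(0, 20) for _ in range(3))

def evolve(pop_size=30, generations=40, mutation_rate=0.2):
    # initialization: random candidate waypoint sets
    pop = [[random_mid() for _ in range(N_DRONES)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        pop = scored[: pop_size // 2]            # selection: keep the fitter half
        while len(pop) < pop_size:
            a, b = random.sample(scored[: pop_size // 2], 2)
            cut = random.randrange(1, N_DRONES)  # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:  # mutation: resample one waypoint
                child[random.randrange(N_DRONES)] = random_mid()
            pop.append(child)
    return max(pop, key=fitness)

best = evolve()
```

In the actual experiment, the optimized waypoints would be fed back into the simulation through the control APIs so the platform can log positions and verify the realized $p_{i,j}$ and $d_i$.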
Principles in the Virtual Simulation
The virtual environment provides a high-fidelity simulation of the key elements required to understand intelligent swarm coordination. The core components and their simulation fidelity are:
1. Drone, Lighting, and Scene Modeling: Drone models are created from 3D scans of real quadcopter components, achieving 100% fidelity for core mechanical parts. Lighting is simulated via dynamic texture mapping on the 3D model surface, with over 80% visual fidelity to real LED effects. The terrain and skybox provide spatial context with realistic lighting (sun position, ambient light). Physically-based animation controllers drive the takeoff, flight, and landing sequences.
2. Simulation of Control, Positioning, and Communication: A physics engine simulates the drone's dynamics. Students can monitor virtual sensor readouts (position, velocity) in real time. The positioning-system abstraction converts between simulation coordinates and a mock GPS-like system. Communication is handled by the simulation kernel, ensuring command and data flow between the central controller and drones without low-level protocol emulation.
3. Simulation of the Programmable Interface: The control APIs listed in the table above are fully implemented in the simulation. Code written by students for physical drones can be ported directly into the virtual controller, compiled, and executed to command the virtual swarm identically.
4. Simulation of the Design Workflow: The experiment is structured into clear phases: Formation Design, Lighting Choreography, and Risk Control. Students use interactive 3D tools to design formations, plan light patterns, and immediately visualize the results via simulation. The system provides feedback, warning of high collision probability if paths are poorly planned.
5. Simulation of the Optimization Process: Students implement optimization algorithms (like GA) externally. The resulting path coordinates are fed into the simulation via the control APIs for verification. The simulation software logs drone positions at each time step, calculates the actual $p_{i,j}$ and $d_i$, and uses these metrics for objective performance evaluation of the student's solution.
System Architecture, Development Environment, and Output
The platform adopts a client-server architecture. A remote cloud server manages user accounts, stores experiment data and simulation results, and supports instructor evaluation. The local client application, run in a web browser, contains the core simulation modules: data processing, 3D resource management, and system control.
| Aspect | Specification |
|---|---|
| Core Technologies | 3D Simulation, HTML5, WebGL |
| Development Tools | Unity 3D, Three.js library |
| Server Environment | CPU: 16-core, RAM: 16 GB, OS: Windows Server, Database: MySQL |
| Client Environment | Modern Web Browser with WebGL support |
| Key Output | Interactive 3D visualization of the drone light show, real-time parameter dashboards, collision risk reports, and flight efficiency metrics. |
The user interface is divided into a control panel and a 3D viewport. The control panel allows students to select or design formations, program flight and light sequences, initiate the simulation, and view analytical results. The 3D viewport renders the entire spectacle: the drones, their flight paths, and the synchronized light patterns against a virtual backdrop, providing an immersive and intuitive experience of the final drone light show.
Implementation Process and Educational Outcomes
This project utilizes drone swarms as a pedagogical vehicle, constructing a comprehensive simulation system for formation transformation via 3D modeling tools. It integrates multiple knowledge points involved in drone swarm operation. Through the platform, students experience the full development cycle of a real drone light show performance in a safe, virtual environment.
The interactive process allows students to freely manipulate the 3D space using mouse and keyboard to design formation patterns, plan lighting modes, and inspect flight parameters. They can programmatically control every aspect of the swarm using the simulated APIs. This virtual experimentation deepens their understanding of drone operational systems, formation control, and risk optimization.
The formation design and lighting choreography phases unleash students’ spatial imagination and creativity, with instant animation feedback reinforcing their understanding of swarm control logic. The multi-perspective, multi-scale 3D environment facilitates exploratory learning. The open-ended nature of the path optimization challenge promotes inquiry-based learning, significantly enhancing students’ comprehensive practical abilities.
The educational impact is measurable across several dimensions:
| Assessment Metric | Traditional Lecture-Based Approach | Virtual Simulation-Based Approach |
|---|---|---|
| Conceptual Understanding | Passive, abstract. Difficulty visualizing swarm dynamics and collision concepts. | Active, concrete. Direct manipulation and visualization solidify understanding of kinematics and coordination. |
| Practical Skill Acquisition | Minimal to none. No hands-on experience with control systems or APIs. | High. Students gain proficiency in using control interfaces, programming behaviors, and debugging in a realistic context. |
| Engagement & Motivation | Low. Theoretical content can be disengaging. | High. The gamified, creative process of designing a drone light show is highly motivating. |
| Safety & Accessibility | N/A for swarm experiments. | Full. Eliminates all physical risks and logistical barriers, allowing unlimited, asynchronous practice. |
| Interdisciplinary Application | Difficult to demonstrate. | Clear. Students from CS, engineering, design, and physics can see their domain knowledge applied to a unified AI project. |
In conclusion, this virtual simulation experiment for drone swarm formation transformation effectively addresses the critical gap between AI theory and multi-agent systems practice. By centering the learning experience around the captivating application of a drone light show, it transforms abstract principles of coordinated motion, path planning, and optimization into tangible, engaging challenges. It provides a scalable, safe, and powerful platform for cultivating the interdisciplinary skills and innovative thinking essential for the next generation of engineers and AI practitioners.
