Application Technology of Lighting UAV in Smart Inspection of Nightscape Lighting

With the rapid development of urbanization and the nighttime economy, nightscape lighting projects have become a crucial component of urban landscapes and economic growth. Numerous lighting fixtures are installed in high or hard-to-reach locations, such as building facades, forming spectacular nightscape lighting systems. At the same time, the number of fixtures in these systems is growing rapidly, making them larger and more complex and posing significant challenges for daily maintenance and management. Traditional manual inspection methods consume substantial human resources and often fail to detect faulty fixtures promptly, leading to slow maintenance responses, potential regional failures, increased maintenance costs, and reduced overall reliability of the lighting system. There is therefore an urgent need to introduce intelligent, automated technologies and establish remote self-inspection systems for nightscape lighting fixtures, enabling real-time monitoring and fault early warning, thereby improving response speed, reducing costs, and ensuring lighting effects.

The introduction of a remote self-inspection system for nightscape lighting fixtures offers multiple benefits. First, it can quickly identify and locate faulty or aging fixtures, ensuring the integrity of nightscape displays. Second, compared to manual inspection, automated systems significantly improve fault detection efficiency, ensuring timely responses and handling. Additionally, automated inspection eliminates the safety risks associated with work at heights while reducing labor costs. Finally, the system can enable predictive maintenance based on fixture operating status, extending fixture lifespan.

In recent years, lighting UAV technology has gradually become the preferred solution for automation in various industries. In nightscape lighting inspection, lighting drones demonstrate immense potential. Based on comparative analysis, Table 1 illustrates the differences between traditional manual inspection and lighting UAV inspection across multiple dimensions.

Table 1: Comparative Analysis of Traditional Manual Inspection and Lighting UAV Inspection
| Comparison Dimension | Traditional Manual Inspection | Lighting UAV Inspection |
| --- | --- | --- |
| Operation Method | Relies on on-site manual checks and observation tools; requires close proximity to objects; records kept by hand or on portable devices. | Autonomous flight data collection; remote control; equipped with high-definition cameras and sensors. |
| Efficiency and Coverage | Slow inspection speed; limited coverage; difficulty accessing hazardous areas. | Fast inspection speed; wide area coverage; easy access to elevated and hazardous zones. |
| Safety | Safety risks from work at heights or contact with high-voltage equipment; affected by weather. | Reduces hazardous manual work; operators work safely from the ground. |
| Data Processing and Analysis | Relies on manual processing; low efficiency; prone to errors; real-time data collection is difficult. | Automated processing with AI algorithms; real-time data transmission and analysis. |
| Cost-Effectiveness | High labor costs; expenses for multiple inspection tools. | High initial investment; long-term reduction in labor and safety-incident costs. |
| Environmental Impact | On-site operations may disturb equipment; high-pollution environments harm worker health. | Minimal interference with equipment; can operate safely in polluted or hazardous environments. |
| Accuracy and Reliability | Relies on experience; prone to deviation; minor faults and concealed areas are hard to detect. | High-precision cameras and sensors detect subtle faults; image-processing algorithms enable comprehensive detection. |

Lighting UAVs can efficiently cover large-scale structures, bridges, and other challenging areas, achieving comprehensive inspection of lighting fixtures and meeting the real-time monitoring needs of large systems. Moreover, advanced sensors equipped on lighting drones, such as infrared thermal imaging, allow in-depth analysis of fixture operating status, enabling timely detection and prediction of potential faults. Lighting UAV technology avoids the risks of personnel working at heights, significantly reducing overall maintenance costs.

In the current era of digital and intelligent transformation of traditional industries, this research serves as a critical window and opportunity for the digital-intelligent transition of the nightscape lighting sector. It also represents a concrete application of the low-altitude economy in the lighting industry and is poised to become a new trend in the industry's development.

Related Technology Overview

Application of Lighting UAV Technology in Nightscape Lighting Inspection

As technology advances, lighting UAV inspection is increasingly applied in practical scenarios, with its efficiency widely validated. In nightscape lighting inspection, lighting drones offer unique advantages.

First, the high cruising efficiency of lighting UAVs brings convenience to fixture inspection. They can quickly perform large-scale flights, capturing high-definition images from multiple angles. Compared to traditional manual inspection, lighting drones not only improve efficiency but also significantly reduce labor costs, especially in areas with obstacles or complex terrain, which lighting UAVs can access easily, making inspections more comprehensive.

Second, lighting drones are equipped with advanced sensors and camera devices, enabling accurate recording of fixture status. These records include not only images but also detailed descriptions of faults, such as location and type, providing strong data support for troubleshooting. Combined with infrared thermal imaging technology, lighting UAVs can effectively predict hidden or early-stage faults.

However, lighting UAV technology still faces challenges in fixture inspection, such as ensuring flight safety, effectively planning inspection routes for large-scale lighting projects, and accurately identifying faulty fixtures. Addressing these issues requires further research and technological innovation.

Advances in Lighting Fixture Fault Detection Methods Based on Image Processing

In recent years, significant progress has been made in image processing for nightscape lighting fault detection. For instance, image segmentation and edge detection techniques effectively segment lighting fixture images; features such as shape, grayscale, and gradient are used to identify fault states; image correction and target detection technologies detect fault areas in images captured by fixed cameras. Although these methods improve detection accuracy, the high density of fixtures in urban nightscapes and environmental factors like lighting changes and weather conditions still affect the accuracy of image segmentation and target detection. Additionally, the decrease in detection accuracy due to positional deviations in images collected by lighting UAVs must be addressed.

Nightscape Lighting Inspection Method Combining Lighting UAV and Image Processing Technology

Integrating the efficient image acquisition capabilities of lighting drones with AI-based image processing, we propose a multi-level inspection method for nightscape lighting faults. Key steps include:

  1. Lighting UAV Image Acquisition: Use lighting drones equipped with high-resolution cameras to capture multi-angle images of nightscape lighting areas. Combined with various sensors, such as infrared and thermal imaging devices, lighting UAVs can also capture temperature distributions of fixtures, adding dimensions and accuracy to fault detection.
  2. Image Preprocessing: Perform preprocessing on collected images, including denoising, contrast adjustment, histogram equalization, and Gaussian filtering, to provide clear images for subsequent analysis (see the code sketch after this list). Advanced image segmentation techniques, such as semantic segmentation in deep learning, are used to accurately identify lighting fixture regions.
  3. Lighting Fixture Target Detection: Use deep learning target detection algorithms, such as the Regions with Convolutional Neural Network (R-CNN) series or You Only Look Once (YOLO) series, to identify and detect faults in lighting images. Additionally, by integrating features like shape and texture, faulty fixtures can be more precisely identified and located.
  4. Fault Analysis and Alert: Based on detection results, generate detailed fault analysis reports and enable remote data access and processing via cloud services. Through instant alert systems, managers receive fault notifications immediately, greatly improving maintenance efficiency and response speed.
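
As a concrete illustration of the preprocessing in step 2, below is a minimal OpenCV sketch; the choice of denoising method and the parameter values are illustrative assumptions rather than the system's exact pipeline.

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Denoise, equalize, and smooth a captured lighting image (BGR)."""
    # Denoising; non-local means copes well with low-light sensor noise
    denoised = cv2.fastNlMeansDenoisingColored(image, None, 10, 10, 7, 21)
    # Histogram equalization on the luminance channel only, so colors
    # are not distorted (a form of contrast adjustment)
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    # Mild Gaussian filtering to suppress residual high-frequency noise
    return cv2.GaussianBlur(equalized, (3, 3), 0)
```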

Automatic Inspection and Fault Analysis System for Nightscape Lighting Based on Lighting UAV

Overall System Design

Overall Framework and Components

The architecture and components of the automatic inspection platform for nightscape lighting based on lighting UAV technology are shown in Figure 1. The system consists of four parts: the front end, the network, the cloud platform, and the back end. To meet the automatic inspection needs of nightscape lighting, we introduce advanced lighting UAV technology.

The main components and their functions are as follows:

  • Lighting UAV Platform: This system employs a stable lighting UAV platform capable of autonomous flight along predetermined routes. Through its API interface, it can acquire high-definition video images and geographic coordinates in real-time. This open data interface also supports issuing control commands, such as adjusting attitude, speed, and position, ensuring precise execution of complex inspection tasks and achieving high automation and intelligence throughout the inspection process, significantly enhancing the efficiency and accuracy of nightscape lighting inspection.
  • Camera Equipment: The lighting drone is equipped with dual-light high-definition camera devices and a 3-axis mechanical gimbal (pitch, roll, pan) for capturing lighting fixture images. The camera’s resolution and quality must be sufficiently high to ensure clear, detailed images for accurate subsequent image processing.
  • Data Transmission System: Establish an efficient and stable data transmission system for data transfer between the lighting UAV and ground control station. Ensure timely transmission of captured lighting images and related data.
  • Ground Control Station: As the system’s center, the ground control station receives, processes, and analyzes lighting image data captured by the lighting UAV. It requires image processing and fault analysis algorithms, computational power, and data storage capabilities.
  • Image Processing Algorithms: Run image processing algorithms on the ground control station for preprocessing, target detection, and fault analysis of captured lighting images. Employ the latest image processing and machine vision algorithms, such as deep learning and target detection algorithms, to achieve automated lighting fault analysis and diagnosis.
  • Database: Establish a dedicated database to store lighting image data, fixture information, and fault records, facilitating troubleshooting and maintenance (a schema sketch follows this list).
  • Human-Machine Interface: The ground control station provides a user-friendly interface for operators to monitor the lighting UAV’s flight status, specify inspection tasks, and view fixture fault information.
  • Fault Analysis and Alert Module: The ground control station generates fault reports based on image processing algorithm results. For critical faults, the system triggers alert mechanisms to promptly notify maintenance personnel for action.
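
As one possible realization of the database component, the sketch below defines a minimal SQLite schema; the table and column names are illustrative assumptions, not the system's actual schema.

```python
import sqlite3

conn = sqlite3.connect("inspection.db")
conn.executescript("""
-- Hypothetical schema: table and column names are illustrative only.
CREATE TABLE IF NOT EXISTS fixture (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    latitude  REAL,
    longitude REAL
);
CREATE TABLE IF NOT EXISTS fault_record (
    id          INTEGER PRIMARY KEY,
    fixture_id  INTEGER REFERENCES fixture(id),
    detected_at TEXT NOT NULL,  -- ISO-8601 timestamp
    fault_type  TEXT,           -- e.g. 'dark', 'flicker', 'color shift'
    confidence  REAL,           -- detector confidence score
    image_path  TEXT            -- evidence image captured by the UAV
);
""")
conn.commit()
```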

Lighting UAV Automatic Inspection Process

Intelligent Inspection and Fault Response Process:

This system integrates cutting-edge lighting UAV technology, high-resolution dual-light cameras (visible light + infrared), advanced image processing software, and high-speed 4G/5G communication networks to achieve a highly coordinated and responsive lighting inspection mechanism. The cloud platform handles automated scheduling, instructing the lighting UAV to take off from its nest at scheduled times and execute inspection tasks along preset routes. In nighttime or low-light environments, the dual-light cameras on the lighting drone can accurately capture the operating status of fixtures. Upon detecting faulty or abnormally operating fixtures, the onboard intelligent image analysis software immediately processes the captured image data to identify potential issues. The lighting UAV then uses its 4G/5G communication module to feed back the exact location and status of faulty fixtures to the inspection cloud platform in real time. The cloud platform, with its efficient data processing capabilities, quickly categorizes and analyzes the fault information and sends fault reports to the maintenance team. This process not only improves maintenance response speed but also significantly saves labor and time costs during maintenance, enhancing overall efficiency. Additionally, the automatically recorded inspection data supports long-term operational management of urban lighting systems. The specific processes are shown in Figures 2 and 3.

Image Recognition and Fault Analysis Algorithm Process:

As shown in the system framework in Figure 4, the lighting UAV automatically takes off and executes inspection tasks based on predefined routes, collecting image information of lighting fixtures in real-time. These real-time collected images are matched and compared with pre-stored complete lighting effect image sets in the database. By employing image fusion technology based on channel attention mechanisms, the system generates feature maps that integrate information from the images to be detected and complete lighting effect images. This is a key step in providing input data for deep learning target detection algorithms (e.g., R-CNN, YOLO). These algorithms comprehensively process information in the feature maps, accurately identify and mark the locations of faulty fixtures, and generate detailed fault reports with confidence scores. This series of processes ensures the accuracy of detection results and significantly improves the speed and efficiency of fault handling.

Lighting UAV Inspection Path Planning and Control Methods

Path planning and control of lighting UAVs play a crucial role in ensuring the efficiency and safety of night lighting inspection systems. This section delves into the theoretical basis, key technologies, and implementation methods of path planning and control techniques for lighting UAVs in night lighting inspection tasks.

Path Planning Methods

  1. High-Precision Division of Inspection Areas: Combining Geographic Information Systems (GIS) and Computer-Aided Design (CAD) tools, this system performs high-precision geographic block analysis and division, achieving systematic management of inspection areas and efficient planning of lighting UAV inspection routes. Furthermore, this precise spatial data processing provides detailed map support for the lighting UAV’s navigation system.
  2. Multi-Variable Optimization Route Planning Algorithm: By integrating Multi-Objective Genetic Algorithm (MOGA) and Improved Particle Swarm Optimization (IPSO) algorithms, the system can optimize route design within a multi-dimensional parameter space. This algorithm comprehensively considers parameters such as terrain, obstacle height, risk assessment, and lighting UAV power system performance, optimizing routes while ensuring comprehensive inspection coverage and minimized flight risk.
  3. Adaptive Waypoint Positioning Mechanism: For routes generated by optimization algorithms, the system automatically determines waypoint intervals and positions through an adaptive waypoint positioning mechanism, ensuring effective coverage of each fixture to be inspected. This mechanism considers the lighting UAV’s field of view (FOV), the optical characteristics of the payload camera, and the geographic distribution of target fixtures, guaranteeing monitoring efficiency and accuracy (see the spacing sketch after this list).
  4. Real-Time Dynamic Path Planning System: This system integrates real-time data transmission and environmental perception technologies of lighting UAVs, creating a self-adjusting dynamic path planning system. It can monitor and analyze environmental factors in real-time, including weather changes and unpredictable no-fly zones, quickly making path adjustments to ensure continuous execution of inspection tasks and flight safety.
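
To make the adaptive waypoint mechanism in item 3 concrete, the following sketch derives waypoint spacing from flight altitude and camera field of view, assuming a nadir-pointing camera and a required image overlap; the formula and example numbers are illustrative assumptions, not parameters from the deployed system.

```python
import math

def waypoint_spacing(altitude_m: float, fov_deg: float,
                     overlap: float = 0.3) -> float:
    """Distance between successive waypoints so that consecutive images
    overlap by the requested fraction (nadir-pointing camera assumed)."""
    # Ground footprint of one image along the flight direction
    footprint = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    return footprint * (1.0 - overlap)

# e.g. at 40 m altitude with an 84-degree FOV and 30% overlap:
# footprint is roughly 72 m, so waypoints fall about every 50 m
print(waypoint_spacing(40.0, 84.0))
```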

Control Methods

  1. Multi-Sensor Fusion Flight Control System: To ensure high autonomy and precise control of the lighting UAV, we employ a multi-sensor fusion technology that integrates Micro-Electro-Mechanical Systems (MEMS) accelerometers, gyroscopes, magnetometers, and Differential Global Positioning System (DGPS), providing more accurate positioning and flight stability.
  2. High-Speed Data Link and Intelligent Waypoint Navigation Technology: Through the latest wireless communication standards, such as Long-Term Evolution (LTE) or Fifth Generation Mobile Communication Technology (5G), we ensure low-latency and high-reliability data links between the ground control station and the lighting UAV. This data link provides real-time route commands and environmental data for the intelligent waypoint navigation system, enabling the lighting UAV to perform highly automated flight along preset routes.
  3. Comprehensive Flight Safety Monitoring System: We integrate LiDAR and advanced computer vision technologies, implementing real-time 3D obstacle detection and classification through onboard sensor and external environmental sensor data fusion, enhancing the decision-making capability of the obstacle avoidance system.
  4. Autonomous Emergency Handling Protocol: To address potential emergencies during lighting UAV flight, we design an autonomous emergency handling protocol. This protocol can activate autonomous return or emergency landing procedures immediately upon detecting critical system failures or after task completion, maximizing the safety of equipment and personnel.
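
The emergency handling protocol in item 4 might reduce to decision rules like the following minimal sketch; the trigger conditions and thresholds are hypothetical and would depend on the actual airframe and applicable regulations.

```python
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    RETURN_HOME = auto()
    EMERGENCY_LAND = auto()

def emergency_action(battery_pct: float, link_ok: bool,
                     motor_fault: bool, task_done: bool) -> Action:
    """Hypothetical decision rules for the autonomous emergency protocol."""
    if motor_fault:
        return Action.EMERGENCY_LAND   # critical system failure: land now
    if battery_pct < 15.0 or not link_ok:
        return Action.RETURN_HOME      # degraded state: fly back to the nest
    if task_done:
        return Action.RETURN_HOME      # normal completion: autonomous return
    return Action.CONTINUE
```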

In summary, the lighting UAV inspection path planning and control technologies proposed in this study not only enhance the automation and intelligence level of nightscape lighting system maintenance but also significantly improve inspection efficiency and reduce operational risks.

Image Processing and Recognition

Image Acquisition and Processing

Acquisition of Lighting Effect Images:

In nightscape lighting fault detection, lighting effect images are a key source of information. The lighting UAV flies along predetermined routes, capturing images with its high-definition camera when it reaches specified detection areas. These images are mainly divided into two categories: one is the complete lighting effect image set after construction completion, serving as a standard reference; the other is the real-time lighting effect images obtained during daily inspections, used for comparison with standard images. To ensure the accuracy of algorithm training, we use high-resolution, low-noise images. Simultaneously, to enhance the model’s recognition capability under various conditions, image enhancement techniques such as random rotation, cropping, and color adjustment are applied during training.
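
The augmentation techniques mentioned above (random rotation, cropping, and color adjustment) could be expressed with torchvision transforms as in this sketch; the magnitudes and target image size are illustrative assumptions.

```python
import torchvision.transforms as T

# Illustrative training-time augmentation; magnitudes are assumptions.
train_transforms = T.Compose([
    T.RandomRotation(degrees=10),                      # random rotation
    T.RandomResizedCrop(size=640, scale=(0.8, 1.0)),   # random cropping
    T.ColorJitter(brightness=0.3, contrast=0.3,
                  saturation=0.2),                     # color adjustment
    T.ToTensor(),
])
```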

Building Sample Sets:

Sample set construction is the foundation of machine learning and deep learning training. We use the LabelImg tool for image annotation because of its user-friendly interface and rapid, effective annotation capabilities. Fault areas are marked with rectangular boxes, where (x, y) represents the center-point coordinates of a box and (w, h) its width and height, as shown in Figure 5. Sample data is divided into three parts, a training set, a validation set, and a test set, in an 8:1:1 ratio, to ensure effective evaluation of model performance at each stage.
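
The 8:1:1 split could be implemented as in the following sketch; the fixed random seed is an assumption added for reproducibility.

```python
import random

def split_samples(samples: list, seed: int = 42):
    """Shuffle annotated samples and split them 8:1:1 into
    training, validation, and test sets."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(0.8 * len(shuffled))
    n_val = int(0.1 * len(shuffled))
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```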

Lighting Fixture Fault Detection Algorithm

Proposal and Background of Image Fusion Algorithm:

In the fault detection process for nightscape lighting, deviations in lighting UAV positioning mean that images captured on different flights are not taken from exactly the same location, introducing errors when comparing images to be detected against the standard complete lighting effect images. To address this, we propose an image fusion algorithm based on a channel attention mechanism. This mechanism assigns different weights to different complete lighting effect images; through training, the complete lighting effect image that best matches the image to be detected receives the highest weight during fusion, making subsequent fault detection more accurate and robust.

Image Fusion Algorithm Based on Channel Attention Mechanism:

The core idea of the channel attention mechanism is to assign a weight to each channel, thereby highlighting the features most relevant to the image to be detected and improving image matching accuracy. The process is shown in Figure 6 and mainly includes the following steps (a PyTorch sketch follows the list):

  1. Initialization and Image Concatenation: First, the complete lighting effect image set (e.g., A1, A2, …, An) is concatenated with the image to be detected along the channel dimension. This operation lays the foundation for subsequent feature extraction and weight calculation.
  2. Feature Extraction and Weight Generation: The concatenated images undergo convolution and down-sampling operations to extract key image features. These features are transformed into an n-dimensional weight vector (e.g., α1, α2, …, αn) through fully connected layers, where each weight represents the matching degree between the image to be detected and the corresponding image in the lighting effect image set.
  3. Image Weighted Fusion: Each lighting effect image is weighted and fused according to its corresponding weight, generating the fused image Aα:
    $$ A_{\alpha} = \sum_{i=1}^{n} \alpha_i A_i $$
  4. Re-concatenation of Fused Image and Image to Be Detected: To further strengthen the information of the image to be detected and prevent excessive loss of its features, the fused image Aα is concatenated again with the image to be detected along the channel dimension, generating a more comprehensive lighting effect fusion feature map.
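
A minimal PyTorch sketch of steps 1 through 4 follows; the convolutional layer sizes and the softmax normalization of the weight vector are illustrative assumptions, since the exact architecture is not specified here.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Sketch of the channel-attention fusion; sizes are illustrative."""
    def __init__(self, n_refs: int, in_ch: int = 3):
        super().__init__()
        # Step 2: feature extraction on the concatenated image stack
        self.features = nn.Sequential(
            nn.Conv2d(in_ch * (n_refs + 1), 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fully connected head producing one weight per reference image
        self.weight_head = nn.Linear(64, n_refs)

    def forward(self, refs: torch.Tensor, query: torch.Tensor):
        # refs:  (B, n, C, H, W) complete lighting effect images A_1..A_n
        # query: (B, C, H, W)    image to be detected
        b, n, c, h, w = refs.shape
        # Step 1: concatenate references and query along the channel axis
        stack = torch.cat([refs.reshape(b, n * c, h, w), query], dim=1)
        feat = self.features(stack).flatten(1)
        # Step 2: n-dimensional weight vector (alpha_1, ..., alpha_n)
        alpha = torch.softmax(self.weight_head(feat), dim=1)
        # Step 3: weighted fusion A_alpha = sum_i alpha_i * A_i
        fused = (alpha.view(b, n, 1, 1, 1) * refs).sum(dim=1)
        # Step 4: re-concatenate the fused image with the query
        return torch.cat([fused, query], dim=1)
```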

Deep Learning Target Detection Algorithm:

Through the image fusion algorithm, we fuse the complete lighting effect images and the images to be detected into a lighting effect fusion feature map. To determine the fault areas of lighting fixtures, it is only necessary to perform target detection on this fusion feature map, a task that deep learning target detection algorithms can accomplish efficiently. Target detection algorithms are mainly divided into two-stage and one-stage types. Two-stage algorithms, such as the R-CNN series, first extract potential target regions and then classify them; one-stage algorithms, such as the YOLO series, determine target positions and classifications simultaneously. Two-stage algorithms achieve higher accuracy but slower detection; one-stage algorithms detect quickly but may sacrifice some accuracy. Given the characteristics and requirements of this task, we choose the one-stage YOLO algorithm as the foundation.

For the lighting effect fusion feature map, the overall structural framework of the YOLO algorithm is shown in Figure 7. First, we scale the lighting effect fusion feature map of size W × H to L × L. Then, the image is divided into S × S grids (here, S is chosen based on our experimental data to balance accuracy and speed). Each grid obtains its features through a series of convolution and down-sampling operations, and then outputs a vector of length S × S × B × 5 through fully connected layers. Here, B represents the number of predicted bounding boxes per grid, and 5 represents the four coordinates and confidence of the bounding box. This vector directly provides the location information of the fault areas we need. To remove overlapping prediction results, we use the Non-Maximum Suppression (NMS) method, which effectively removes redundant bounding boxes, ensuring only the most representative predictions are retained.
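
The NMS step mentioned above is a standard greedy procedure. The sketch below implements it for corner-format boxes (center/width/height predictions would be converted first); the IoU threshold is an illustrative value.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2) rows."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]   # indices sorted by confidence, descending
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)               # highest-scoring remaining box survives
        # Intersection of box i with every remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes overlapping the kept box beyond the threshold
        order = order[1:][iou <= iou_thresh]
    return keep
```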

The model is trained using the training set, training completion is determined via the validation set, and model performance is evaluated using the test set. Evaluation metrics include mean Average Precision (mAP), accuracy, and recall rate, among others, to ensure the model has good generalization capability.
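
For reference, precision and recall at a fixed confidence/IoU threshold reduce to counts of true positives, false positives, and false negatives, as in this short sketch; mAP then averages precision over recall levels (and classes).

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision and recall from detection counts at a fixed threshold."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of detections that are real faults
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of real faults detected
    return precision, recall

# e.g. 45 correct detections, 5 false alarms, 10 missed faults:
# precision = 0.9, recall is about 0.82
print(precision_recall(45, 5, 10))
```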

Real-Time Fault Identification

After successfully completing model training and evaluation, the next step is to deploy it as a real-time fault identification system. The core steps to achieve real-time fault identification are as follows:

  1. Data Preprocessing: Perform preliminary processing on images captured by real-time monitoring devices (e.g., cameras) to ensure they have the same format and dimensions as the training dataset. This step includes:
    • Denoising: Especially crucial in low-light environments or with poor-quality cameras, specific denoising algorithms can be used for processing.
    • Color Balance: Correct color deviations in images captured by cameras to ensure image color authenticity.
    • Scaling: Resize images to the dimensions accepted by the model, ensuring the model can accurately process input data.
  2. Image Fusion: Perform feature fusion on preprocessed images using the image fusion algorithm to obtain lighting effect fusion images with better feature representation.
  3. Fault Detection: Input the lighting effect fusion images into the trained target detection model. The model predicts whether a fault has occurred, marks the fault areas, and provides associated confidence levels. To ensure identification accuracy, the model returns multiple fault areas with high confidence after redundancy removal.

Through these steps, we can identify and mark fault areas on real-time images, providing accurate information for subsequent repairs or other actions.
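
Tying the three steps together, one pass of real-time identification might look like the following sketch; preprocess, fusion, and detector stand in for the components sketched earlier, and the detection output format is a hypothetical assumption.

```python
import torch

def identify_faults(frame, refs, preprocess, fusion, detector,
                    conf_thresh: float = 0.6):
    """One real-time identification pass on a captured frame.

    frame: BGR image from the UAV camera (numpy array)
    refs:  complete lighting effect images, tensor of shape (1, n, C, H, W)
    preprocess / fusion / detector: hypothetical stand-ins for the
    components sketched in earlier sections.
    """
    img = preprocess(frame)  # denoise, color balance, scale to model size
    tensor = (torch.from_numpy(img).permute(2, 0, 1)  # HWC -> CHW
              .float().unsqueeze(0) / 255.0)          # batch dim, normalize
    with torch.no_grad():
        fused = fusion(refs, tensor)    # channel-attention fusion
        detections = detector(fused)    # assumed: list of dicts with
                                        # 'box' and 'confidence' keys
    # Keep only confident fault regions (NMS assumed inside the detector)
    return [d for d in detections if d["confidence"] >= conf_thresh]
```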

Result Feedback

When the real-time fault detection system identifies a fault, timely and effective information feedback is essential. Key steps for result feedback include:

  1. Real-Time Alerts: When a fault is detected, the system issues audible or visual alerts and can consider other notification methods, such as vibration alerts or mobile push notifications, to ensure attention is drawn in various environments.
  2. Fault Logging: All detected faults are automatically recorded in a dedicated log, detailing the time, location, and corresponding image or video evidence, supporting subsequent analysis and traceability.
  3. Data Analysis: In-depth analysis of fault logs can reveal common patterns or trends of faults. This analysis not only helps prevent future faults but also aids in smarter resource and budget allocation, formulating efficient maintenance plans.
  4. Remote Access: The real-time fault identification system should connect to cloud databases or monitoring centers, allowing authorized users to view fault records and alerts from any location. Simultaneously, system security and user privacy protection should be fully ensured in the design.
  5. Maintenance Response: The system can communicate with maintenance teams in real time, whether via email, phone, or other instant-messaging tools, automatically notifying maintenance personnel to perform necessary repairs or replacements and ensuring the lighting equipment is restored to normal operation as soon as possible.

Conclusion

This study proposes an automatic inspection and fault analysis system for nightscape lighting that combines lighting UAV technology with deep learning algorithms. This system can accomplish the tasks of fault detection and maintenance for lighting fixtures in nightscape lighting. Compared to traditional manual inspection methods, this system has significant advantages in the field of nightscape lighting maintenance, providing faster and more accurate detection means to help shorten maintenance cycles and reduce costs. In the increasingly large-scale and high-quality development of night lighting, the proposal and development of this system are of great importance and have broad application prospects.
