Project Summary

Abstract

Inspection of assets such as bridges, roads, crops, power lines, and aircraft is usually performed manually, which makes it slow and costly, so automatic asset inspection is an emerging technology of interest to corporations and government departments alike. We are working with Near Earth Autonomy (NEA) and Prof. David Held to develop systems for visual inspection of aircraft and power lines. Near Earth Autonomy's vision is to fully automate asset inspection in general, using payloads of sensors and measurement devices (LiDAR, RGB camera, GPS, IMU) mounted on UAVs. For power line inspection, we perform semantic segmentation of 3D point clouds to detect power line damage and interfering vegetation; our fully automated pipeline, based on the PointNet++ model, achieves high accuracy in both qualitative and quantitative experiments. For aircraft inspection, we formulate defect detection as a bounding-box regression problem; our fast, one-stage, anchor-free model, based on CenterNet, enables online defect detection from images taken at the surface of the aircraft.

Asset Inspection Pipeline by Near Earth Autonomy

Detailed Information

Part (a): Power Line Inspection

Our customer needs to perform power line inspection automatically. Specifically, the requirements are:

  1. Ability to map and georeference the following
    • Encroachment of vegetation on transmission lines
    • Height of trees, poles, circuits, and other visible assets
    • Imagery of assets to identify damage, degradation, and visible identifying marks
    • Health and condition of trees or other vegetation
    • Hot spots and locations on transmission/distribution circuits and equipment assets (breakers, arresters, crimps, etc.)
  2. Rapid post-event survey of assets for damage assessment

To meet these requirements, we aim to produce high-quality 3D semantic segmentation of the regions of interest scanned by LiDAR. We expect Near Earth Autonomy to first provide calibrated and merged 3D point clouds, on which we can directly perform segmentation. Given the segmented results, the requirements above are straightforward to fulfill. For example, to detect encroachment of vegetation on transmission lines, we can search along the segmented power lines for segmented trees that come too close (see the sketch below); to assess power line damage, we can check the continuity of the segmented 3D power line points.
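As an illustration, the encroachment check reduces to a nearest-neighbor query over the segmented points. Below is a minimal sketch using a SciPy KD-tree; the function name, label values, and clearance distance are our own illustrative choices, not NEA's specification.

import numpy as np
from scipy.spatial import cKDTree

def encroaching_vegetation(points, labels, line_label=1, tree_label=2,
                           clearance_m=3.0):
    """points: (N, 3) coordinates; labels: (N,) segmentation labels.
    Returns indices of tree points within clearance_m of any power-line point."""
    line_kd = cKDTree(points[labels == line_label])  # index the line points
    tree_idx = np.flatnonzero(labels == tree_label)
    dist, _ = line_kd.query(points[tree_idx])        # distance to nearest line point
    return tree_idx[dist < clearance_m]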

To perform 3D segmentation, we implemented and customized a PointNet++ model. The input is a patch of 3D points, and the output is the same patch with a segmentation label for each point. The model architecture is shown below.

PointNet++ Architecture
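For readers unfamiliar with PointNet++, its core building block is the set-abstraction layer: sample centroids, group each centroid's neighbors, and max-pool a shared MLP over each group. Below is a simplified PyTorch sketch of one such layer; it is illustrative rather than our production code, and for brevity it omits multi-scale grouping and the feature-propagation decoder that produces the final per-point labels.

import torch
import torch.nn as nn

def farthest_point_sample(xyz, n_samples):
    """Greedy farthest-point sampling; xyz: (B, N, 3). Returns (B, n_samples) indices."""
    B, N, _ = xyz.shape
    b = torch.arange(B, device=xyz.device)
    idx = torch.zeros(B, n_samples, dtype=torch.long, device=xyz.device)
    dist = torch.full((B, N), float("inf"), device=xyz.device)
    farthest = torch.zeros(B, dtype=torch.long, device=xyz.device)
    for i in range(n_samples):
        idx[:, i] = farthest
        d = ((xyz - xyz[b, farthest][:, None]) ** 2).sum(-1)  # sq. dist to newest centroid
        dist = torch.minimum(dist, d)
        farthest = dist.argmax(-1)                            # point farthest from all picks
    return idx

def ball_query(xyz, centroids, radius, k):
    """Indices of up to k neighbors of each centroid within radius (may include
    out-of-radius points when fewer than k exist; acceptable for a sketch)."""
    d = torch.cdist(centroids, xyz)                           # (B, S, N)
    d = d.masked_fill(d > radius, float("inf"))
    return d.topk(k, dim=-1, largest=False).indices           # (B, S, k)

class SetAbstraction(nn.Module):
    """Sample centroids, group neighbors, apply a shared MLP, max-pool per group."""
    def __init__(self, in_ch, out_ch, n_samples, radius, k):
        super().__init__()
        self.n_samples, self.radius, self.k = n_samples, radius, k
        self.mlp = nn.Sequential(
            nn.Conv2d(in_ch + 3, out_ch, 1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 1), nn.BatchNorm2d(out_ch), nn.ReLU())

    def forward(self, xyz, feats):
        # xyz: (B, N, 3) coordinates; feats: (B, N, C) per-point features
        B = xyz.shape[0]
        b = torch.arange(B, device=xyz.device)[:, None]          # (B, 1)
        c_idx = farthest_point_sample(xyz, self.n_samples)       # (B, S)
        centroids = xyz[b, c_idx]                                # (B, S, 3)
        n_idx = ball_query(xyz, centroids, self.radius, self.k)  # (B, S, k)
        grouped_xyz = xyz[b[:, :, None], n_idx] - centroids[:, :, None]
        grouped = torch.cat([grouped_xyz, feats[b[:, :, None], n_idx]], dim=-1)
        grouped = grouped.permute(0, 3, 1, 2)                    # (B, C+3, S, k)
        pooled = self.mlp(grouped).max(-1).values                # (B, out_ch, S)
        return centroids, pooled.permute(0, 2, 1)                # (B, S, out_ch)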

In the training stage, we randomly sample patches of the point cloud as input to the model; in the inference stage, we feed every patch of the point cloud through the model and merge the per-patch segmentations back together. The pipeline for 3D segmentation is shown below.

3D Segmentation Pipeline
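Conceptually, inference amounts to tiling the cloud into patches, segmenting each patch, and writing the labels back by point index. Here is a minimal sketch, assuming a trained model callable that maps an (M, 3) patch to M labels and a simple x-y grid tiling (our actual patch sampling may differ):

import numpy as np

def segment_cloud(points, model, patch_size_m=10.0):
    """points: (N, 3) array; model: callable mapping an (M, 3) patch to (M,)
    integer labels. Returns merged per-point labels for the whole cloud."""
    labels = np.zeros(len(points), dtype=np.int64)
    cells = np.floor(points[:, :2] / patch_size_m).astype(np.int64)  # x-y tiles
    for cell in np.unique(cells, axis=0):
        mask = (cells == cell).all(axis=1)   # points falling in this tile
        labels[mask] = model(points[mask])   # per-point predictions for the patch
    return labels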

Here is an example of our 3D semantic segmentation results, where power lines are shown in green, trees in red, and background points in black. We achieved 97~98% training accuracy and 94~95% validation accuracy, which is high enough for the given dataset. For detailed experimental results, please refer to our blog posts.

3D Semantic Segmentation Example (PointNet++)

Note that we currently do not have well-calibrated 2D image data. One might expect 2D image features to further boost the 3D segmentation results, but our purely 3D segmentation already achieves fairly high accuracy, so the benefit of adding 2D image data might be marginal. In the future, once camera image calibration is finished, we can also use the images corresponding to defect locations to perform a secondary assessment.

Part (b): Aircraft Inspection

This part is highly confidential, as we cannot leak any information about the dataset. We describe our model and results without showing any data.

The goal of the second part of the capstone project is to enable automatic inspection of aircraft condition with drones flying around the surface of the aircraft. To do so, we need to detect defects in the images photographed by the drones. We are provided with a dataset of images taken of aircraft surfaces, with ground-truth bounding boxes of defects labeled.

As a direct approach to the problem, we perform object detection on the images. For speed and the feasibility of training and evaluating online, we prefer one-stage object detection models over two-stage ones; and given the widely varying sizes and aspect ratios of the bounding boxes, we prefer anchor-free models over anchor-based ones. We therefore choose CenterNet as our detection model. CenterNet is an upgraded version of CornerNet that detects objects from center and corner keypoints in a single stage. The model architecture is shown below.

CenterNet Architecture
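To make the anchor-free idea concrete, here is a simplified sketch of heatmap decoding in the style of the CenterNet family: local maxima of a center heatmap become detections, and a size head regresses each box's width and height. Shapes and names are illustrative, and this omits the corner keypoints and sub-pixel offset refinement used by the full model.

import torch
import torch.nn.functional as F

def decode_centers(heatmap, size, k=100):
    """heatmap: (B, C, H, W) post-sigmoid center scores; size: (B, 2, H, W)
    predicted box width/height per pixel. Returns per-image (k, 6) tensors of
    [x1, y1, x2, y2, score, class] in feature-map coordinates."""
    B, C, H, W = heatmap.shape
    # a 3x3 max-pool acts as NMS: keep only local peaks of the heatmap
    is_peak = (heatmap == F.max_pool2d(heatmap, 3, stride=1, padding=1)).float()
    scores, idx = (heatmap * is_peak).view(B, -1).topk(k)  # top-k over C*H*W cells
    cls = idx // (H * W)
    ys, xs = (idx % (H * W)) // W, idx % W
    out = []
    for b in range(B):
        w, h = size[b, 0, ys[b], xs[b]], size[b, 1, ys[b], xs[b]]
        out.append(torch.stack([xs[b] - w / 2, ys[b] - h / 2,
                                xs[b] + w / 2, ys[b] + h / 2,
                                scores[b], cls[b].float()], dim=1))
    return out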

However, during experiments we found severe overfitting, even though the training mAP reaches over 90%. The main reasons are (1) we are provided with too little data: we downloaded similar images from online sources to supplement training and applied multiple data augmentation techniques, yet the improvements were only marginal; and (2) many ground-truth labels are ill-posed, because bounding boxes poorly represent defects of widely varying shapes. Predictions that look very good visually may still overlap the ground-truth box by less than the predefined IoU threshold. Despite the overfitting, our model still performs fairly on bounding-box detection and performs well on image-level defect detection.
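To illustrate point (2), here is a tiny IoU helper with hypothetical numbers: a prediction that covers an irregular defect plausibly can still fall below a typical 0.5 threshold against a tightly wrapped ground-truth box.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union

# A wide prediction over a tall ground-truth box: visually a hit, numerically a miss.
print(iou((0, 0, 10, 4), (0, 0, 6, 8)))  # 0.375, below a 0.5 IoU threshold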

In the future, one could use weakly supervised methods to perform segmentation instead of bounding-box detection, which would reduce the ambiguity in the labels.