Automated Runway Inspection with GreenSight and Machine Learning
On our recent trip to JIFX 19-3, we had the chance to capture high-resolution imagery of one of the runways on the base. We’ve been considering this application for our platform for some time, as it’s one we are often asked about. We leveraged the machine learning stage of our data processing pipeline to generate automated runway quality criteria, detecting cracks and other defects. This is primarily a proof of concept, but one we could readily turn into a production reporting capability.

The objective of this experiment was to fly the GreenSight drone platform over a runway in an automated mapping flight, stitch an orthomosaic image, and then process that image to detect cracks and classify their severity. A custom neural network architecture and a crack-detection training image dataset were used to identify cracks. The runway imaged in this experiment (McMillan Airfield at the Camp Roberts California Army National Guard Training Site) is used infrequently by larger aircraft and is in a moderate state of disrepair; as a result, grass and weeds grow in some of the cracks. Much to our surprise, although our dataset did not contain any images of grass growing from cracks, the network handled those situations quite well. It should be noted that the classification algorithm had never seen the input image shown below prior to prediction.
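As a rough illustration of how a patch-based detection stage over a stitched orthomosaic might look (the patch size, stride, and threshold here are assumptions, and a simple dark-pixel score stands in for the trained neural network, which we do not reproduce here):

```python
import numpy as np

PATCH = 64  # patch size in pixels; the real network's input size is an assumption


def iter_patches(ortho, patch=PATCH, stride=PATCH):
    """Slide a window across the orthomosaic, yielding (row, col, patch)."""
    h, w = ortho.shape[:2]
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            yield r, c, ortho[r:r + patch, c:c + patch]


def crack_score(patch):
    """Stand-in for the trained network: fraction of dark pixels.

    Cracks tend to appear as dark, thin features against lighter pavement;
    a real classifier would replace this heuristic entirely.
    """
    return float((patch < 80).mean())


def detect_cracks(ortho, threshold=0.05):
    """Return a coarse binary crack map with one cell per patch."""
    h, w = ortho.shape[:2]
    grid = np.zeros((h // PATCH, w // PATCH), dtype=bool)
    for r, c, p in iter_patches(ortho):
        grid[r // PATCH, c // PATCH] = crack_score(p) > threshold
    return grid
```

In practice each flagged patch would also carry a severity class from the network rather than a single boolean, but the tiling-and-scoring structure is the same.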


Cropped regions of McMillan Airfield are shown in the figures below. After careful examination, it is difficult to find any missed cracks aside from some small cracks in the paint of the white markers and regions obscured by grass. Grass-filled cracks could be added to the classifier by capturing additional imagery for the training dataset, which would help in assessing the suitability of an older, unmaintained runway for ad hoc use. The only false positives appear along the straight edges of painted regions, which in this image have shadowed or dark borders that trick the algorithm into a false positive prediction.

Use cases

Use cases for crack detection include airfield inspection and fault mapping of cracks, but the approach could easily be extended to bridges, roads, or any paved surface. When coupled with GreenSight’s unique change detection algorithms, changes from dataset to dataset can be tracked accurately to visualize crack growth over time. This approach has the potential to eliminate the need for manual runway inspections, as it is both faster and more quantitative than manual techniques, meaning less runway downtime for inspections and greater accuracy. Because the technique is digital, data can be easily stored for later reference or compared against future datasets to track trends.
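Tracking crack growth from one dataset to the next could be sketched as follows, assuming two co-registered binary crack masks from successive flights (the function name, the 5 mm GSD default, and the output quantities are illustrative choices, not the pipeline's actual interface):

```python
import numpy as np


def crack_growth(mask_t0, mask_t1, gsd_mm=5.0):
    """Compare two aligned binary crack masks from successive flights.

    gsd_mm is the ground sample distance per pixel (5 mm assumed, matching
    the 20 m flights described above). Returns newly cracked area in square
    metres and the percent change in total cracked pixels.
    """
    new = mask_t1 & ~mask_t0                 # pixels cracked now but not before
    px_area_m2 = (gsd_mm / 1000.0) ** 2      # ground area of one pixel
    new_area_m2 = new.sum() * px_area_m2
    base = mask_t0.sum() or 1                # avoid division by zero
    growth_pct = 100.0 * (mask_t1.sum() - mask_t0.sum()) / base
    return new_area_m2, growth_pct
```

The key precondition is registration: both masks must be aligned to the same ground coordinates before differencing, which is what the change detection algorithms mentioned above provide.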

Performance and Coverage

These flights were flown at approximately 20 m altitude. At this altitude, GreenSight’s Dreamer drone delivers around 5 mm GSD (ground sample distance) and can cover around 18 acres per flight in roughly 55 minutes. Higher altitudes allow faster coverage at the expense of resolution: at 40 m, the system achieves around 1 cm GSD and can cover roughly 90 acres. GreenSight’s stitching and machine learning pipeline takes around 1–2 hours to process a full-size dataset, and results can be delivered over the internet the same day the flight is conducted. GreenSight has faster, higher-resolution camera payloads in development that could increase coverage speed in the future.
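The altitude-to-GSD relationship above follows the standard linear scaling for a fixed camera, which can be sanity-checked with a couple of one-liners (the focal length and pixel pitch in the second function are hypothetical values chosen to reproduce the 5 mm figure, not Dreamer’s actual camera specs):

```python
def gsd_mm(altitude_m, ref_altitude_m=20.0, ref_gsd_mm=5.0):
    """GSD scales linearly with altitude for a fixed camera.

    Reference point (20 m -> 5 mm) taken from the flights described above.
    """
    return ref_gsd_mm * altitude_m / ref_altitude_m


def gsd_from_camera(altitude_m, focal_mm, pixel_pitch_um):
    """Classic GSD formula: ground footprint of one pixel, in mm.

    GSD = pixel_pitch * altitude / focal_length, with units reduced so the
    result comes out in millimetres.
    """
    return pixel_pitch_um * altitude_m / focal_mm
```

For example, doubling altitude from 20 m to 40 m doubles the GSD to 10 mm (1 cm), consistent with the numbers quoted above; a hypothetical 2.4 µm pixel pitch behind a 9.6 mm lens gives the 5 mm GSD at 20 m.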

Future Work

Additional training images featuring vegetation and runway markings, both with and without cracks, could improve detection performance, but current performance already appears adequate for runway inspection comparable to human assessment. A good next step toward deploying this capability for full-scale runway inspection would be developing a standard inspection report that can be generated automatically.

Get in touch if your application could use this type of intelligence. We’d love to work with you to build an automated analytics capability that fits your needs! Below are a number of close-up imagery samples from the experiment.