February 10-12, 2025  |  Colorado Convention Center   |  Denver, CO, USA

Session Details


Machine and Deep Learning Applications using Remotely Sensed Data

Feb 12 2024

2:00 PM - 3:30 PM MT

Mile High Ballroom 2B

Presentations in this session will showcase the use of artificial intelligence, including machine and deep learning, to automate the classification and extraction of features from remotely sensed imagery and lidar point cloud data.

How to analyze ever-growing aerial datasets with custom and scalable Machine Learning 

Image Analysts and Decision Makers are under increasing pressure to derive actionable and accurate information from an ever-increasing deluge of available data sources. As Electro Optical data capture costs decrease and availability increases, new data tools are needed to rapidly exploit this data at scale and decrease the cost of analysis. Artificial Intelligence will get you there, but typical Machine Learning (ML) data models require extensive training data, software expertise, and exquisite ground truth for accurate development. We believe anyone should be able to build their own customized ML data models and classify their own data for their specific purposes with minimal training and experience. Democratizing AI/ML solutions for imagery analysis builds a community of users able to share their expertise and rapidly advance other applications for aerial data. This improvement in the accuracy and timeliness of analyzed products will drive demand for more services and unlock other sensor technologies that expand the utility of Electro Optical imagery to solve complex challenges.

We take the audience through the complexities and advancements in data science that address these challenges. We cover principles and techniques for building clever ML data models that expand the user's perspective on what objects, features, and conditions can be modeled with the right data at the right resolutions. Finally, we cover how AI/ML is a tool, not a replacement for Image Analysts.
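To give a concrete sense of how accessible a customized classifier can be, the sketch below builds a nearest-centroid spectral classifier from a handful of labeled pixels. This is an illustrative stand-in chosen for this page, not the speaker's product; the 4-band spectra and class labels are invented.

```python
import numpy as np

def fit_centroids(pixels, labels):
    """Average the spectra of labeled pixels, one centroid per class."""
    classes = np.unique(labels)
    return classes, np.stack([pixels[labels == c].mean(axis=0) for c in classes])

def classify(pixels, classes, centroids):
    """Assign each pixel to the class with the nearest spectral centroid."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# Toy 4-band training spectra: "vegetation" is bright in band 3 (NIR-like),
# "water" is dark across all bands.
train = np.array([[0.10, 0.20, 0.60, 0.30],   # vegetation
                  [0.10, 0.20, 0.70, 0.30],   # vegetation
                  [0.05, 0.05, 0.02, 0.01],   # water
                  [0.06, 0.04, 0.03, 0.01]])  # water
y = np.array([0, 0, 1, 1])  # 0 = vegetation, 1 = water

classes, centroids = fit_centroids(train, y)
pred = classify(np.array([[0.1, 0.2, 0.65, 0.3]]), classes, centroids)
print(pred[0])  # → 0 (the NIR-bright pixel lands in the vegetation class)
```

Even this trivial model captures the workflow the talk describes: label a few examples, fit, classify at scale; a production system would swap in a learned model and real ground truth.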

Tim Haynie, Spectrabotics, LLC


Strengths and Limitations of AI-Based Techniques for Large-Scale LiDAR Classification

While researchers and developers have explored deep learning (DL) techniques for tasks such as LiDAR data classification and object detection, implementing these methods at large scale presents formidable challenges, primarily due to the complex integration of artificial intelligence (AI) into production workflows and the need to obtain industry-standard data.

We will present our AI-based workflow, which achieved precise airborne LiDAR data classification for the cities of Vancouver and Merritt in British Columbia, Canada. The dataset was collected in 2022, covering a total area of approximately 280 km², and our approach employed DL techniques to classify buildings and vegetation. The model was initially trained on a high-quality dataset from the City of Nanaimo collected in 2016. Our methodology was to separate certain classes for model training and subsequently fine-tune the model using a small portion of the newly acquired dataset. This process resulted in more accurate classification and significantly faster processing.
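The fine-tuning idea above can be sketched in miniature: pretrain a classifier on one survey, then update it with further gradient steps on a small labeled slice of the new survey rather than retraining from scratch. Here logistic regression stands in for the deep model, and random features stand in for per-point LiDAR descriptors; all data are synthetic.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, lr=0.5, steps=200):
    """Gradient-descent logistic regression; pass w to fine-tune a prior model."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        w = w - lr * Xb.T @ (sigmoid(Xb @ w) - y) / len(y)
    return w

def accuracy(X, y, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean((sigmoid(Xb @ w) > 0.5) == y)

rng = np.random.default_rng(0)
# Source survey (analogous to the high-quality 2016 dataset): separable on feature 0.
Xs = rng.normal(size=(200, 3)); ys = (Xs[:, 0] > 0).astype(float)
w = train(Xs, ys)                        # pretrain on the source survey
# New survey with a shifted sensor response; only 20 labeled points available.
Xt = rng.normal(size=(20, 3)) + [0.5, 0.0, 0.0]; yt = (Xt[:, 0] > 0.5).astype(float)
w = train(Xt, yt, w=w, steps=100)        # fine-tune on the small new sample
acc = accuracy(Xt, yt, w)                # high accuracy with little new labeling
```

The same pattern scales up: the pretrained weights already encode the class structure, so the new survey only has to supply enough labels to correct for the domain shift.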

The successful application of our AI-based workflow in LiDAR data classification for a complex and extensive area demonstrates the effectiveness of DL techniques in handling large-scale projects. While DL methods prove effective in various tasks, it’s important to note that they may not be the optimal choice for every scenario, particularly when it comes to ground classification.

Dr. Azadeh Koohzare, McElhanney Ltd.


Comparative Analysis of Deep Learning-based Object Detection Models in Remote Sensing Images

Object detection plays a crucial role in image interpretation and understanding. The advent of deep learning-based methods has significantly advanced this field. However, the distinctive characteristics of remote sensing images, including large directional variations, scale differences, and complex and cluttered backgrounds, pose considerable challenges for accurate target detection. In this work, we compare the detection accuracy and processing speed of several state-of-the-art models by detecting palm trees in optical satellite imagery. This work aims to explore how these models, adopted in many remote sensing applications, perform when applied to detect objects in overhead satellite images. Several models are selected from the single-stage and two-stage object detection families of techniques. Additionally, we use the timing results of the sliding window object detector to establish a baseline to compare different approaches. Our experiments demonstrate that two-stage detectors perform better in remote sensing contexts when detecting small, crowded objects, outperforming their single-stage counterparts. Future work includes extending this analysis to additional models, such as the multi-stage object detection family.        
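The sliding-window baseline mentioned above can be illustrated directly: score every window position with a classifier and keep detections above a threshold. Here a trivial brightness test stands in for a CNN scorer, and the window size, stride, and threshold are invented for the example.

```python
import numpy as np

def sliding_window_detect(image, window=8, stride=4, thresh=0.5):
    """Return (row, col) of windows whose mean intensity exceeds thresh."""
    hits = []
    for r in range(0, image.shape[0] - window + 1, stride):
        for c in range(0, image.shape[1] - window + 1, stride):
            if image[r:r + window, c:c + window].mean() > thresh:
                hits.append((r, c))
    return hits

img = np.zeros((32, 32))
img[8:16, 16:24] = 1.0          # one bright 8x8 "palm crown"
hits = sliding_window_detect(img)
print(hits)  # → [(8, 16)]
```

The nested loops also make the timing argument visible: cost grows with image size divided by stride squared, which is why the exhaustive sliding window serves as the slow baseline against which single- and two-stage detectors are compared.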

Caixia Wang, University of Alaska Anchorage


Differential Deep Learning Approaches for a Fused Output

In this talk, we will look at how to apply deep learning techniques to different geospatial data types, and how to leverage the shared information in fused datasets to assist deep learning algorithms. We will review the basics of deep learning and how it interacts with different geospatial data types. Drawing from our previous work and from work done in collaboration with clients, we will look at examples of how to exploit the strength of deep learning on one type of data to supplement the weaknesses of another, and how to utilize a fused dataset to generate a more complete classification of your data. The focus will be on the utilization of both imagery (panoramic, planar, and aerial) and point cloud data, and on how developing robust deep learning classification models for both data types in a fused (geospatially referenced and aligned) dataset allows for a more comprehensive classification pipeline. We will examine the relative strengths and weaknesses of imagery and point cloud data, and use this analysis to identify areas where the two data types complement each other and where combined deep learning approaches allow them to support each other. The combination of data types and DL models allows for a more complete classification, as well as more advanced means of extracting additional information from both the imagery and the point cloud.
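The core fusion step can be sketched simply: once imagery and a point cloud are geospatially aligned, each 3D point can be projected into the raster and tagged with the pixel value beneath it, yielding one fused per-point feature vector for a single classifier to consume. The orthographic projection and 1 m grid here are simplifying assumptions.

```python
import numpy as np

def fuse(points, image, origin, res=1.0):
    """Append the image value under each (x, y, z) point to its coordinates."""
    cols = ((points[:, 0] - origin[0]) / res).astype(int)   # x → column
    rows = ((points[:, 1] - origin[1]) / res).astype(int)   # y → row
    feats = image[rows, cols]
    return np.column_stack([points, feats])

image = np.arange(16.0).reshape(4, 4)        # stand-in for an aligned aerial raster
pts = np.array([[0.5, 0.5, 2.0],             # falls in pixel (row 0, col 0)
                [3.5, 2.5, 5.0]])            # falls in pixel (row 2, col 3)
fused = fuse(pts, image, origin=(0.0, 0.0))
print(fused[:, 3])                           # the image values under each point
```

A real pipeline would project through full camera models and attach multi-band or learned features, but the principle is the same: alignment is what lets one data type fill the other's gaps.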

David Jarron, Solv3D Inc.

DORSL-FIN: A Self-Supervised Neural Network for Recovering Missing Bathymetry from ICESat-2

Bathymetric data, comprising elevations of submerged surfaces (e.g., seafloor or lake bed), constitute a critical need for a wide range of science and application focus areas, such as safety of marine navigation, benthic habitat mapping, flood inundation modeling, and coastal engineering. Over the past decade, the availability of nearshore bathymetric data has increased dramatically due to advances in satellite-derived bathymetry (SDB). One notable advance occurred with the 2018 launch of NASA’s Ice, Cloud, and land Elevation Satellite 2 (ICESat-2), carrying the Advanced Topographic Laser Altimeter System (ATLAS). However, much like other Earth observing satellites, ATLAS is often hampered by obstructions, such as clouds, which block the sensor’s view of the Earth’s surface. In this study, we introduce the Deep Occlusion Recovery of Satellite Lidar From ICESat-2 Network (DORSL-FIN) to recover partially occluded bathymetric profiles. We show that DORSL-FIN is able to accurately recover occluded bathymetry and outperforms other methods of interpolation.
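The self-supervised training idea behind such occlusion recovery can be sketched as follows: take profiles that are fully observed, hide a random span, and learn to reconstruct it, so the hidden truth supplies the labels and no manual annotation is needed. A linear-interpolation "model" stands in here for the network, and the smooth toy profile is invented.

```python
import numpy as np

rng = np.random.default_rng(42)

def mask_span(profile, width, rng):
    """Occlude a random contiguous span, returning the masked copy and indices."""
    start = rng.integers(0, len(profile) - width)
    idx = np.arange(start, start + width)
    masked = profile.copy()
    masked[idx] = np.nan
    return masked, idx

def fill_linear(masked):
    """Baseline reconstruction: linearly interpolate across the NaN gap."""
    x = np.arange(len(masked))
    ok = ~np.isnan(masked)
    return np.interp(x, x[ok], masked[ok])

depth = -10 + 2 * np.sin(np.linspace(0, 3, 100))   # smooth toy bathymetric profile
masked, idx = mask_span(depth, width=15, rng=rng)
recovered = fill_linear(masked)
err = np.max(np.abs(recovered[idx] - depth[idx]))   # error only on the hidden span
```

A learned model is trained to beat exactly this kind of interpolation baseline on the masked spans, which is the comparison the abstract reports.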

Forrest Corcoran, Oregon State University

Deep Learning for Classification of Large-Scale 3D Point Clouds

LiDAR and image-based remote sensing technologies are widely used to capture entire landscapes, cities and infrastructure networks. Classification, interpretation, and information extraction are essential processing steps for preparing the captured data for a variety of geo-spatial applications in areas such as urban planning, environmental monitoring, and disaster management.

In this talk, we show the potential of artificial intelligence for geospatial applications and demonstrate how deep learning can be used to efficiently and reliably classify large 3D point clouds. We present deep learning techniques that can be applied to highly detailed 3D point clouds, allowing arbitrary objects and structures to be detected within them. We demonstrate the practicability of the proposed techniques through several case studies using mobile mapping data from road networks and terrestrial scans of structures and facilities. Using a modular, configurable processing chain and a small training dataset, a variety of objects (e.g., street furniture, trees) and components (e.g., construction parts) can be detected in the raw data. The talk will give insights into the technologies used along this processing chain.
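The modular processing chain described above can be sketched as a composition of small, swappable stages; the stage names and toy classifier below are invented for illustration.

```python
def voxel_thin(points, cell=1.0):
    """Keep one point per grid cell (a crude density filter stage)."""
    seen, out = set(), []
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        if key not in seen:
            seen.add(key)
            out.append((x, y, z))
    return out

def classify_by_height(points, ground=0.5):
    """Toy classifier stage: label points 'ground' or 'object' by elevation."""
    return [(p, "ground" if p[2] < ground else "object") for p in points]

def run_chain(points, stages):
    """Run each configured stage in order, feeding its output to the next."""
    for stage in stages:
        points = stage(points)
    return points

cloud = [(0.1, 0.1, 0.0), (0.2, 0.2, 0.1), (5.0, 5.0, 3.0)]
result = run_chain(cloud, [voxel_thin, classify_by_height])
print(result)  # one thinned ground point, one elevated object point
```

Because each stage only consumes and produces point lists, stages can be reordered, replaced with learned models, or distributed across machines, which is what makes the chain configurable and scalable.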

The results show that deep learning-based classification opens up new ways to automatically process and analyze large-scale 3D point clouds, as required by a growing number of applications and systems. The modular processing approach allows the workflow to scale across different hardware setups.

Rico Richter, University of Potsdam


