February 10-12, 2025  |  Colorado Convention Center   |  Denver, CO, USA

Session Details

Aevex Aerospace Lidar

Lidar Processing Advancements and Applications

Feb 12 2025

2:00 PM - 3:30 PM MT

Bluebird 2E

This session features presentations highlighting advancements in the collection, processing, accuracy, and real-world applications of lidar.

2:00 – 2:15 PM – Forestry using Next Generation Aerial Geiger-Mode Lidar
Next-generation Geiger-mode lidar systems developed by 3DEO enable the collection of high point densities over broad areas. Collecting with high angular diversity yields excellent foliage penetration, which has been particularly useful for forestry applications seeking to expand from plot extrapolation to full-forest analysis.
Brandon Call, 3DEO

2:15 – 2:30 PM – Lidar Point Cloud Colorization using Ray Tracing Techniques
Draping orthophotos over a lidar point cloud has long been the traditional method for LAS file colorization; however, unless true orthophotos are used, the inherent lean of above-ground features prevents proper co-registration. By combining photogrammetry with ray tracing techniques, it is possible to colorize a lidar point cloud more accurately: in essence, the point cloud is used to orthorectify the imagery. Each lidar return is colorized by first selecting the image with the look angle most similar to that of the lidar pulse, then using ray tracing techniques to assign spectral values from that image frame based on its exterior orientation parameters and camera calibration information. Since photography lacks the light-penetrating qualities of an active sensor like lidar, only lidar first returns can be expected to be accurately colorized. The result is a lidar point cloud whose red, green, blue, and near-infrared (if available) fields are populated in the LAS file with far more accurate spectral assignments. If the point cloud is then rasterized, the resulting image is comparable to a true orthophoto in that all features have a nadir look angle, which is critical for any subsequent segmentation analysis using the assigned spectral values.
Chris Miwa, NV5 Geospatial Inc.
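The selection-and-projection step the abstract describes can be sketched as follows. This is a minimal illustration assuming a simple pinhole camera model; the dictionary-based camera representation and function names are assumptions, not NV5's implementation, and the occlusion testing a full ray tracer performs is omitted.

```python
import numpy as np

def project_point(point, camera):
    """Project a 3D point into an image frame via the collinearity condition.

    `camera` holds illustrative exterior orientation (position `pos`,
    world-to-camera rotation `R`) and calibration (`focal` in pixels,
    principal point `pp`).
    """
    # Transform the world point into the camera frame.
    p_cam = camera["R"] @ (point - camera["pos"])
    if p_cam[2] <= 0:                       # behind the image plane
        return None
    # Perspective division gives pixel coordinates.
    u = camera["focal"] * p_cam[0] / p_cam[2] + camera["pp"][0]
    v = camera["focal"] * p_cam[1] / p_cam[2] + camera["pp"][1]
    return u, v

def colorize_return(point, pulse_dir, cameras, images):
    """Assign spectral values to one first return from the best-aligned frame."""
    # Pick the frame whose viewing direction is most parallel to the lidar pulse
    # (row 2 of a world-to-camera rotation is the viewing axis in world coords).
    best = max(
        range(len(cameras)),
        key=lambda i: abs(np.dot(pulse_dir, cameras[i]["R"][2])),
    )
    uv = project_point(point, cameras[best])
    if uv is None:
        return None
    h, w, _ = images[best].shape
    col, row = int(round(uv[0])), int(round(uv[1]))
    if 0 <= row < h and 0 <= col < w:
        return images[best][row, col]       # RGB (and NIR if the frame has it)
    return None
```

Rasterizing a cloud colorized this way yields the nadir-view product the abstract compares to a true orthophoto.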

2:30 – 2:45 PM – Assessing Lidar Accuracy Using 3D Targets – Recent Project Examples
Lidar accuracy assessment is typically done via classical methods inherited from photogrammetry. Vertical accuracy checking against the lidar surface at a known checkpoint (survey nail) is the most common approach in use today. Surface modelling of the lidar data is done using accepted Triangular Irregular Network (TIN) or Inverse Distance Weighted (IDW) methods. The orthogonal distance between the checkpoint and the lidar surface gives the vertical error. Using a collection of such checkpoints provides the statistical Root Mean Square Error (RMSE) in the vertical for the surface, assuming the checkpoints are well-distributed across the area. Horizontal positional accuracy is reported as the radial or planimetric (XY) accuracy achieved rather than as individual single-axis errors. Traditionally, lidar datasets have used identifiable targets in the point cloud for error measurement. These can be specific targets deployed during the survey flights, like photogrammetric panels, or targets of opportunity that have been surveyed, such as building corners, manhole covers, road markings, etc. The vertical (Z) position of such targets is derived from the lidar surface. The planimetric (XY) position is collected manually in post-processing, but this is labor-intensive and prone to interpretation error. Automating both the vertical and horizontal accuracy checking using detection algorithms to identify and locate the targets reduces the labor required, is less prone to user error, eliminates errors of interpretation in target location, and allows for a more rigorous calculation of offsets and corrections to be applied to the point cloud. An algorithmic approach to target detection relies on using monumented Ground Control Targets (GCTs) that can be “seen” within the point cloud.
Such targets can be 2-dimensional (XY), such as checkerboard or concentric targets on the ground, or they can be 3-dimensional (XYZ) objects, such as spheres or discs configured in a well-defined pattern and mounted above the ground. Color contrast, such as alternating black and white segments, or high-reflectivity paint is used to enhance detectability in the point cloud. Translations to correct the point cloud can be automatically computed from a single 3D target, but when three or more 3D targets are deployed the algorithm can solve for a full six-degree-of-freedom translation and rotation correction of the point cloud while quantifying both vertical (Z) and radial (XY) accuracy. The corrections can be saved for reference or automatically applied to the point cloud depending on the workflow. This presentation will cover recent examples from the field using various 3D targets for assessing lidar accuracy.
Martin Flood, GeoCue Group, Inc.
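The two computations the abstract describes can be sketched briefly: vertical RMSE from a set of checkpoints, and a rigid six-degree-of-freedom correction solved from three or more 3D target correspondences using the standard SVD-based (Kabsch) alignment method. Names and structure are illustrative assumptions, not GeoCue's implementation.

```python
import numpy as np

def vertical_rmse(checkpoint_z, surface_z):
    """RMSE of vertical errors between surveyed checkpoints and the lidar surface."""
    errors = np.asarray(surface_z) - np.asarray(checkpoint_z)
    return float(np.sqrt(np.mean(errors ** 2)))

def rigid_correction(measured, surveyed):
    """Least-squares rigid transform: find R, t minimizing ||R @ m_i + t - s_i||.

    `measured` are target positions seen in the point cloud, `surveyed` their
    monumented ground-truth positions; both are (N, 3) arrays with N >= 3.
    """
    measured, surveyed = np.asarray(measured), np.asarray(surveyed)
    cm, cs = measured.mean(axis=0), surveyed.mean(axis=0)
    # Cross-covariance of the centered correspondences.
    H = (measured - cm).T @ (surveyed - cs)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cs - R @ cm
    return R, t
```

With a single 3D target only the translation `t` is recoverable; three or more well-spread targets make the rotation solvable as well, which is why the abstract distinguishes the two cases.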

2:45 – 3:00 PM – Using Airborne Lidar Observations to Assess Relationships between Snowpack and Recent Wildfires for Water Utility Management in the Northern Front Range of Colorado
In recent years, wildfires in the western United States have increased in size, frequency, and severity. This trend is evident in Colorado, where a series of wildfires has raised concerns. The Cameron Peak and East Troublesome Fires, the two largest recorded wildfires in Colorado history, burned over 400,000 acres in 2020. Wildfires pose a significant threat to high-elevation ecosystems, especially in terms of disturbances to watershed resources. Colorado’s annual water supply is dependent upon the melt and runoff of high-elevation snowpack; thus, understanding wildfire effects on snow dynamics is crucial. This project explored the impacts of fires on Front Range watersheds as well as water supply forecasts. We investigated the use of remote sensing in forecasting, as water management companies have previously relied on point observations of snowpack. This project utilized airborne lidar data from Airborne Snow Observatories, Inc., Landsat 8 Operational Land Imager (OLI) derived products, as well as in-situ snow survey data from Colorado State University to assess and model changes in snowpack over time. We uncovered key drivers of change in snowpack (burn severity and snow zone) and created a random forest model to examine their relationships. Despite limitations in the temporal availability of data, analysis showed that remotely-sensed snow depth measurements were correlated with in-situ snow depth measurements, demonstrating the accuracy and feasibility of using remote sensing technologies to study landscape-scale snowpack characteristics for water utility management purposes.
Ryan Hondorp, NASA
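The core comparison the abstract reports can be sketched in pure NumPy: the Pearson correlation between lidar-derived and in-situ snow depths, plus mean depth grouped by burn-severity class. Variable names and the severity coding are illustrative assumptions, not the project's actual workflow.

```python
import numpy as np

def depth_correlation(lidar_depth, in_situ_depth):
    """Pearson correlation between lidar-derived and in-situ snow depths."""
    return float(np.corrcoef(lidar_depth, in_situ_depth)[0, 1])

def mean_depth_by_severity(depths, severity):
    """Mean snow depth for each burn-severity class (e.g. 0 = unburned ... 3 = high)."""
    depths, severity = np.asarray(depths), np.asarray(severity)
    return {int(c): float(depths[severity == c].mean()) for c in np.unique(severity)}
```

A strong correlation from the first function is what supports replacing sparse point observations with landscape-scale remote sensing; the second function is the simplest form of the severity/snow-zone stratification the random forest model formalizes.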

3:00 – 3:15 PM – Cloud-Enabled Lidar Damage Assessment in Support of Overseas U.S. Air Base Operations
As geopolitical and climate risks proliferate across the world, overseas U.S. Air Bases are increasingly threatened by both foreign hostilities and natural disasters. Stitch3D has been working with the U.S. Air Force 374th Civil Engineer Squadron (374 CES) at Yokota Air Base, Japan to develop cloud-enabled lidar technology that can help expedite on-base damage assessment, covering damage such as runway craters and building deterioration caused by events including earthquakes and floods. Working with Inertial Labs, Stitch3D provided aerial lidar imagery to 374 CES Airmen and Civil Engineers. Using Stitch3D, 374 CES stakeholders were able to automatically extract priority assets, such as powerlines and fuel tanks, from the lidar point cloud, enabling airfield managers to rapidly assess damage to key infrastructure. Processing algorithms include feature classification (ground, low/medium/high vegetation, buildings, poles and wire, water, and specific assets determined by 374 CES), Digital Terrain Models and contour line generation, and coordinate reprojection to satellite maps. Armed with a cloud-enabled lidar data management and processing platform, 374 CES was able to cut airfield damage assessment from several days to a few hours. The key lesson from this project with the U.S. Air Force is the importance of a streamlined digital geospatial workflow, from processing large aerial lidar datasets to managing and sharing 3D insights with interested stakeholders. 374 CES’ successful employment of aerial lidar to perform a critical mission-related task – Rapid Airfield Damage Assessment – is being studied by the Air Force Civil Engineer Center (AFCEC) for emulation across other overseas Air Bases.
Clark Yuan, Stitch3D
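One step in the processing pipeline the abstract lists can be sketched simply: gridding ground-classified returns into a Digital Terrain Model by keeping the lowest return per cell. The field names and 1 m cell size are illustrative assumptions, not Stitch3D's implementation; only the ASPRS LAS ground class code (2) is standard.

```python
import numpy as np

GROUND = 2  # ASPRS LAS standard class code for ground returns

def make_dtm(points, classes, cell=1.0):
    """Grid ground returns into a DTM raster (NaN where a cell has no ground hit).

    points:  (N, 3) array of x, y, z coordinates
    classes: (N,) array of LAS classification codes
    """
    ground = points[classes == GROUND]
    xmin, ymin = ground[:, 0].min(), ground[:, 1].min()
    cols = ((ground[:, 0] - xmin) // cell).astype(int)
    rows = ((ground[:, 1] - ymin) // cell).astype(int)
    dtm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, z in zip(rows, cols, ground[:, 2]):
        if np.isnan(dtm[r, c]) or z < dtm[r, c]:
            dtm[r, c] = z          # keep the lowest return per cell
    return dtm
```

Contour lines can then be traced from this raster, and comparing pre- and post-event DTMs is the basic mechanism behind rapid crater and deformation detection.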

3:15 – 3:30 PM – Bring AI-powered Lidar Data Processing In-House to Improve Efficiency
The volume of lidar data captured globally is increasing rapidly, and project deadlines are becoming shorter. To keep pace, data classification must be more accurate and efficient, and the requirements for data handling are becoming increasingly stringent. With advancements in artificial intelligence, adopting new technologies to meet project demands and stay ahead of the curve is more critical than ever. Leveraging AI can significantly enhance the efficiency and accuracy of lidar data processing. By integrating scalable AI processing capabilities that can be deployed on-premises or in the cloud, we offer flexible and scalable solutions that meet even the strictest requirements. This ensures peace of mind for lidar production managers and business owners, knowing their data processing is handled with the utmost precision and efficiency. In this presentation, we will explore practical business use cases where we have successfully deployed AI in-house for one of our clients, highlighting the benefits they have experienced. These real-world examples will demonstrate the tangible improvements AI can bring to lidar data processing. Join our talk to discover how you can incorporate AI-powered lidar data processing in-house to enhance efficiency, accuracy, and overall project outcomes.
Nejc Dougan, Flai

Featuring

GeoCue Group, Inc.

Stitch3D

© Diversified Communications. All rights reserved.