February 10-12, 2025  |  Colorado Convention Center   |  Denver, CO, USA

Session Details


Academic Hub – Photogrammetry and Lidar Data Processing

Feb 13 2024

11:00 AM - 12:30 PM MT

Academic Hub

Advanced photogrammetric techniques are used to process lidar and UAS imagery under a variety of conditions. This session discusses processing techniques for these data in mobile mapping, infrastructure, and natural-habitat applications.

Mapping Vulnerability of Kemp’s Ridley Sea Turtle Nesting Habitats on Padre Island National Seashore, Texas Using a Miniaturized Mobile Lidar System

The Kemp’s Ridley (Lepidochelys kempii) is the world’s most endangered sea turtle species. Padre Island National Seashore (PAIS), which houses the second-largest Kemp’s Ridley nesting beach, is vulnerable to increased storm intensity, storm surge, inundation, erosion, relative sea level rise (RSLR), and daily/seasonal impacts. It is crucial that nesting beaches are closely monitored and that conservation approaches are developed for habitat survival. The goal of this research is to map the vulnerability of Kemp’s Ridley nesting habitats by surveying two areas with confirmed nests on PAIS over a nesting season: a pedestrian-only beach and a driving-permitted beach. These sites will be analyzed to determine short-term changes over the season and then compared to historic airborne lidar system (ALS) data to determine long-term changes of the beach and foredune. Additionally, differences between the pedestrian-only and driving-permitted beaches will be assessed.

The mobile lidar system (MLS) used in this study comprises a Velodyne HDL-32E lidar and a NovAtel position and orientation system (POS). Monthly surveys took place from June to September 2022. Analyses will first assess changes in beach geomorphology and seasonal variability using time series. Second, ALS data fusion will assess changes in beach geomorphology over a longer time period. Lastly, a coastal vulnerability index (CVI) will be built to characterize and map alongshore vulnerability of the surveyed habitats.
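The abstract does not specify the CVI formulation; a common choice, used in USGS coastal assessments, is the square root of the product of the ranked variables divided by the number of variables. A minimal sketch under that assumption (the variable set and ranks below are illustrative, not from the study):

```python
import numpy as np

def cvi(ranks):
    """Coastal Vulnerability Index in the common USGS form:
    sqrt(product of variable ranks / number of variables),
    with each variable scored 1 (very low) to 5 (very high)."""
    ranks = np.asarray(ranks, dtype=float)
    return np.sqrt(ranks.prod() / ranks.size)

# Hypothetical ranks for one alongshore segment: beach width,
# dune height, shoreline change rate, RSLR, storm impact
segment = cvi([3, 2, 4, 5, 3])
```

Segments are then typically binned (e.g., by quartile) into low-to-very-high vulnerability classes for mapping.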

Isabel Garcia, Texas A&M University – Corpus Christi


Flexible Trajectory Estimation for Mobile Mapping with a Terrestrial Laser Scanner              

Kinematic laser scanning is a proven surveying technique for efficient acquisition of 3D data. Laser scanning is based on laser ranging (LiDAR) together with a scanning mechanism. The raw measurements, range and angles, need to be transformed into a coordinate system referenced to the scanner, and then into a well-defined earth-referenced coordinate system. This requires knowledge of the scanner’s trajectory with respect to that coordinate system; therefore, kinematic laser scanning systems typically comprise navigation systems. We present a novel integration method for fusion of measurement data from GNSS, IMU, and LiDAR, with the goal of obtaining accurate and precise point clouds and, if necessary, system calibration. In contrast to the standard Kalman filter, applied to GNSS and IMU data alone, this method is based on a joint batch adjustment of all measurement data. This adjustment approach is highly flexible, allowing for various applications, system configurations, and sensor classes. The kinematic data may be combined with static laser scanning data, either from the same platform in a stop-and-go manner or from a different acquisition. In both cases the datasets are rigorously co-registered, optimizing the trajectory of the scanner as well as the static scan positions and, if necessary, the scanner calibration. Here, we present a kinematic mapping workflow using a terrestrial laser scanner, which may be mounted on a variety of moving platforms: car, bike, backpack, boat, etc.
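The batch-versus-filter distinction can be illustrated with a toy sketch (not the authors' implementation): a 1D trajectory is estimated by stacking sparse absolute GNSS fixes and dense IMU-derived relative displacements into one weighted least-squares system and solving jointly, rather than processing observations sequentially. All weights and values below are illustrative.

```python
import numpy as np

def batch_adjust(n_epochs, gnss, imu_deltas, w_gnss=1.0, w_imu=10.0):
    """Joint batch least-squares adjustment of a 1D trajectory.
    gnss: list of (epoch, position) absolute fixes.
    imu_deltas: (n_epochs-1,) relative displacements between epochs.
    All observations enter one weighted linear system at once."""
    rows, rhs, weights = [], [], []
    for k, z in gnss:                       # absolute position observations
        r = np.zeros(n_epochs); r[k] = 1.0
        rows.append(r); rhs.append(z); weights.append(w_gnss)
    for k, d in enumerate(imu_deltas):      # relative motion observations
        r = np.zeros(n_epochs); r[k] = -1.0; r[k + 1] = 1.0
        rows.append(r); rhs.append(d); weights.append(w_imu)
    w = np.sqrt(np.array(weights))          # weight each equation row
    A = np.array(rows) * w[:, None]
    b = np.array(rhs) * w
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Two GNSS fixes bracketing five epochs, consistent IMU steps of 1.0
traj = batch_adjust(5, gnss=[(0, 0.0), (4, 4.0)],
                    imu_deltas=[1.0, 1.0, 1.0, 1.0])
```

The real method additionally adjusts orientation, LiDAR observations, and calibration parameters in the same system; the sketch only conveys the joint-adjustment structure.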

Florian Pöppl, Technische Universität Wien


Refraction Correction for Spectrally-Derived Bathymetry from UAS Imagery in Coral Reef Restoration Sites

As part of a five-year mapping campaign to assess changes in coral reef restoration sites, a NOAA, Oregon State University (OSU), University of New Hampshire (UNH), and Mote Marine Laboratory research team is evaluating uncrewed aircraft systems (UAS) for seafloor mapping in shallow, clear-water areas. One option for bathymetric mapping from UAS is structure from motion (SfM) photogrammetry. However, this technique requires sufficient seafloor texture for image matching. Another option is spectrally derived bathymetry (SDB), using algorithms adapted from those for mapping bathymetry from multispectral satellite imagery, based on spectral attenuation with path length through the water column. Because UAS have much smaller fields of view (FOVs) than satellite sensors, achieving high-accuracy UAS SDB requires applying refraction correction to account for the variable slant range through the water column off nadir. In this study, we investigate a UAS SDB refraction correction implemented as a simple extension of the well-known, widely used Stumpf algorithm. The procedure was tested on single orthophotos from coral restoration sites in the Florida Keys, and the resulting bathymetric estimates exhibited a statistically significant improvement in accuracy. This process is now being tested on a sitewide orthomosaic. The work is anticipated to create an additional tool for assessing bathymetry in shallow water environments.
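For readers unfamiliar with the Stumpf algorithm referenced above: it estimates depth from the log ratio of two band reflectances, with two constants calibrated against in-situ depths. The sketch below shows only the standard uncorrected form (the refraction correction investigated in this study is not reproduced here), with illustrative reflectance values and calibration constants:

```python
import numpy as np

def stumpf_depth(r_blue, r_green, m1, m0, n=1000.0):
    """Stumpf band-ratio depth estimate:
    z = m1 * ln(n * R_blue) / ln(n * R_green) + m0.
    m1 (scale) and m0 (offset) are calibrated against reference
    depths; the constant n keeps both logarithms positive over
    typical water-leaving reflectance values."""
    ratio = np.log(n * r_blue) / np.log(n * r_green)
    return m1 * ratio + m0

# Hypothetical reflectances and calibration constants
r_blue = np.array([0.012, 0.010, 0.008])
r_green = np.array([0.015, 0.011, 0.008])
depths = stumpf_depth(r_blue, r_green, m1=20.0, m0=-18.0)
```

In the satellite case the near-vertical view makes path length roughly equal to depth; the abstract's point is that a UAS camera's wide off-nadir angles break that assumption, motivating the per-pixel refraction correction.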

Selina Lambert, Oregon State University


A Hybrid 2D-3D Zero-shot Framework for Photogrammetric Data Segmentation

Applying 3D semantic segmentation in real-world applications (e.g., scan-to-BIM, digital twins, etc.) has always been challenging due to the lack of well-annotated training data. To this end, we proposed a hybrid 2D-3D zero-shot photogrammetric data segmentation framework that can achieve satisfactory performance while segmenting any task-specific object classes. In our proposed framework, Grounding DINO and the Segment Anything Model (SAM) were deployed to implement zero-shot object detection and semantic segmentation in the 2D domain. Note that, since Grounding DINO and SAM were not trained on many images collected by UAVs, directly segmenting the aerial images may reduce the models’ performance. Thus, we used the reconstructed 3D meshes and rendered 2D images slightly above the ground elevation as the input to our framework. To further improve the precision rate before the 2D-3D back-projection process, we used the Contrastive Language-Image Pre-training (CLIP) model as an additional resource to verify the segmentation results. A voting mechanism was finally utilized to project the segmentation results from the 2D images to the 3D data. By applying zero-shot techniques, we not only avoided the costly data annotation process but also added the capability of segmenting any objects of interest (e.g., blue trucks and square windows) for specific applications without reducing segmentation performance.
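The final voting step can be illustrated with a generic majority-vote sketch (not the authors' code): each 3D point collects the 2D label it lands on in every rendered view where it is visible, and the most frequent class wins. Per-view pixel coordinates of each point are assumed precomputed from the mesh rendering.

```python
import numpy as np

def backproject_labels(point_pixels, label_masks, num_classes):
    """Project 2D segmentation labels onto 3D points by majority vote.
    point_pixels: list (one per rendered view) of (N, 2) integer
    (col, row) pixel coordinates of each 3D point; negative values
    mark points not visible in that view.
    label_masks: list of (H, W) integer label images from the 2D
    segmenter. Returns an (N,) array with the winning class per point."""
    n_points = point_pixels[0].shape[0]
    votes = np.zeros((n_points, num_classes), dtype=int)
    for px, mask in zip(point_pixels, label_masks):
        visible = (px[:, 0] >= 0) & (px[:, 1] >= 0)
        rows, cols = px[visible, 1], px[visible, 0]
        labels = mask[rows, cols]             # 2D label under each point
        votes[np.flatnonzero(visible), labels] += 1
    return votes.argmax(axis=1)

# Toy example: 2 points seen in 2 tiny 2x2 views
views_px = [np.array([[1, 0], [0, 1]]), np.array([[0, 0], [-1, -1]])]
views_lbl = [np.array([[0, 1], [2, 0]]), np.array([[1, 1], [1, 1]])]
point_classes = backproject_labels(views_px, views_lbl, num_classes=3)
```

A real pipeline would also weight votes by view quality or CLIP confidence; the sketch shows only the unweighted case.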

Jiuyi Xu, University of Southern California


LSAP: Lidar Synchronous Acquisition and Processing
What is it? The automatic LiDAR solution LSAP (LiDAR Synchronous Acquisition and Processing), by LidarSwiss, is the next step toward fully automated LiDAR data acquisition and processing. LSAP provides deliverable data at landing – a superior, accurate, and effortless solution for anyone who wants 3D data. It frees people from tedious steps in generating RGB-attributed LiDAR data. With LSAP, anybody can produce professional-quality RGB LAS files at the end of every flight mission.

Who is using it? Surveyors, engineers, geologists, GIS professionals, powerline inspectors, highway designers, forestry analysts, land mapping professionals, urban planners, environmental scientists, etc.

How is it making an impact? LSAP produces accurate data delivery at the speed of acquisition. It creates a low technology entry barrier for newcomers, requires minimal manpower in the field, and simplifies the LiDAR data workflow. LSAP is most beneficial for large infrastructure construction projects due to its automation of data processing. LSAP will greatly increase the efficiency and economic value of projects in various industries. Simply put, LSAP benefits everybody in the LiDAR industry.

Robert Kletzli, LidarSwiss Solutions GmbH


Unmanned Fixed-wing High Density Airborne Lidar System Evaluation
This presentation steps through the evaluation of a newly available unmanned fixed-wing high-density airborne lidar system following industry-recognized best practices for aerial mapping. Using a wide-area mapping project example, we will discuss capabilities and limitations, achievable vertical accuracy, and the proper processing procedures required to produce final products. The process will begin with a review of the flight plans created to ensure successful acquisition of the entire area of interest. We will discuss the considerations made in selecting aerial targeting styles and the methodology used to establish control. Next, we will clearly outline the steps taken during processing pertaining to QA/QC, as well as calibration of the lidar swath-to-swath and ultimately to control to remove vertical bias. Last, we will discuss the results of a comprehensive vertical accuracy assessment using Non-Vegetated Vertical Accuracy (NVA) and Vegetated Vertical Accuracy (VVA) checkpoints following the ASPRS Positional Accuracy Standards (2014).
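The NVA and VVA statistics referenced above have standard definitions in the ASPRS Positional Accuracy Standards (2014): NVA is reported at the 95% confidence level as 1.96 × RMSEz (errors in open terrain are assumed approximately normal), while VVA is the 95th percentile of absolute vertical errors, since errors under canopy are not assumed normal. A small sketch with hypothetical checkpoint errors:

```python
import numpy as np

def nva(errors):
    """Non-Vegetated Vertical Accuracy (ASPRS 2014):
    1.96 * RMSEz of the non-vegetated checkpoint errors."""
    rmse_z = np.sqrt(np.mean(np.square(errors)))
    return 1.96 * rmse_z

def vva(errors):
    """Vegetated Vertical Accuracy (ASPRS 2014): the 95th
    percentile of the absolute vertical errors."""
    return np.percentile(np.abs(errors), 95)

# Hypothetical checkpoint errors (lidar z minus surveyed z), meters
nonveg_err = np.array([0.03, -0.05, 0.02, 0.04, -0.01])
veg_err = np.array([0.10, -0.12, 0.08, 0.15, -0.06])
```

Both statistics are computed against independently surveyed checkpoints after the swath-to-swath and control calibration described above.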

Ravi Soneja and Justin Ohlschlager, McKim & Creed



© Diversified Communications. All rights reserved.