
February 10–12, 2025  |  Colorado Convention Center  |  Denver, CO, USA

Session Details

Aevex Aerospace Lidar

Unique Sensor Applications

Feb 12, 2025

11:00 AM – 12:30 PM MT

Bluebird 3C

This session will feature academic presentations focused on unique classification approaches using a wide array of sensors.

11:00 – 11:15 AM – ASPRS Bathy Working Group – Validating Bathy Lidar
A summary of the Bathy Working Group's work on best practices for validating bathy lidar will be presented. The Joint Airborne Lidar Bathymetry Technical Center of Expertise and others from the working group will share how sensor-specific hardware, software, and algorithms perform across a range of environments as they relate to ranging accuracy in water, all framed by the accuracy challenges left to us by the godfather of bathy lidar, Gary Guenther. Please join our session, then bring questions and solutions to the Bathy Working Group's meeting that follows the presentation.
Nicholas Johnson, USACE Joint Airborne Lidar Bathymetry Technical Center of Expertise
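For readers unfamiliar with how bathy lidar validation is typically quantified, here is a minimal sketch that compares lidar depths against reference soundings and reports vertical error statistics. The simulated data, the 1.96-sigma convention, and the IHO S-44 Order 1a tolerance check are illustrative assumptions, not the working group's actual procedure.

```python
import numpy as np

def vertical_error_stats(lidar_z, ref_z):
    """Compare lidar depths to reference soundings at matched check points.

    lidar_z, ref_z: depths in meters (positive down) at the same locations.
    Returns bias, RMSE, and an ASPRS-style 95% accuracy (1.96 * RMSE,
    which assumes normally distributed, bias-free errors).
    """
    err = np.asarray(lidar_z) - np.asarray(ref_z)
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    return bias, rmse, 1.96 * rmse

def iho_order1a_tvu(depth):
    """IHO S-44 Order 1a total vertical uncertainty: sqrt(a^2 + (b*d)^2)."""
    a, b = 0.5, 0.013  # published S-44 Order 1a coefficients (m, unitless)
    return np.sqrt(a ** 2 + (b * depth) ** 2)

# Hypothetical matched samples: 1000 check points in 2-20 m of water.
rng = np.random.default_rng(0)
ref = rng.uniform(2.0, 20.0, 1000)
lidar = ref + rng.normal(0.02, 0.15, ref.size)  # simulated 2 cm bias, 15 cm noise

bias, rmse, acc95 = vertical_error_stats(lidar, ref)
print(f"bias={bias:.3f} m  rmse={rmse:.3f} m  95% accuracy={acc95:.3f} m")
print("within Order 1a TVU at 10 m depth:", acc95 <= iho_order1a_tvu(10.0))
```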

11:15 – 11:30 AM – Limitations of Bathymetric Lidar and Technology Developments to Overcome Them
Bathy lidar is one of the fastest growing segments of the airborne survey market. However, its collection efficiency, data density, and data accuracy remain far behind what is feasible for topographic lidar capture, and costs are significantly higher. A typical topographic lidar capture at 4 points per m2 can be collected at roughly ten times the flight efficiency of a bathymetric lidar capture at similar data density. This is due to several limitations that the water environment imposes on bathymetric lidar, for example:
– Signal losses grow exponentially with water depth. To achieve reasonable depth penetration, the energy per laser pulse needs to be higher, optical apertures need to be larger, and/or lower flying heights need to be used, greatly reducing collection efficiency. Further, eye-safety regulations strictly limit the allowed laser energy exposure.
– The speed and direction of the laser beam change at the water surface interface. The local water surface height must be measured accurately for refraction correction, and the effect of surface waves needs to be considered in the survey uncertainty.
– The laser beam is strongly affected by water-volume scattering, causing further uncertainty and limiting the detection of small objects.
The main technical and operational differences between topographic and bathymetric lidar will be presented, with the purpose of educating the market about both the benefits and the limitations of the technology. The presentation will also cover technologies developed to overcome these limitations, maximizing collection efficiency, data accuracy, object detection capability, and data density. Correctly used, bathymetric lidar is a fantastic tool for efficient collection of high-fidelity data in coastal zones and inland water environments. Let's use it with the right expectations.
Anders Ekelund, Hexagon – Geospatial Content Solutions Division
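As a rough illustration of the first two limitations above, the sketch below models the two-way exponential signal loss with depth and the Snell's-law bending of the beam at the water surface. The attenuation coefficient, detection floor, and beam angle are invented placeholders for illustration, not values from the presentation.

```python
import numpy as np

N_WATER = 1.33  # approximate refractive index of water at green lidar wavelengths

def received_fraction(depth_m, kd=0.3):
    """Two-way exponential loss through the water column: exp(-2 * Kd * z).

    kd is the diffuse attenuation coefficient (1/m); 0.3 is an assumed
    moderately clear coastal value, not a measured one.
    """
    return np.exp(-2.0 * kd * depth_m)

def max_detectable_depth(kd=0.3, min_fraction=1e-3):
    """Depth at which the return drops to an assumed detection floor."""
    return -np.log(min_fraction) / (2.0 * kd)

def refract(incidence_deg):
    """Snell's law at the air-water interface: in-water beam angle (degrees)."""
    theta_air = np.radians(incidence_deg)
    return np.degrees(np.arcsin(np.sin(theta_air) / N_WATER))

for z in (2, 5, 10, 20):
    print(f"depth {z:2d} m -> received fraction {received_fraction(z):.1e}")
print(f"assumed max detectable depth: {max_detectable_depth():.1f} m")
print(f"20 deg beam in air refracts to {refract(20.0):.1f} deg in water")
```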

11:30 – 11:45 AM – Satellite-Derived Bathymetry (SDB) Using Sentinel-2 MSI Based on a Physics-Based Algorithm
Although active sensors (sonar and lidar) measure bathymetry with high accuracy and precision, their coverage is usually limited due to high operational cost. Satellite imagery has a great advantage over active sensors if high precision and accuracy of satellite-derived bathymetry (SDB) can be achieved. This research proposes a fully physics-based bathymetry algorithm combining atmospheric and ocean optics. Top-of-atmosphere reflectance from a satellite image over a coastal zone must be corrected for the atmospheric contribution. Handling Rayleigh scattering from atmospheric molecules and aerosol attenuation is critical, especially for short-wavelength bands. The amount of correction is determined using estimated aerosol optical depth, ozone, water vapor, and surface pressure to extract the radiative-transfer look-up-table value, along with solar and sensor angles. A dark-pixel approach exploits strong water absorption in the near-infrared band; we handle potential deviations from the dark-pixel assumption using optical modeling of the water based on chlorophyll, dissolved organic matter, and suspended particulates. Ocean optics theory forms the foundation for SDB: the water-leaving signal consists of a water-volume backscattering component and a bottom-reflected component. The magnitude of the bottom-reflected component follows an exponential function of the optical depth, which is computed from the diffuse attenuation spectrum and the depth. We estimate the water attenuation spectrum from optically deep water. The bottom reflectance is modeled as a mixture of typical sand-like and grass-like bottom reflectance spectra. Thus, the two bottom coefficients and the depth are the three unknowns, which are solved via nonlinear optimization. This physics-based approach does not require known depths if proper bottom spectra are chosen based on an educated guess; with known depths, the accuracy and precision improve dramatically. Fast, low-cost, and vast areal bathymetry coverage makes SDB a powerful complement to active sensors. We demonstrate the accuracy and precision of SDB using various algorithms to optimize the estimated local bottom spectra. Landsat OLI and Sentinel-2 MSI images are used for demonstration.
Minsu Kim, KBR
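A minimal numerical sketch of the inversion the abstract describes: a two-component reflectance model (a deep-water term plus an exponentially attenuated bottom term, with the bottom as a sand/grass mixture) is fit for the two bottom coefficients and the depth with nonlinear least squares. The band set, spectra, and attenuation values are invented placeholders, not the authors' actual model or data.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 4-band visible/NIR setup; every spectrum below is invented.
kd = np.array([0.08, 0.10, 0.35, 0.60])           # diffuse attenuation (1/m)
rrs_deep = np.array([0.006, 0.005, 0.002, 0.001])  # optically deep-water Rrs (1/sr)
sand = np.array([0.30, 0.35, 0.40, 0.42])          # sand-like bottom reflectance
grass = np.array([0.05, 0.12, 0.08, 0.30])         # grass-like bottom reflectance

def model_rrs(params):
    """Rrs = deep-water term + bottom term attenuated by exp(-2 * Kd * z)."""
    c_sand, c_grass, z = params
    rho_b = c_sand * sand + c_grass * grass        # mixed bottom reflectance
    att = np.exp(-2.0 * kd * z)                    # two-way optical attenuation
    return rrs_deep * (1.0 - att) + (rho_b / np.pi) * att

def invert(rrs_obs, guess=(0.5, 0.5, 5.0)):
    """Solve for the two bottom coefficients and depth by nonlinear least squares."""
    res = least_squares(lambda p: model_rrs(p) - rrs_obs, guess,
                        bounds=([0, 0, 0.1], [1, 1, 30.0]))
    return res.x

# Forward-simulate a pixel at 6 m over a 70/30 sand/grass bottom, then invert.
truth = np.array([0.7, 0.3, 6.0])
estimate = invert(model_rrs(truth))
print("true  c_sand, c_grass, depth:", truth)
print("found c_sand, c_grass, depth:", np.round(estimate, 3))
```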

11:45 – 12:00 PM – Emerging Satellite Hyperspectral and Aerial Multispectral Nighttime Imaging Technologies and Applications
The emergence of nighttime hyperspectral satellite imagery has opened up advanced capabilities for a variety of applications. This presentation will focus on emerging applications of night imagery, especially nighttime imaging using the DLR Earth Sensing Imaging Spectrometer (DESIS) and the Hexagon MFC150 multispectral camera. DESIS is hosted on the International Space Station (ISS) and provides 30 m GSD, high-spectral-resolution (~3 nm) data from 400–1000 nm over 235 bands. The Hexagon MFC150 camera uses mechanical Forward Motion Compensation (FMC) to ensure high spatial resolution without linear smear during long exposures, even at high flying speeds. The MFC150 records multispectral data in Red (580–660 nm), Green (480–590 nm), Blue (420–510 nm), and/or NIR (720–850 nm). The Leica CityMapper-2 and CountryMapper are a new class of hybrid systems that incorporate the MFC150, demonstrating high-resolution imaging and lidar data integration in a single system and improving the accuracy and efficiency of large-area mapping projects. Nighttime imaging with both the MFC150 and DESIS has extensive applications. The MFC150 and Leica's hybrid systems enable rapid assessment of areas by detecting fine-scale changes in light patterns alongside co-collected, high-density lidar point clouds. These light surveys improve urban planning and public safety by identifying poorly lit areas. While not originally designed for nighttime imaging, DESIS' high spectral resolution allows precise identification and characterization of multiple light sources. When DESIS or other hyperspectral sensor data are integrated with high-spatial-resolution multispectral data, the combined dataset allows light emissions to be examined in greater detail. High-spatial-resolution aerial imaging and lidar provide contextual and geolocation information that is not achievable with current satellites, and this additional aerial information can be used to improve the geolocation accuracy of the satellite data. The fused data has many applications, including disaster response, safety light surveys, light pollution mitigation, and energy efficiency improvements. This presentation will provide an overview of these technologies and their transformational impact on nighttime remote sensing applications.
Robert Ryan, Innovative Imaging & Research
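To make the sensor pairing concrete, the sketch below simulates how a fine-resolution hyperspectral spectrum on a DESIS-like grid (235 bands, 400–1000 nm) would map into the MFC150's broad band ranges by averaging within each band. The flat (boxcar) spectral response assumed here is a simplification, not either instrument's published response function, and the test spectrum is a made-up sodium-lamp-like emission near 589 nm.

```python
import numpy as np

# DESIS-like grid: 235 bands spanning 400-1000 nm (~2.6 nm spacing).
wavelengths = np.linspace(400.0, 1000.0, 235)

# MFC150 band ranges from the abstract (nm).
BANDS = {"blue": (420, 510), "green": (480, 590),
         "red": (580, 660), "nir": (720, 850)}

def to_multispectral(spectrum):
    """Average a hyperspectral spectrum within each broad band range.

    Assumes a flat response per band -- a simplification; real sensors
    weight wavelengths by their measured spectral response curves.
    """
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (wavelengths >= lo) & (wavelengths <= hi)
        out[name] = spectrum[mask].mean()
    return out

# Hypothetical narrow emission source (sodium lamps emit near 589 nm).
spectrum = np.exp(-0.5 * ((wavelengths - 589.0) / 5.0) ** 2)
print({k: round(v, 4) for k, v in to_multispectral(spectrum).items()})
```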

12:00 – 12:15 PM – Improved Satellite Derived Bathymetry Leveraging High Revisit Rate SmallSat Constellations
Bathymetric datasets generated from multispectral satellite imagery, referred to as satellite-derived bathymetry (SDB), are gaining widespread use for hydrographic survey planning, coral reef habitat mapping, and a range of other science uses. The primary limiting factor in SDB is water clarity: when the water is too turbid for the seafloor to register a detectable contribution to the signal received at the imaging sensor, bathymetry retrieval is impossible. Importantly, water clarity is highly variable in time: it can change drastically on time scales ranging from minutes to tide cycles, to seasons, to years (or longer). Drivers of water clarity include wind speed, wave height, local hydrodynamics, substrate type, storms, and runoff, among other factors. For this reason, recent constellations of SmallSats, such as Planet Labs' SuperDove, that provide very short revisit cycles (e.g., daily imagery) may greatly facilitate successful bathymetry retrieval by providing many more opportunities to acquire imagery at a time of suitable water clarity. We tested how SDB created using SuperDove imagery compares to SDB created from Sentinel-2 imagery, which has a longer, five-day revisit cycle, and to independent bathymetric reference data. Preliminary results show that SuperDove's daily revisit cycle increases the probability of obtaining imagery with higher water clarity and yields significantly higher-accuracy SDB than satellites with a longer revisit cycle. These results suggest that SmallSat constellations may become increasingly useful for SDB.
Ruth McCullough, Oregon State University
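The revisit-rate argument can be made concrete with a toy probability model: if each acquisition independently has some chance of adequate water clarity, more frequent revisits sharply raise the odds of at least one usable scene in a fixed window. The per-scene clear probability below is an invented placeholder, and real water clarity is temporally correlated rather than independent, so this sketch overstates the benefit somewhat.

```python
def p_at_least_one_clear(p_clear, revisit_days, window_days):
    """Probability of >= 1 usable scene in a window, assuming independent tries."""
    n_scenes = window_days // revisit_days
    return 1.0 - (1.0 - p_clear) ** n_scenes

P_CLEAR = 0.15  # assumed chance any single scene has usable water clarity
for name, revisit in (("daily SmallSat", 1), ("5-day revisit", 5)):
    p = p_at_least_one_clear(P_CLEAR, revisit, window_days=30)
    print(f"{name:14s}: {p:.1%} chance of a clear scene in 30 days")
```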

12:15 – 12:30 PM – Making the Most of Lidar and Photogrammetry with UAS
Small Unmanned Aerial Systems (UAS) are increasingly being marketed and sold with combined lidar and photogrammetric capabilities. However, the fusion of these two methodologies presents several technical and operational challenges that can impact the quality, utility, and, most importantly, the credibility of the resulting spatial data. In this presentation, we explore some of the primary issues encountered during their integration, as well as novel methodologies to correct for them.

Sensor Calibration and Alignment: Effective data fusion depends on precise calibration and alignment of the lidar, camera, and PNT (positioning, navigation, and timing) systems, and integration with any ground control data that may be present. Discrepancies in the calibration of photogrammetric cameras and lidar sensors can lead to significant errors in the final composite 2D and 3D datasets, including 3D point clouds, textured 3D meshes, and 2D orthomosaics. The most visible evidence of these errors can be seen in geometric and radiometric offsets between intensity data and point cloud RGB data colorized via the camera. The magnitude of these errors can be large (easily visible to the human eye) or small, but they are universally present in every UAS system we have tested so far. Ensuring that the geometric and radiometric properties are consistently aligned is crucial, yet challenging, as it requires rigorous field and software calibration procedures. We present a novel workflow that corrects for these data misalignments and unifies the geometric and radiometric differences in the .las data, and illustrate its utility with several case studies.

Spatial and Temporal Resolution Disparities: Photogrammetric and lidar systems often differ in their spatial and temporal resolutions. For example, it is relatively common practice to fly over a project area with a lidar-only UAS, then subsequently fly the same site with a separate camera-only UAS. Integrating these disparate datasets requires careful handling to avoid resolution conflicts that can degrade the completeness and accuracy of the output 3D meshes. Depending on the reflectivity of certain surfaces, there may be missing areas in the lidar point cloud that create undesirable gaps or geometric inconsistency in the resulting 3D mesh generated from imagery. We explore the process and benefits of integrating lidar with photogrammetric techniques to create hybrid 3D meshes, and illustrate how to overcome spatial resolution discrepancies to create high-quality deliverables, with a particular focus on textured 3D meshes.

Geospatial practitioners integrating lidar with photogrammetry should come away with a better understanding of how to use hybrid methods to produce superior 3D geospatial data that is robust in difficult and diverse conditions, versatile in application, and high in detail and accuracy. We focus on practical use cases that should have great value to attendees of Geo Week.
Benjamin Vander Jagt, PixElement  
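As a simplified picture of the calibration problem described above, the sketch below transforms lidar points into a camera frame via an assumed boresight rotation and lever arm, then projects them through a pinhole model to find the pixels whose colors they would take. The angles, offsets, and camera intrinsics are placeholders; real workflows estimate these parameters in a rigorous adjustment rather than assuming them.

```python
import numpy as np

def rotation_from_boresight(roll, pitch, yaw):
    """Boresight misalignment angles (radians) as a rotation matrix (Rz Ry Rx)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def project_to_image(points_lidar, boresight_R, lever_arm, focal_px, cx, cy):
    """Map lidar-frame points into pixel coordinates via a pinhole camera.

    points_lidar: (N, 3) points in the lidar frame.
    boresight_R, lever_arm: assumed lidar-to-camera rotation and offset (m).
    """
    cam = (boresight_R @ points_lidar.T).T + lever_arm  # lidar -> camera frame
    z = cam[:, 2]
    u = focal_px * cam[:, 0] / z + cx                   # pinhole projection
    v = focal_px * cam[:, 1] / z + cy
    return np.column_stack([u, v]), z

# Placeholder calibration: 0.2 deg boresight errors, 5 cm lever arm.
R = rotation_from_boresight(*np.radians([0.2, 0.2, 0.2]))
points = np.array([[1.0, 2.0, 10.0], [-3.0, 0.5, 12.0]])
uv, depth = project_to_image(points, R, np.array([0.05, 0.0, 0.02]),
                             focal_px=4000.0, cx=2736.0, cy=1824.0)
print(np.round(uv, 1), np.round(depth, 2))
```

Even the small 0.2-degree misalignment assumed here shifts the projected pixel by over a dozen pixels at these ranges, which is the kind of visible intensity/RGB offset the abstract describes.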

Featuring

Hexagon – Geospatial Content Solutions Division

USACE Joint Airborne Lidar Bathymetry Technical Center of Expertise

KBR

Oregon State University

Innovative Imaging & Research
