February 10-12, 2025  |  Colorado Convention Center   |  Denver, CO, USA

Session Details


Remote Sensing Image Processing and Classification Techniques

Feb 13 2024

11:00 AM - 12:30 PM MT

Mile High Ballroom 2A

Experts in the field of image analysis and classification will present applications of single and fused data sets for mapping and monitoring of trees, crops, and flooding impacts.

Fusion of Optical, Radar and Waveform LiDAR Observations for Land Cover Classification: A Review and a Case Study

Land cover is an integral component for characterizing anthropogenic activity and promoting sustainable land use. Mapping the distribution and coverage of land cover at broad spatiotemporal scales relies largely on classification of remotely sensed data. Although multi-source data fusion has recently played an increasingly active role in land cover classification, our intensive review of current studies shows that the integration of optical, SAR, and LiDAR observations has not been thoroughly evaluated. In this research, we bridged this gap by i) summarizing related fusion studies and assessing their reported accuracy improvements, and ii) conducting our own case study in which, for the first time, the fusion of optical, radar, and waveform LiDAR observations and the associated improvements in classification accuracy are assessed using data collected by spaceborne platforms or, in the LiDAR case, an appropriately simulated platform. Multitemporal Landsat-5/TM and ALOS-1/PALSAR imagery acquired in the Central New York region close in time to the collection of airborne waveform LVIS (Land, Vegetation, and Ice Sensor) data were examined. Results indicate that the combined spectral, scattering, and vertical structural information provided the greatest discriminative capability among land cover types, yielding the highest overall accuracy. Greater improvement was achieved when combining multitemporal Landsat images with LVIS-derived canopy height metrics than with PALSAR features.

Huiran Jin, New Jersey Institute of Technology


An Analytical Comparison of UAS Lidar vs. UAS Imagery for Characterizing New England Forests

Lidar data were collected using an unpiloted aerial system (DJI Matrice 300) with a DJI P3 lidar sensor over selected forest stands in New Hampshire. Natural color imagery of the same forest stands was also collected using a UAS (AgEagle eBee X) with a Sensor Optimized for Data Acquisition (SODA) camera. A recent field inventory characterizing these forest stands was available as part of a Continuous Forest Inventory (CFI) project at the University of New Hampshire. The goal of our research project was to determine the following forest characteristics from both the lidar data and the SODA imagery: tree height, tree diameter, tree basal area, number of trees per stand, average stand height, and average stand basal area. These remote sensing results were then compared to the field inventory to determine whether either was an effective substitute for these intensive and expensive field data collections. The results show that while tree height tends to be underestimated using either the lidar or the SODA imagery, neither was statistically significantly different from the ground inventory. Tree diameter and basal area tended to be overestimated, with the lidar producing significantly better results than the SODA imagery. In conclusion, this research showed the potential of using remotely sensed data (lidar and/or visible imagery) for collecting important information necessary to manage our complex forests in New England.
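The height comparison described above hinges on a paired significance test between field-measured and remotely sensed tree heights. A minimal sketch of such a test, using invented heights and a hand-computed paired t-statistic (the abstract does not specify the exact test used):

```python
import math

# Hypothetical paired samples (meters): field-measured vs lidar-derived
# heights for the same five trees. All values are invented for illustration.
field = [20.1, 18.5, 22.3, 19.8, 21.0]
lidar = [19.6, 18.8, 21.7, 20.2, 20.8]

diffs = [f - l for f, l in zip(field, lidar)]  # per-tree differences
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))

# The two-tailed critical value for df = 4 at alpha = 0.05 is about 2.776;
# a |t| below that means the remotely sensed heights are not significantly
# different from the field inventory.
print(f"mean difference = {mean_d:.2f} m, t = {t_stat:.2f}")
```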

Russell Congalton, University of New Hampshire


Tree Inventory Using Lidar, Imagery, and Automation

The purpose of the project was to locate, identify, and calculate the diameter at breast height (DBH) for every tree within a 525 square mile proposed development in Sarasota, FL. The tree locations and identifications were used to aid developers in proper land clearing practices within the AOI to maintain Southwestern Florida’s natural feel.

To achieve this task, Dewberry co-acquired lidar and RGB imagery. Lidar data were acquired at approximately 200 ppsm using a Riegl VQ-1560 II-S sensor mounted in a Cessna Caravan aircraft. The project was flown with overlapping, perpendicular lines to ensure sufficient penetration through the tree canopy; fourteen lines were acquired in roughly one hour of flight time. The co-acquired RGB imagery was collected using a 150-megapixel integrated camera and met a ground sample distance of 5.0 cm.

Using a semi-automated approach, the RGB imagery was fused with the lidar and used to locate and classify each tree. Additionally, to verify tree species, stereo imagery was compiled to provide 3D views. DBH was estimated using a power-function equation that relates tree height to DBH through a species-specific scaling parameter. The scaling parameter for each species was determined by calculating the average difference between the field-collected and predicted DBH values. The standard deviation and range of differences between predicted and actual DBH were also considered.
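The power-function estimation described above can be sketched as follows. The exponent, the calibration strategy (choosing the scaling parameter so the mean residual is zero), and all measurements are illustrative assumptions, not the project's actual values:

```python
# Hypothetical calibration of a species-specific scaling parameter k in a
# power-function allometry DBH = k * height**b, for one tree species.
b = 1.3  # assumed fixed allometric exponent (illustrative)

# Invented field-collected calibration pairs: (tree height m, DBH cm)
samples = [(18.0, 28.5), (21.5, 36.0), (15.2, 22.8), (24.0, 42.5)]

# Choose k so the mean difference between field and predicted DBH is zero:
# k = sum(DBH) / sum(height**b) makes the average residual vanish.
k = sum(dbh for _, dbh in samples) / sum(h**b for h, _ in samples)

def predict_dbh(height_m: float) -> float:
    """Predict DBH (cm) from a lidar-derived tree height via the power law."""
    return k * height_m**b

residuals = [dbh - predict_dbh(h) for h, dbh in samples]
mean_resid = sum(residuals) / len(residuals)
print(f"k = {k:.3f}, mean residual = {mean_resid:.2f} cm")
```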

Meagan Anderson, Dewberry


Remote Sensing Applications to Support Large-scale Riverine and Floodplain Assessment and Monitoring

Remote sensing and spatial analytics have substantial utility to support riverine and floodplain assessment and monitoring at extents not feasible with traditional field surveys.  This presentation will provide an overview of relevant technologies such as topobathymetric lidar, sonar, and multiple imagery types, as well as processes for integrating and analyzing these data.  Broad-scale, objective, and reproducible analytics allow for geographic and temporal comparison across entire river systems to aid in inundation modeling, restoration prioritization, efficacy monitoring, and more.  Quantification and mapping of geomorphic features, thermal refugia, floodplain connectivity, riparian vegetation, solar exposure, and water quality are some of the applications we will review.  While this presentation is focused mainly on river systems, many of the concepts and data products can be applied similarly to other benthic systems such as oceans or lakes.  The goal of this presentation is to provide managers and decision-makers with information on how to leverage the concept of digital twins in natural systems through remote sensing technologies, data fusion, and analytics.

Mischa Hey, NV5 Geospatial


Cover Crop Detection Using Object-based Classification: The Case of the 2021-2022 Winter in Delaware

Cover crops are one of the most important agricultural best management practices. Cover crop area has traditionally been estimated using survey methods, but remote sensing offers a more time- and cost-effective assessment. Although object-based image analysis (OBIA) is now a popular remote sensing technique, its application to cover crop detection has been limited. Therefore, object-based classification was applied to estimate the spatial distribution of winter cover crop use across the entire state of Delaware. In many remote sensing studies, OBIA has been conducted using fee-based commercial software, which imposes a financial burden on many organizations (e.g., non-profit organizations). To reduce this burden, we formalized the workflow with open-source remote sensing software and publicly available imagery (Sentinel-2 constellation images). In this study, cover crops were defined as any vegetation planted or surviving during winter on field crop areas. Therefore, the cover crop area estimated in this study was far more extensive than the traditionally surveyed area of cover crop, which had a narrower definition. Applying this methodology across Delaware, total cover crop area was estimated for the period between 12/26/21 and 04/30/22. The overall accuracy was higher than 85% and Khat statistics were above 75% in all cases.
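The two accuracy measures reported above, overall accuracy and the Khat (estimated Kappa) statistic, are both computed from an error matrix. A minimal sketch using an invented 2x2 matrix (cover crop vs. no cover crop); the numbers are illustrative, not the study's results:

```python
# Error matrix: rows = classified label, columns = reference label.
# Invented counts for illustration only.
matrix = [
    [120, 15],   # classified as cover crop
    [10, 105],   # classified as no cover crop
]

n = sum(sum(row) for row in matrix)
diag = sum(matrix[i][i] for i in range(len(matrix)))
overall_accuracy = diag / n

# Chance agreement: sum over classes of (row total * column total) / n^2,
# then Khat = (observed - chance) / (1 - chance).
row_totals = [sum(row) for row in matrix]
col_totals = [sum(matrix[i][j] for i in range(len(matrix)))
              for j in range(len(matrix))]
p_chance = sum(r * c for r, c in zip(row_totals, col_totals)) / n**2
khat = (overall_accuracy - p_chance) / (1 - p_chance)

print(f"overall accuracy = {overall_accuracy:.3f}, Khat = {khat:.3f}")
```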

Jae Sung Kim, Department of Civil, Environmental, and Geospatial Engineering, Michigan Technological University


An Ensemble Approach to Global Flood Severity Forecasting and Alerting in Near Real-Time

Flooding is a frequent disaster that impacts every country worldwide and contributes to significant societal and financial losses. While flooding is prevalent, disaster managers in developing countries still face challenges in undertaking preparedness, response, and recovery efforts, as well as in developing mitigation strategies. A major reason for this challenge is the difficulty of accessing information ahead of time and/or in near real-time about flood severity, locations, and extent, which is crucial for resource planning and management. To address this gap, in the NASA-funded Global Flood Forecasting and Alerting (GIFFT) project, an ensemble approach known as the Model of Models (MoM) was developed to forecast flood severity (i.e., probability of flood risk) daily, globally, at the sub-watershed level.

The MoM incorporates flood outputs derived from hydrologic and hydraulic models and Earth observation data sets (optical and Synthetic Aperture Radar imagery). The forecasted severity is classified based on the probability of risk, and alert messages pertaining to high-severity flood events and their impact areas are disseminated to users globally using the Pacific Disaster Center’s DisasterAWARE® platform. In this presentation, we will discuss the current state of the MoM, its accuracy, and the use of this model and its products by the United Nations and the World Food Programme (WFP) to support emergency response activities.
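A model-of-models ensemble of this general shape can be sketched as a weighted combination of per-source flood probabilities mapped to severity classes. The weights, thresholds, and inputs below are all assumptions for illustration, not the GIFFT project's actual configuration:

```python
# Toy "Model of Models" ensemble: combine flood-risk probabilities from
# several independent sources into one severity class per sub-watershed.
def ensemble_severity(probs: dict[str, float],
                      weights: dict[str, float]) -> str:
    """Weighted mean of per-source flood probabilities -> severity class."""
    total_w = sum(weights[src] for src in probs)
    p = sum(probs[src] * weights[src] for src in probs) / total_w
    if p >= 0.7:
        return "high"    # would trigger an alert in a DisasterAWARE-style system
    if p >= 0.4:
        return "medium"
    return "low"

# Hypothetical inputs: hydrologic model, hydraulic model, SAR-derived observation
probs = {"hydrologic": 0.82, "hydraulic": 0.75, "sar": 0.64}
weights = {"hydrologic": 1.0, "hydraulic": 1.0, "sar": 0.5}
print(ensemble_severity(probs, weights))
```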

Bandana Kar, U.S. Department of Energy




© Diversified Communications. All rights reserved.