Session Details

Applications of Photogrammetry

Feb 13, 2024

2:00 PM - 3:30 PM MT

Mile High Ballroom 2B

Presentations in this session will show how advanced photogrammetric techniques are applied to imagery acquired with a variety of optical sensors to produce high-resolution, high-accuracy mapping and information products.

 

Oriented Imagery – A schema to manage and disseminate sensor imagery

Peter Becker, Esri

 

Automated Rectification of Geo-Inaccuracy in Multi-Orbit Commercial Satellite Imagery for Updating DOQQ and Multi-Resolution/Sensor Data Fusion   

Geolocation accuracy can be measured by creating networks of ground-truth points and measuring the offsets between images. Maxar WorldView (WV) imagery has a stated geolocation accuracy of 3.5 m (CE90). Using imagery of Oakland, CA, we demonstrate that this accuracy degrades to >100 m due to terrain elevation and off-nadir imaging differences; on flat terrain such as Syracuse, NY, the offset is <12 m. We identify three sources of geo-inaccuracy: #1, the terrain effect; #2, the off-nadir imaging-difference effect; and #3, the residual effect. We first isolate #1 plus #3 via a novel orthorectification, leaving #2 as the difference between the total offset and (#1 + #3). We then demonstrate the ability to georectify satellite WV imagery against a high-resolution aerial DOQQ base, reducing ground-point location offsets to near zero, defined as (1) zero offset over >40% of the image, (2) offsets between 0 and 1.2 m over an additional 50%, and (3) leaving <10% of the area for change detection. When the input pair is a DOQQ and WV imagery, the output WV image has geolocation accuracy equivalent to that of the DOQQ. For verification, we used the aligned WV imagery in place of the DOQQ as the base, yielding an output virtually identical to the earlier one. Thus, the aligned WV imagery can be used to update DOQQs and to perform multi-resolution, multi-sensor data fusion.
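
As a note on the metric, CE90 is the radius of the circle containing 90% of the horizontal offsets between matched image and ground-truth points. A minimal sketch of how an offset network might be scored, with all offset values hypothetical:

```python
import numpy as np

def ce90(dx_m, dy_m):
    """Circular Error, 90th percentile: the radius containing 90% of the
    horizontal offsets between matched image and ground-truth points."""
    radial = np.hypot(dx_m, dy_m)       # per-point horizontal offset (m)
    return np.percentile(radial, 90)    # CE90 (m)

# Hypothetical east/north offsets (m) measured over a ground-truth network.
rng = np.random.default_rng(42)
dx = rng.normal(0.0, 1.6, size=500)
dy = rng.normal(0.0, 1.6, size=500)
print(f"CE90 = {ce90(dx, dy):.2f} m")
```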

Shin-yi Hsu, Susquehanna Resources and Environment, Inc.

 

Photogrammetry Aspects of the Emerging Modality Independent Raster Image (MIRI) Standard

We introduce the Modality Independent Raster Image (MIRI) as a next-generation, sensor-agnostic data and processing standard for electro-optical imagery. It provides a standard means to move forward and backward along the image processing chain and allows access to multiple intermediate products within a single dataset. We show how MIRI is analogous to the Sensor Independent Complex Data (SICD) and Sensor Independent Derived Data (SIDD) standards developed for SAR image products; however, MIRI builds more flexibility and complexity into its variety of offered processing levels and its geometric modeling of the sub-components that comprise an imaging system.
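
To illustrate the idea of multiple intermediate products within a single dataset, here is a toy container; the real MIRI Logical Data Model is UML-defined and far richer, and every name below is invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Dict

import numpy as np

@dataclass
class MultiLevelImage:
    """Toy container holding several processing levels of one collect.
    Illustrative only: the actual MIRI LDM is UML-defined and far richer."""
    levels: Dict[str, np.ndarray] = field(default_factory=dict)

    def add_level(self, name: str, pixels: np.ndarray) -> None:
        self.levels[name] = pixels

    def get_level(self, name: str) -> np.ndarray:
        # "Moving backward" along the chain is just requesting an earlier level.
        return self.levels[name]

ds = MultiLevelImage()
raw = np.random.default_rng(1).integers(0, 4096, (512, 512), dtype=np.uint16)
ds.add_level("raw", raw)
ds.add_level("radiometrically_corrected", (raw >> 4).astype(np.uint8))
print(sorted(ds.levels))  # ['radiometrically_corrected', 'raw']
```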

We proceed by summarizing each of the key components of the MIRI standard, which include a software reference implementation, an Application Programming Interface (API), a Unified Modeling Language (UML)-based Logical Data Model (LDM), and associated MIRI documentation.

Finally, we focus on the National Geospatial-Intelligence Agency's (NGA's) Generic Linear Array Scanner (GLAS) and Generic Frame-sequence Model (GFM) projection models, which MIRI leverages. As GLAS and GFM comply with the Community Sensor Model (CSM) API, we address how these models will be upgraded to work most efficiently with MIRI. We articulate typical geospatial exploitation scenarios for projection modeling, in which the same generic model, with its metadata populated differently, is used with products at different processing levels.
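
The central operation of such projection models is ground-to-image mapping. Below is a highly simplified sketch of what a linear-array (pushbroom) scanner's ground-to-image function computes for a nadir-looking sensor over flat ground; all parameter names and values are invented, and the actual GLAS parameterization is far more general (arbitrary orientation, terrain, time-dependent platform state):

```python
from dataclasses import dataclass

@dataclass
class ToyPushbroom:
    """Toy nadir-looking linear-array (pushbroom) scanner over flat ground.
    All parameters are invented; this is not the GLAS parameterization."""
    altitude_m: float     # flying height above ground
    speed_mps: float      # along-track platform speed
    line_rate_hz: float   # scan lines collected per second
    focal_mm: float       # focal length
    pitch_um: float       # detector pitch on the linear array
    samp0: float          # sample coordinate of the principal point

    def ground_to_image(self, x_m: float, y_m: float):
        # Along-track position fixes the acquisition time, hence the line.
        line = (x_m / self.speed_mps) * self.line_rate_hz
        # Cross-track position projects through the optics onto the array.
        focal_um = self.focal_mm * 1e3
        samp = (focal_um / self.pitch_um) * (y_m / self.altitude_m) + self.samp0
        return line, samp

cam = ToyPushbroom(altitude_m=500_000, speed_mps=7_000, line_rate_hz=14_000,
                   focal_mm=10_000, pitch_um=8.0, samp0=17_500)
print(cam.ground_to_image(x_m=1_000, y_m=250))  # approx. (2000.0, 18125.0)
```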

Hank Theiss, University of Arkansas – CAST, & KBR

 

Accelerating Digital Twins through airborne Reality Capture

Digital Twins drive Digital Transformation in organizations, which is often an organizational-development challenge in which technology plays a supporting role. On the journey from geospatial data toward a predictive environment that assists daily decision making, teams need to commit to sharing data with one another and to finding a common purpose. Reality capture, especially from airborne sensors over cities and sites, can help create a foundational photorealistic 3D data layer. Overlaying this layer with domain data (e.g., from utilities, building information layers, or any other geospatial source) makes expert content accessible to non-experts and decision makers across domains. Reality capture thereby becomes a key enabler of Digital Transformation, with location as the connecting property. The presentation will demonstrate the evolution of GIS toward Digital Twin technologies that enable organizational change, holistic understanding, and operational intelligence. The role of airborne reality capture in enabling automated foundational content will be illustrated with examples from public organizations and private businesses. The talk concludes with the role of the industry ecosystem, and of partnership within it, in enabling sustainable change.

Konrad Wenzel, Esri

 

Computer Assisted Dimensional Analysis through Perspective (CA-DAP): A Challenging Maritime Accident Reconstruction Case Study

Terrestrial photogrammetrists typically require a stereo pair of photographs to derive the dimensions of a scene. Computer-assisted dimensional analysis through perspective (CA-DAP) can instead be used with a single photograph of items with known geometries to create a validated 3D computer model. In this presentation, we illustrate the use of CA-DAP with photographs and GPS data to recreate a virtual maritime accident scene. The underwater portion of the scene, which was not shown in any photograph, could be virtually revisited to observe the interaction of an anchor chain and a propeller. In this particular case, we were tasked with answering the question: were the propeller blades pitched forward or in reverse when the propeller contacted the anchor chain?
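
CA-DAP itself is not detailed here, but the underlying perspective principle can be sketched: under a pinhole model, an object of known size fixes the range along its viewing ray, and that range then converts other pixel measurements at a similar depth into real dimensions. The sketch below assumes roughly fronto-parallel objects, and all numbers are hypothetical:

```python
def range_from_known_size(focal_px: float, size_m: float, size_px: float) -> float:
    """Pinhole relation: an object of known size w spanning p pixels lies at
    range R = f_px * w / p (object roughly fronto-parallel to the sensor)."""
    return focal_px * size_m / size_px

def size_from_range(focal_px: float, range_m: float, size_px: float) -> float:
    """Once range is known, any pixel span at that depth converts to meters."""
    return size_px * range_m / focal_px

focal_px = 3000.0              # hypothetical camera focal length, in pixels
link_m, link_px = 0.30, 45.0   # hypothetical chain link: known 0.30 m, 45 px
R = range_from_known_size(focal_px, link_m, link_px)
print(f"range to chain: {R:.1f} m")                                # 20.0 m
print(f"blade span: {size_from_range(focal_px, R, 180.0):.2f} m")  # 1.20 m
```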

Hugh Bartlett, Bartlett Engineering LLC & Julia Guidry Bates, Bartlett Engineering LLC

 

Modelling Declassified Keyhole Imagery as a Generic Linear Array Scanner

We introduce imagery from the Corona and Hexagon satellite platforms, which were part of the US Keyhole reconnaissance program. These film images were captured between 1960 and 1984, have since been declassified, and are now distributed by the USGS. With ground sample distances (GSD) as small as 0.3 meters, these historical images are useful for a variety of purposes but are not straightforward to use in standard image-exploitation workflows.

We describe our work, funded by NGA under the OpenKeyhole program, to make these images readily usable. The work comprises three parts: first, understanding and reverse-engineering the sensors in question, using image-level techniques to capture the relevant fiducial marks; second, a novel use of the Generic Linear Array Scanner (GLAS) model for these sensors; and third, using the estimated model parameters to produce more readily exploitable images.
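
As background on the fiducial-mark step: interior orientation of scanned film is classically recovered by a least-squares fit of a 2D affine transform from the measured pixel positions of the fiducials to their calibrated film coordinates. The sketch below shows that standard technique, not necessarily the exact OpenKeyhole implementation, and all coordinates are hypothetical:

```python
import numpy as np

def fit_affine(pixel_xy, film_xy):
    """Least-squares 2D affine transform mapping scanned pixel coordinates
    to calibrated film coordinates: film ~ [x, y, 1] @ A, A of shape (3, 2)."""
    P = np.asarray(pixel_xy, dtype=float)
    F = np.asarray(film_xy, dtype=float)
    X = np.hstack([P, np.ones((len(P), 1))])   # (n, 3) design matrix
    A, *_ = np.linalg.lstsq(X, F, rcond=None)
    return A

# Hypothetical fiducial marks: measured scan positions (px) and their
# calibrated film locations (mm) at the four frame corners.
pixel_xy = [(120, 95), (11_980, 110), (11_975, 11_920), (130, 11_900)]
film_xy  = [(-110.0, 110.0), (110.0, 110.0), (110.0, -110.0), (-110.0, -110.0)]
A = fit_affine(pixel_xy, film_xy)
print(np.array([6_000.0, 6_000.0, 1.0]) @ A)   # film coords (mm) of an image point
```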

Finally, we describe the accuracy users can expect when working with the declassified images and how the use of GLAS has improved that accuracy. We also briefly describe the tools we created and distribute, which allow any user of these images to replicate our process.

Seth Warn, University of Arkansas – CAST

Featuring

Bartlett Engineering LLC

Susquehanna Resources and Environment, Inc.

University of Arkansas – CAST, & KBR

University of Arkansas – CAST
