Atmospheric compensation of long-wave infrared (LWIR) hyperspectral imagery is investigated in this article using set representations learned by a neural network. The approach relies on synthetic at-sensor radiance data derived from collected radiosondes and a diverse database of measured emissivity spectra sampled at a range of surface temperatures. The network loss function relies on LWIR radiative transfer equations to update model parameters. Atmospheric predictions are made on a set of diverse pixels extracted from the scene, without knowledge of blackbody pixels or pixel temperatures. The network architecture uses permutation-invariant layers to predict a set representation, similar to work in point cloud classification. When applied to collected hyperspectral image data, this method shows performance comparable to Fast Line-of-Sight Atmospheric Analysis of Hypercubes-Infrared (FLAASH-IR) while using an automated pixel selection approach. Additionally, inference time is significantly reduced compared to FLAASH-IR, with predictions made on average in 0.24 s on a 128 pixel by 5000 pixel data cube using a mobile graphics card. This computational speed-up on a low-power platform yields an autonomous atmospheric compensation method suitable for real-time, onboard use, requiring only a diversity of materials in the scene.
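The abstract does not include code, so the following is only a minimal sketch of the kind of DeepSets/PointNet-style permutation-invariant set encoder it describes, written in PyTorch. The class name SetAtmosphereEncoder, the layer widths, mean pooling, and the split of the output into per-band transmittance, upwelling radiance, and downwelling radiance are illustrative assumptions, not the authors' implementation. The at_sensor_radiance helper encodes the standard LWIR radiative transfer forward model, L_obs(λ) ≈ τ(λ)[ε(λ) B(λ, T) + (1 − ε(λ)) L_d(λ)] + L_u(λ), which a physics-based loss of this type would typically compare against the observed radiance.

```python
import torch
import torch.nn as nn


class SetAtmosphereEncoder(nn.Module):
    """Hypothetical DeepSets-style encoder: per-pixel MLP features are
    mean-pooled into a permutation-invariant set representation, which a
    second MLP maps to per-band atmospheric terms (transmittance,
    upwelling radiance, downwelling radiance)."""

    def __init__(self, n_bands: int, hidden: int = 256):
        super().__init__()
        self.n_bands = n_bands
        # phi: applied independently to every pixel spectrum in the set
        self.phi = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # rho: maps the pooled set feature to 3 * n_bands atmospheric outputs
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * n_bands),
        )

    def forward(self, pixels: torch.Tensor):
        # pixels: (batch, n_pixels, n_bands) at-sensor radiance spectra
        feats = self.phi(pixels)       # per-pixel features
        pooled = feats.mean(dim=1)     # permutation-invariant pooling over the set
        out = self.rho(pooled)         # per-band atmospheric parameters
        tau, l_up, l_down = out.split(self.n_bands, dim=-1)
        # transmittance constrained to (0, 1); path radiances kept non-negative
        return torch.sigmoid(tau), torch.relu(l_up), torch.relu(l_down)


def at_sensor_radiance(tau, l_up, l_down, emissivity, planck_bt):
    """Standard LWIR forward model: surface-leaving radiance (emitted plus
    reflected downwelling) attenuated by the atmosphere, plus upwelling
    path radiance. planck_bt is the Planck blackbody radiance B(lambda, T)."""
    surface = emissivity * planck_bt + (1.0 - emissivity) * l_down
    return tau * surface + l_up


# Example: a set of 500 diverse pixels with 128 spectral bands
model = SetAtmosphereEncoder(n_bands=128)
radiance = torch.rand(1, 500, 128)
tau, l_up, l_down = model(radiance)
print(tau.shape, l_up.shape, l_down.shape)  # each (1, 128)
```

Because the pooling step is a symmetric function over the pixel axis, the predicted atmospheric terms do not depend on the order in which scene pixels are presented, which is what allows an arbitrary, automatically selected subset of diverse pixels to drive the prediction.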
Westing, N., Gross, K. C., Borghetti, B. J., Martin, J., & Meola, J. (2020). Learning Set Representations for LWIR In-Scene Atmospheric Compensation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13, 1438–1449. https://doi.org/10.1109/JSTARS.2020.2980750