Resolution enhancement optimizations for hyperspectral and multispectral synthetic image fusion

2012 ◽  
Author(s):  
Charles R. Bostater


Author(s):
M. A. Lebedev ◽  
D. G. Stepaniants ◽  
D. V. Komarov ◽  
O. V. Vygolov ◽  
Yu. V. Vizilter ◽  
...  

The paper addresses a promising visualization concept: combining sensor and synthetic images to enhance a pilot's situational awareness during aircraft landing. A real-time algorithm is proposed for fusing a sensor image, acquired by an onboard camera, with a synthetic 3D image of the external view generated by an onboard computer. The pixel correspondence between the sensor and synthetic images is obtained by exterior orientation of a "virtual" camera, using runway points as a geospatial reference. The runway points are detected by the Projective Hough Transform, the idea of which is to project the edge map onto a horizontal plane in object space (the runway plane) and then to calculate intensity projections of the edge pixels along different directions of the intensity gradient. Experiments on simulated images show that, on a base glide path, the algorithm provides image fusion with pixel accuracy even in the case of significant navigation errors.
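A minimal sketch of the ground-plane projection step behind the Projective Hough Transform described above, assuming a known image-to-runway-plane homography (the homography source, function names, and edge-detector thresholds are illustrative assumptions, not details from the paper):

import cv2
import numpy as np

def project_edges_to_runway_plane(gray_image, H_img_to_ground):
    """Project edge pixels onto the horizontal runway plane via a homography.

    H_img_to_ground: 3x3 homography mapping image pixels to ground-plane
    coordinates; in the paper it would follow from navigation data and the
    exterior orientation of the "virtual" camera (assumed known here).
    """
    edges = cv2.Canny(gray_image, 50, 150)            # edge map of the sensor image
    ys, xs = np.nonzero(edges)                        # edge pixel coordinates
    pts = np.float32(np.stack([xs, ys], axis=1)).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H_img_to_ground).reshape(-1, 2)

def direction_projections(ground_pts, directions):
    """Accumulate 1-D projections of the projected edge points along candidate
    directions -- the voting step; peaks indicate runway edge lines."""
    histograms = []
    for theta in directions:
        d = np.array([np.cos(theta), np.sin(theta)])
        proj = ground_pts @ d                         # signed coordinate along direction
        hist, _ = np.histogram(proj, bins=256)
        histograms.append(hist)
    return histograms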


Photonics ◽  
2021 ◽  
Vol 8 (10) ◽  
pp. 454
Author(s):  
Yuru Huang ◽  
Yikun Liu ◽  
Haishan Liu ◽  
Yuyang Shui ◽  
Guanwen Zhao ◽  
...  

Image fusion and reconstruction from multiple images taken by distributed or mobile cameras require accurate calibration to avoid image mismatching. This calibration becomes difficult in fog, when no clear nearby reference is available. In this work, the fusion of multi-view images taken in fog by two cameras fixed on a moving platform is realized. The positions and aiming directions of the cameras are determined by taking a close visible object as a reference: one camera with a large field of view (FOV) acquires images of a short-distance object that remains visible in fog, and this reference is then used to calibrate the camera system, determining the position and pointing direction at each viewpoint. The extrinsic parameter matrices obtained from these data are applied to fuse images of distant scenes, captured by the other camera, that lie beyond the visibility range. Experimental verification was carried out in a fog chamber, and the technique is shown to be valid for image reconstruction in fog without a prior in-plane reference. The synthetic image, accumulated and averaged over ten views, shows potential applicability for fog removal. The resulting enhancement in structural similarity is discussed and compared in detail with conventional single-view defogging techniques.
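A minimal sketch of the accumulate-and-average fusion step reported above, assuming the views have already been warped into a common reference frame using the recovered extrinsic parameter matrices (the warping itself is omitted and all names are illustrative):

import numpy as np

def fuse_multiview(aligned_views):
    """Average a stack of co-registered views to suppress fog-induced noise.

    aligned_views: list of HxW (or HxWx3) arrays already warped into one
    reference frame with the calibrated extrinsics. Averaging N independent
    noise realizations lowers the noise standard deviation by roughly
    sqrt(N), the intuition behind the ten-view accumulation above.
    """
    stack = np.stack([v.astype(np.float64) for v in aligned_views], axis=0)
    return stack.mean(axis=0)

# e.g. fused = fuse_multiview(ten_warped_views)  # ten views, as in the experiment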


2018 ◽  
Vol 26 (26) ◽  
pp. 34805 ◽  
Author(s):  
Jian Wang ◽  
Rong Su ◽  
Richard Leach ◽  
Wenlong Lu ◽  
Liping Zhou ◽  
...  

2020 ◽  
Vol 12 (16) ◽  
pp. 2595
Author(s):  
Fuqun Zhou ◽  
Detang Zhong ◽  
Rihana Peiman

Time-series of medium spatial resolution satellite imagery are a valuable resource for environmental assessment and monitoring at regional and local scales. The Sentinel-2 satellites of the European Space Agency (ESA) carry a multispectral instrument (MSI) with 13 spectral bands at spatial resolutions from 10 m to 60 m, offering a revisit time of five days at the equator, decreasing toward daily coverage approaching the poles. Since their launch, Sentinel-2 MSI image time-series have been used widely in environmental studies. However, the value of Sentinel-2 image time-series has not been fully realized, and their usage is impeded by cloud contamination, especially in cloudy regions. To increase cloud-free image availability and usage of the time-series, this study attempted to reconstruct a cloud-free Sentinel-2 image time-series using an extended spatiotemporal image fusion approach. First, a spatiotemporal image fusion model was applied to predict synthetic Sentinel-2 images for dates when no clear-sky image was available. Second, the cloud and cloud shadow pixels of the contaminated images were identified by analyzing the differences between the synthetic and observed image pairs. Third, the cloud and cloud shadow pixels were replaced by the corresponding pixels of the synthetic image. Lastly, the replacement pixels were radiometrically calibrated to the observed image via a normalization process. With these steps, a full-length cloud-free Sentinel-2 MSI image time-series can be reconstructed that maximizes the value of the observations: observed cloud-free pixels are kept, and the synthesized pixels are calibrated against them for better quality.
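A minimal sketch of the per-image gap-filling logic in steps two through four, assuming a synthetic prediction from the spatiotemporal fusion model is already in hand (the difference threshold and the linear fit are illustrative stand-ins for the paper's exact detection and normalization procedures):

import numpy as np

def detect_contaminated(observed, synthetic, threshold=0.1):
    """Flag cloud/shadow pixels where the observed-synthetic difference is
    large (illustrative; the paper analyzes differences of image pairs)."""
    return np.abs(observed - synthetic) > threshold

def fill_cloudy_pixels(observed, synthetic, cloud_mask):
    """Replace contaminated pixels with radiometrically normalized synthetic values."""
    clear = ~cloud_mask
    # Fit a linear radiometric normalization of the synthetic image to the
    # clear observed pixels, so replacements match the observation's scale.
    slope, intercept = np.polyfit(synthetic[clear], observed[clear], deg=1)
    calibrated = slope * synthetic + intercept
    result = observed.copy()
    result[cloud_mask] = calibrated[cloud_mask]   # every clear observed pixel is kept
    return result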


Author(s):  
P. Mahanti ◽  
M. S. Robinson ◽  
H. Sato ◽  
A. Awumah ◽  
M. Henriksen

Image fusion, a popular method for resolution enhancement in Earth-based remote sensing studies, involves integrating the geometric (sharpness) detail of a high-resolution panchromatic (Pan) image with the spectral information of a lower-resolution multispectral (MS) image. Image fusion is not as widespread for planetary images as for terrestrial studies, although its successful application can yield higher-resolution MS image data. This work presents a comprehensive comparison of six image fusion algorithms in the context of lunar images. The algorithms are compared by visual inspection of the high-resolution multispectral products, by derived products such as band-to-band ratios and composite images, and by performance metrics, with an emphasis on spectral content preservation. Enhanced MS images of the lunar surface can enable new science and maximize the science return of current and future missions.
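A minimal sketch of one classical fusion scheme of the kind compared here, the Brovey transform; it is a standard pan-sharpening example and is not claimed to be one of the six algorithms evaluated in the paper:

import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-transform pan-sharpening.

    ms:  HxWxB multispectral image, upsampled to the Pan pixel grid.
    pan: HxW panchromatic image, co-registered with ms.
    Each band is rescaled so the per-pixel band sum matches the Pan
    intensity, injecting Pan sharpness while approximately preserving the
    band-to-band ratios the abstract emphasizes.
    """
    ms = ms.astype(np.float64)
    intensity = ms.sum(axis=2) + eps                  # avoid division by zero
    return ms * (pan / intensity)[..., None]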

