Optical SAR Images Fusion: Comparative Analysis of Resulting Images Data

2018 ◽  
Vol 215 ◽  
pp. 01002
Author(s):  
Yuhendra ◽  
Minarni

Image fusion is a useful tool for integrating low spatial resolution multispectral (MS) images with a high spatial resolution panchromatic (PAN) image, thus producing a high resolution multispectral image for better understanding of the observed earth surface. The main aim of the research was to evaluate the effectiveness of different image fusion methods when filtering methods are added for speckle suppression in synthetic aperture radar (SAR) images. The quality of the filtered fused images was assessed by statistical parameters, namely mean, standard deviation, bias, universal image quality index (UIQI) and root mean squared error (RMSE). In order to test the robustness of the image quality, speckle noise (suppressed with a Gamma MAP filter) was intentionally added to the fused image. The comparison and testing results show that the Gram-Schmidt (GS) method gives better colour reproduction than high-pass filtering (HPF). On the other hand, GS and wavelet intensity-hue-saturation (W-IHS) preserve the colour of the original image well for Landsat TM data.
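The RMSE and UIQI metrics named in the abstract are standard and easy to reproduce. The sketch below is a minimal illustration, assuming the global (whole-image) form of UIQI rather than the sliding-window version used in most quality-assessment toolboxes:

```python
import numpy as np

def rmse(reference, fused):
    """Root mean squared error between a reference band and a fused band."""
    return float(np.sqrt(np.mean((reference.astype(float) - fused.astype(float)) ** 2)))

def uiqi(reference, fused):
    """Universal Image Quality Index (Wang & Bovik), computed globally.
    Returns a value in [-1, 1]; 1 means the two images are identical."""
    x = reference.astype(float).ravel()
    y = fused.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    # Product of correlation, luminance, and contrast distortion terms.
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

For identical images the RMSE is 0 and the UIQI is exactly 1, which is a quick sanity check before applying the metrics to real fused bands.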

2011 ◽  
Vol 255-260 ◽  
pp. 2072-2076
Author(s):  
Yi Yong Han ◽  
Jun Ju Zhang ◽  
Ben Kang Chang ◽  
Yi Hui Yuan ◽  
Hui Xu

Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we present a new approach using a structural similarity index for assessing quality in image fusion. The advantages of our measures are that they do not require a reference image and can be easily computed. Numerous simulations demonstrate that our measures conform to subjective evaluations and can assess different image fusion methods.
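A no-reference fusion score in this spirit can be built by measuring structural similarity between the fused image and each source image. The sketch below is an assumption-laden simplification: it uses a single global SSIM window instead of the usual sliding window, and averages the two source-to-fused similarities:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Global (single-window) SSIM between two images; the standard metric
    averages this over local sliding windows, so this is a coarse version."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def fusion_quality(src_a, src_b, fused):
    """No-reference fusion score: mean structural similarity between the
    fused image and each source image (1.0 = fused matches both sources)."""
    return 0.5 * (ssim_global(src_a, fused) + ssim_global(src_b, fused))
```

No ground-truth fused image is needed, which is the key practical advantage the abstract claims.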


2021 ◽  
Author(s):  
Anuyogam Venkataraman

With the increasing utilization of X-ray Computed Tomography (CT) in medical diagnosis, obtaining a higher quality image with lower exposure to radiation is a highly challenging task in image processing. Sparse representation based image fusion is one of the most sought-after fusion techniques among current researchers. A novel image fusion algorithm based on focused vector detection is proposed in this thesis. Firstly, the initial fused vector is acquired by combining the common and innovative sparse components of a multi-dosage ensemble using the Joint Sparse PCA fusion method, utilizing an overcomplete dictionary trained on high dose images of the same region of interest from different patients. Then, the strongly focused vector is obtained by determining the pixels of the low dose and medium dose vectors that have high similarity with the pixels of the initial fused vector, using certain quantitative metrics. The final fused image is obtained by denoising and simultaneously integrating the strongly focused vector, the initial fused vector and the source image vectors in the joint sparse domain, thereby preserving the edges and other critical information needed for diagnosis. This thesis demonstrates the effectiveness of the proposed algorithms on different images; the qualitative and quantitative results are compared with some widely used image fusion methods.
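The "common plus innovation" decomposition of a multi-dosage ensemble can be illustrated very roughly. The sketch below uses the pointwise ensemble mean as a crude stand-in for the joint-sparse common component (the thesis instead recovers it with Joint Sparse PCA over a trained dictionary), so it only shows the shape of the decomposition, not the method itself:

```python
import numpy as np

def joint_components(images):
    """Split an ensemble of co-registered vectorised images into a shared
    'common' component (here simply their pointwise mean, a stand-in for
    the joint-sparse common part) and per-image 'innovation' residuals."""
    stack = np.stack([im.ravel().astype(float) for im in images])
    common = stack.mean(axis=0)
    innovations = stack - common  # each row: what that dose adds beyond the common part
    return common, innovations
```

Summing the common component with any image's innovation row reconstructs that image exactly, mirroring the additive model the fusion step relies on.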


2020 ◽  
Vol 12 (6) ◽  
pp. 1009
Author(s):  
Xiaoxiao Feng ◽  
Luxiao He ◽  
Qimin Cheng ◽  
Xiaoyi Long ◽  
Yuxin Yuan

Hyperspectral (HS) images usually have high spectral resolution and low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) and low spectral resolution. HS–MS image fusion technology can combine both advantages, which is beneficial for accurate feature classification. Nevertheless, in real cases heterogeneous sensors always introduce temporal differences between the LSR-HS and HSR-MS images, which means that classical fusion methods cannot produce effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the difference between the two images, we first extract the endmembers and their corresponding positions from the invariant regions of the LSR-HS image. We then obtain the endmembers of the HSR-MS image based on the theory that the HSR-MS and LSR-HS images are the spectral and spatial degradations, respectively, of an HSR-HS image. The fused image is obtained from the two resulting matrices. A series of experimental results on simulated and real datasets substantiated the effectiveness of our method both quantitatively and visually.
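The core linear-mixing step behind unmixing-based fusion is that each pixel spectrum is a nonnegative combination of endmember spectra. The sketch below is a minimal per-pixel abundance solver, assuming SciPy is available; the paper's full pipeline (invariant-region selection, masking, matrix reconstruction) is not reproduced here:

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, spectrum):
    """Estimate nonnegative abundances a such that spectrum ~= endmembers @ a
    (linear mixing model), then renormalise so the abundances sum to one."""
    a, _ = nnls(endmembers, spectrum)  # nonnegative least squares
    s = a.sum()
    return a / s if s > 0 else a
```

Applying this to every HSR-MS pixel yields the high-resolution abundance matrix; multiplying it by the HS endmember matrix reconstructs a fused image with MS spatial detail and HS spectral detail.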


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5317 ◽  
Author(s):  
Moonyoung Kwon ◽  
Sangjun Han ◽  
Kiwoong Kim ◽  
Sung Chan Jun

Electroencephalography (EEG) has relatively poor spatial resolution and may yield incorrect brain dynamics and distorted topography; thus, high-density EEG systems are necessary for better analysis. Conventional methods have been proposed to solve these problems; however, they depend on parameters or brain models that are not simple to address. Therefore, new approaches are necessary to enhance EEG spatial resolution while maintaining its data properties. In this work, we investigated the super-resolution (SR) technique using deep convolutional neural networks (CNN) with simulated EEG data containing white Gaussian and real brain noises, and with experimental EEG data obtained during an auditory evoked potential task. SR EEG data simulated with white Gaussian noise or brain noise demonstrated a lower mean squared error and higher correlations with sensor information, and detected sources even more clearly than low resolution (LR) EEG did. In addition, experimental SR data also demonstrated far smaller errors for the N1 and P2 components, and yielded reasonable localized sources, while LR data did not. We verified our proposed approach’s feasibility and efficacy, and conclude that it may be possible to explore various brain dynamics even with a small number of sensors.
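A full CNN super-resolution model is beyond a short sketch, but the baseline that SR methods are evaluated against, and the mean-squared-error criterion the abstract reports, are simple. The following assumes a 1-D sensor coordinate purely for illustration (real montages are on the scalp surface):

```python
import numpy as np

def upsample_channels(lr_positions, lr_values, hr_positions):
    """Baseline spatial upsampling of EEG: estimate unmeasured channel values
    by linear interpolation over a sensor coordinate. A CNN-based SR model
    learns this low-to-high mapping instead of assuming linearity."""
    return np.interp(hr_positions, lr_positions, lr_values)

def mse(a, b):
    """Mean squared error, the accuracy measure reported in the study."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))
```

Comparing `mse` of the interpolated channels against ground-truth high-density recordings is how one would quantify the gain of a learned SR model over this baseline.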


2019 ◽  
Vol 11 (9) ◽  
pp. 1005
Author(s):  
Jiahui Qu ◽  
Yunsong Li ◽  
Qian Du ◽  
Wenqian Dong ◽  
Bobo Xi

Hyperspectral pansharpening is an effective technique to obtain a high spatial resolution hyperspectral (HS) image. In this paper, a new hyperspectral pansharpening algorithm based on homomorphic filtering and a weighted tensor matrix (HFWT) is proposed. In the proposed HFWT method, an open-closing morphological operation is utilized to remove the noise of the HS image, and homomorphic filtering is introduced to extract the spatial details of each band in the denoised HS image. More importantly, a weighted root mean squared error-based method is proposed to obtain the total spatial information of the HS image, and an optimized weighted tensor matrix based strategy is presented to integrate the spatial information of the HS image with that of the panchromatic (PAN) image. With appropriate injection of the integrated spatial details, the fused HS image is generated by constructing a suitable gain matrix. Experimental results over both simulated and real datasets demonstrate that the proposed HFWT method effectively generates a fused HS image with high spatial resolution while maintaining the spectral information of the original low spatial resolution HS image.
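The homomorphic filtering step can be sketched compactly: take the log of a band, remove the low-frequency (illumination-like) part, and exponentiate back, leaving the high-frequency spatial detail. The sketch below assumes SciPy's Gaussian filter as the low-pass stage and an arbitrary `sigma`; the paper's exact filter design may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def homomorphic_details(band, sigma=2.0, eps=1e-6):
    """Extract the spatial-detail (high-frequency) component of one HS band:
    log transform, subtract the low-pass part estimated by Gaussian blur,
    and map back with exp. A flat band yields details identically 1."""
    log_band = np.log(band + eps)          # multiplicative model -> additive
    low = gaussian_filter(log_band, sigma) # illumination / low-frequency part
    return np.exp(log_band - low)          # detail (reflectance-like) part
```

In a pansharpening pipeline, these per-band detail images are then combined with PAN details and injected back via a gain matrix.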


2019 ◽  
Vol 11 (15) ◽  
pp. 1767 ◽  
Author(s):  
Francesca Pasquetti ◽  
Monica Bini ◽  
Andrea Ciampalini

The aim of this paper is to evaluate the usefulness of TanDEM-X DEM (digital elevation model) for remote geomorphological analysis in Argentinian Patagonia. The use of a DEM with appropriate resolution and coverage might be very helpful and advantageous in vast and hardly accessible areas. TanDEM-X DEM could represent an unprecedented opportunity to identify geomorphological features because of its global coverage, ~12 m spatial resolution and low cost. In this regard, we assessed the vertical accuracy of TanDEM-X DEM through comparison with Differential Global Positioning System (DGPS) datasets collected in two areas of the Patagonia Region during a field survey; we then investigated different types of landforms by creating the elevation profiles. The comparison indicates a high agreement between TanDEM-X DEM and reference values, with a mean absolute vertical error (MAE) of 0.53 m, and a root mean squared error (RMSE) of 0.73 m. The results of landform analysis show an appropriate spatial resolution to detect different features such as beach ridges, which are impossible to delineate with other lower resolution DEMs. For these reasons, TanDEM-X DEM constitutes a useful tool for detailed geomorphological analyses in Argentinian Patagonia.
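The MAE and RMSE figures reported against the DGPS check points follow directly from the elevation residuals. A minimal computation, assuming co-located DEM and DGPS heights as plain arrays:

```python
import numpy as np

def vertical_accuracy(dem_heights, dgps_heights):
    """MAE and RMSE of DEM elevations against DGPS check-point elevations."""
    err = np.asarray(dem_heights, float) - np.asarray(dgps_heights, float)
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return mae, rmse
```

Running this over the survey points is how figures such as the paper's MAE of 0.53 m and RMSE of 0.73 m are obtained.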


Author(s):  
Chengfang Zhang

Multifocus image fusion can obtain an image with all objects in focus, which is beneficial for understanding the target scene. Multiscale transform (MST) and sparse representation (SR) have been widely used in multifocus image fusion. However, the contrast of the fused image is lost after multiscale reconstruction, and fine details tend to be smoothed for SR-based fusion. In this paper, we propose a fusion method based on MST and convolutional sparse representation (CSR) to address the inherent defects of both the MST- and SR-based fusion methods. MST is first performed on each source image to obtain the low-frequency components and detailed directional components. Then, CSR is applied in the low-pass fusion, while the high-pass bands are fused using the popular “max-absolute” rule as the activity level measurement. The fused image is finally obtained by performing inverse MST on the fused coefficients. The experimental results on multifocus images show that the proposed algorithm exhibits state-of-the-art performance in terms of definition.
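The "max-absolute" rule used for the high-pass bands is a one-line coefficient selection: at each position, keep whichever source's coefficient has the larger magnitude. A minimal sketch (the surrounding MST decomposition and CSR low-pass fusion are not shown):

```python
import numpy as np

def fuse_highpass_max_abs(band_a, band_b):
    """Max-absolute fusion rule for high-pass coefficients: keep, per
    coefficient, the source value with the larger absolute magnitude."""
    return np.where(np.abs(band_a) >= np.abs(band_b), band_a, band_b)
```

The rationale is that large-magnitude high-pass coefficients mark in-focus edges, so selecting them band-wise transfers the sharpest detail from each source into the fused result.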


2020 ◽  
Vol 2020 ◽  
pp. 1-16 ◽  
Author(s):  
Bing Huang ◽  
Feng Yang ◽  
Mengxiao Yin ◽  
Xiaoying Mo ◽  
Cheng Zhong

Medical image fusion is the process of combining multiple images from multiple imaging modalities to obtain a fused image with a large amount of information, thereby increasing the clinical applicability of medical images. In this paper, we attempt to give an overview of multimodal medical image fusion methods, putting emphasis on the most recent advances in the domain, covering (1) the current fusion methods, including those based on deep learning, (2) the imaging modalities used in medical image fusion, and (3) the performance analysis of medical image fusion on the main datasets. Finally, we conclude that current multimodal medical image fusion research has produced significant results and the field is developing rapidly, but many challenges remain.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, the fused image must contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of details because errors accumulate across their sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so that rich tiny details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that various features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. Moreover, the base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target region of the source image but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, our proposed method has many advantages, including (i) better visual quality of the fused image in subjective evaluation, and (ii) better objective assessment scores.
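The overall base-plus-detail architecture can be sketched independently of the learned parts. The sketch below is a deliberate simplification: it uses a plain mean filter for the decomposition (a stand-in for the paper's strong-edge guided filter), a weighted average for the base layers, and a max-absolute rule in place of the CNN/DCT detail fusion:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def decompose(img, size=15):
    """Two-scale split: a smoothed base layer plus the detail residual
    (a stand-in for the paper's strong-edge guided-filter decomposition)."""
    base = uniform_filter(img.astype(float), size)
    return base, img.astype(float) - base

def fuse(ir, vis, w=0.5):
    """Weighted base-layer fusion plus max-absolute detail fusion,
    then recombine by addition, as in the paper's final step."""
    base_ir, det_ir = decompose(ir)
    base_vis, det_vis = decompose(vis)
    base_f = w * base_ir + (1 - w) * base_vis
    det_f = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return base_f + det_f
```

Because decomposition and recombination are exact inverses here, fusing an image with itself returns the image unchanged, a useful sanity check for any base/detail pipeline.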

