fusion methods
Recently Published Documents

TOTAL DOCUMENTS: 791 (five years: 280)
H-INDEX: 33 (five years: 9)

Author(s):  
Javier Medina ◽  
Nelson Vera ◽  
Erika Upegui

Image fusion provides users with detailed information about the urban and rural environment, which is useful for applications such as urban planning and management when higher-spatial-resolution images are not available. There are different image fusion methods. This paper implements, evaluates, and compares six satellite image-fusion methods, namely the wavelet 2D-M transform, Gram-Schmidt, high-frequency modulation, the high-pass filter (HPF) transform, simple mean value, and PCA. An Ikonos image (panchromatic, PAN, and multispectral, MULTI) showing the northwest of Bogotá (Colombia) is used to generate six fused images: MULTI_Wavelet 2D-M, MULTI_G-S, MULTI_MHF, MULTI_HPF, MULTI_SMV, and MULTI_PCA. In order to assess the efficiency of the six image-fusion methods, the resulting images were evaluated in terms of both spatial and spectral quality. To this end, four metrics were applied, namely the correlation index, the erreur relative globale adimensionnelle de synthèse (ERGAS), the relative average spectral error (RASE), and the Q index. The best results were obtained for the MULTI_SMV image, which exhibited a spectral correlation higher than 0.85, a Q index of 0.84, and the best spectral-assessment scores according to ERGAS and RASE, 4.36% and 17.39% respectively.
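As a minimal sketch of how these quality metrics could be computed, the NumPy snippet below implements ERGAS, RASE, and the Wang-Bovik Q index from their standard definitions; the (bands, H, W) array convention and the `reference`/`fused` naming are assumptions, not the authors' code, and lower values are better for ERGAS and RASE while Q is ideal at 1.0.

```python
import numpy as np

def ergas(reference, fused, ratio=4):
    """ERGAS (%): lower is better. Arrays are (bands, H, W);
    ratio is the MS/PAN pixel-size ratio (4 for Ikonos: 4 m MS, 1 m PAN)."""
    rmse2 = np.mean((reference - fused) ** 2, axis=(1, 2))
    means = reference.mean(axis=(1, 2))
    return 100.0 / ratio * np.sqrt(np.mean(rmse2 / means ** 2))

def rase(reference, fused):
    """RASE (%): relative average spectral error, lower is better."""
    rmse2 = np.mean((reference - fused) ** 2, axis=(1, 2))
    return 100.0 / reference.mean() * np.sqrt(np.mean(rmse2))

def q_index(x, y):
    """Wang-Bovik universal image quality index between two bands."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```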


According to the ubiquitous computing paradigm, dispersed computers within the home environment can support the residents' health by being aware of all developing and evolving situations. The context-awareness of the supporting computers stems from the acquisition of data on the events occurring at home. In some cases, different sensors provide input of identical type, thereby raising conflict-related issues. Thus, for each type of input data, fusion methods must be applied to the raw data to obtain a dominant input value. Also, for diagnostic inference purposes, data fusion methods must be applied to the values of the available classes of multiple contextual data structures. Dempster-Shafer theory offers the algorithmic tools to efficiently fuse the data of each input type or class. Threading technology accelerates the computational process; benchmarks carried out on a publicly available dataset show that the threaded implementation is more efficient. Thus, threading technology proved promising for home UbiHealth applications by lowering the number of required cooperating computers.
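A minimal sketch of Dempster's rule of combination for two conflicting home sensors follows; the dict-of-frozensets mass representation and the "fall"/"sit" event labels are illustrative assumptions, and each input type's combination could be dispatched to a `concurrent.futures.ThreadPoolExecutor` to parallelize the fusion as the abstract suggests.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, given as
    dicts mapping frozenset hypotheses to belief mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Two sensors reporting on the same event type at home (hypothetical labels):
s1 = {frozenset({"fall"}): 0.7, frozenset({"fall", "sit"}): 0.3}
s2 = {frozenset({"fall"}): 0.6, frozenset({"sit"}): 0.2, frozenset({"fall", "sit"}): 0.2}
print(dempster_combine(s1, s2))
```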


Forests ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 33
Author(s):  
Xueliang Wang ◽  
Honge Ren

Multi-source remote sensing data provide innovative technical support for tree species recognition. Despite noteworthy advancements in image fusion methods, tree species recognition remains relatively poor because the features that multi-source data offer for each pixel in the same region cannot be deeply exploited. In the present paper, a novel deep learning approach for hyperspectral imagery is proposed to improve the classification accuracy of tree species. The proposed method, named the double-branch multi-source fusion (DBMF) method, can more deeply determine the relationship between multi-source data and provide more effective information. The DBMF method does this by fusing spectral features extracted from a hyperspectral image (HSI) captured by the HJ-1A satellite and spatial features extracted from a multispectral image (MSI) captured by the Sentinel-2 satellite. The network has two branches. In the spatial branch, to avoid the risk of information loss, sandglass blocks are embedded into a convolutional neural network (CNN) to extract the corresponding spatial neighborhood features from the MSI. Simultaneously, to make the transfer of useful spectral features more effective, the spectral branch employs bidirectional long short-term memory (Bi-LSTM) with a triple attention mechanism to extract the spectral features of each pixel in the low-resolution HSI. After the addition of a fusion activation function, which allows the network to obtain more interactive information, the feature information is fused to classify the tree species. Finally, the fusion strategy allows for the prediction of the full classification map of three study areas. Experimental results on a multi-source dataset show that DBMF has a significant advantage over other state-of-the-art frameworks.
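The double-branch idea can be sketched in PyTorch as below: a small CNN encodes an MSI spatial patch, a Bi-LSTM encodes the per-pixel HSI spectrum, and the two feature vectors are concatenated before classification. All layer sizes, band counts, and the plain concatenation fusion are assumptions; the sketch omits the paper's sandglass blocks, triple attention, and fusion activation function.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Sketch of a double-branch fusion classifier (not the published DBMF)."""
    def __init__(self, msi_bands=4, hsi_bands=115, n_classes=5, hidden=64):
        super().__init__()
        self.spatial = nn.Sequential(          # MSI patch -> spatial features
            nn.Conv2d(msi_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.spectral = nn.LSTM(               # HSI spectrum -> spectral features
            input_size=1, hidden_size=hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(64 + 2 * hidden, n_classes)

    def forward(self, msi_patch, hsi_spectrum):
        f_spa = self.spatial(msi_patch)                      # (B, 64)
        out, _ = self.spectral(hsi_spectrum.unsqueeze(-1))   # (B, bands, 2*hidden)
        f_spe = out[:, -1, :]                                # last-step summary
        return self.head(torch.cat([f_spa, f_spe], dim=1))   # feature-level fusion

logits = TwoBranchFusion()(torch.randn(2, 4, 9, 9), torch.randn(2, 115))
```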


2021 ◽  
Vol 14 (1) ◽  
pp. 113
Author(s):  
Yohann Constans ◽  
Sophie Fabre ◽  
Michael Seymour ◽  
Vincent Crombez ◽  
Yannick Deville ◽  
...  

Hyperspectral pansharpening methods in the reflective domain are limited by the large difference between the visible panchromatic (PAN) and hyperspectral (HS) spectral ranges, which notably leads to poor representation of the SWIR (1.0–2.5 μm) spectral domain. A novel instrument concept is proposed in this study, introducing a second PAN channel in the SWIR II (2.0–2.5 μm) spectral domain. Two extended fusion methods are proposed to process both PAN channels, namely Gain-2P and CONDOR-2P: the first is an extended version of the Brovey transform, whereas the second adds mixed-pixel preprocessing steps to Gain-2P. Following an exhaustive performance-assessment protocol including global, refined, and local numerical analyses supplemented by supervised classification, we evaluated the updated methods on peri-urban and urban datasets. The results confirm the significant contribution of the second PAN channel (up to 45% improvement for both datasets in terms of the mean normalised gap over the full reflective domain, and 60% over the SWIR domain only) and reveal a clear advantage for CONDOR-2P (as compared with Gain-2P) on the peri-urban dataset.
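The underlying Brovey-style gain injection with two PAN channels could look like the sketch below: each HS band is scaled by the ratio of the PAN channel covering its spectral range to an intensity synthesized from the HS bands in that range. The 2.0 μm split point, the mean-based intensity, and the assumption that the HS cube is already interpolated to the PAN grid are all simplifications; this is not the published Gain-2P algorithm.

```python
import numpy as np

def gain_fusion_two_pan(hs, wavelengths, pan_vis, pan_swir, eps=1e-6):
    """Brovey-style gain injection with two PAN channels (a sketch).
    hs: (bands, H, W) hyperspectral cube interpolated to the PAN grid;
    wavelengths: per-band centre wavelengths in micrometres."""
    fused = np.empty_like(hs, dtype=np.float64)
    vis = wavelengths < 2.0          # bands driven by the first PAN channel
    for mask, pan in ((vis, pan_vis), (~vis, pan_swir)):
        if not mask.any():
            continue
        intensity = hs[mask].mean(axis=0)   # synthesized low-res intensity
        gain = pan / (intensity + eps)      # per-pixel injection gain
        fused[mask] = hs[mask] * gain       # same gain for all bands in range
    return fused
```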


Author(s):  
Bing Zhai ◽  
Yu Guan ◽  
Michael Catt ◽  
Thomas Plötz

Sleep is a fundamental physiological process that is essential for sustaining a healthy body and mind. The gold standard for clinical sleep monitoring is polysomnography (PSG), based on which sleep can be categorized into five stages: wake, rapid eye movement sleep (REM), and Non-REM sleep 1 to 3 (N1, N2, N3). However, PSG is expensive, burdensome, and not suitable for daily use. For long-term sleep monitoring, ubiquitous sensing may be a solution. Most recently, cardiac and movement sensing has become popular for classifying three-stage sleep, since both modalities can be easily acquired from research-grade or consumer-grade devices (e.g., Apple Watch). However, how best to fuse the data for the greatest accuracy remains an open question. In this work, we comprehensively studied deep learning (DL)-based advanced fusion techniques, consisting of three fusion strategies alongside three fusion methods, for three-stage sleep classification on two publicly available datasets. The experimental results provide strong evidence that three-stage sleep can be reliably classified by fusing the cardiac and movement sensing modalities, which may become a practical tool for large-scale sleep-stage assessment studies or long-term self-tracking of sleep. To accelerate sleep research in the ubiquitous/wearable computing community, we made this project open source; the code can be found at: https://github.com/bzhai/Ubi-SleepNet.
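To make the notion of fusion strategies concrete, here is a hedged PyTorch sketch contrasting feature-level ("early") fusion with decision-level ("late") fusion of a cardiac encoder and a movement encoder; the feature dimensions, layer sizes, and logit averaging are illustrative assumptions rather than the Ubi-SleepNet architectures.

```python
import torch
import torch.nn as nn

class SleepFusionNet(nn.Module):
    """Sketch of two fusion strategies for 3-stage sleep classification
    (wake / NREM / REM) from cardiac and movement feature windows."""
    def __init__(self, cardiac_dim=8, motion_dim=3, n_classes=3, strategy="early"):
        super().__init__()
        self.strategy = strategy
        self.enc_c = nn.Sequential(nn.Linear(cardiac_dim, 32), nn.ReLU())
        self.enc_m = nn.Sequential(nn.Linear(motion_dim, 32), nn.ReLU())
        if strategy == "early":                 # fuse features, one classifier
            self.head = nn.Linear(64, n_classes)
        else:                                   # "late": per-modality classifiers
            self.head_c = nn.Linear(32, n_classes)
            self.head_m = nn.Linear(32, n_classes)

    def forward(self, cardiac, motion):
        fc, fm = self.enc_c(cardiac), self.enc_m(motion)
        if self.strategy == "early":
            return self.head(torch.cat([fc, fm], dim=1))
        return (self.head_c(fc) + self.head_m(fm)) / 2  # average the logits
```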


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 33
Author(s):  
Chaowei Duan ◽  
Yiliu Liu ◽  
Changda Xing ◽  
Zhisheng Wang

An efficient method for infrared and visible image fusion is presented using truncated Huber penalty function smoothing and visual-saliency-based threshold optimization. The method merges complementary information from multimodal source images into a more informative composite image in a two-scale domain, in which the significant objects/regions are highlighted and rich feature information is preserved. Firstly, the source images are decomposed into two-scale representations, namely the approximate and residual layers, using truncated Huber penalty function smoothing. Benefiting from its edge- and structure-preserving characteristics, the significant objects and regions in the source images are effectively extracted without halo artifacts around the edges. Secondly, a visual-saliency-based threshold optimization fusion rule is designed to fuse the approximate layers, aiming to highlight the salient targets in the infrared images and retain the high-intensity regions in the visible images. A sparse-representation-based fusion rule is adopted to fuse the residual layers with the goal of acquiring rich detail texture information. Finally, combining the fused approximate and residual layers reconstructs the fused image with more natural visual effects. Extensive experimental results demonstrate that the proposed method achieves comparable or superior performance relative to several state-of-the-art fusion methods in both visual results and objective assessments.
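The two-scale pipeline can be sketched as follows, with an OpenCV bilateral filter standing in for the truncated-Huber smoother and a simple Laplacian-saliency weighting standing in for the threshold-optimized and sparse-representation rules; filter parameters are assumptions.

```python
import cv2
import numpy as np

def fuse_two_scale(ir, vis):
    """Two-scale IR/visible fusion sketch.  A bilateral filter substitutes
    for the paper's truncated-Huber smoother; absolute-Laplacian saliency
    substitutes for its threshold-optimized / sparse fusion rules."""
    ir, vis = ir.astype(np.float32), vis.astype(np.float32)
    base_ir = cv2.bilateralFilter(ir, 9, 75, 75)     # approximate layers
    base_vis = cv2.bilateralFilter(vis, 9, 75, 75)
    det_ir, det_vis = ir - base_ir, vis - base_vis   # residual layers

    # Approximate layers: weight by per-pixel saliency (Laplacian magnitude).
    s_ir = np.abs(cv2.Laplacian(ir, cv2.CV_32F)) + 1e-6
    s_vis = np.abs(cv2.Laplacian(vis, cv2.CV_32F)) + 1e-6
    w = s_ir / (s_ir + s_vis)
    base = w * base_ir + (1 - w) * base_vis

    # Residual layers: keep the stronger detail at each pixel.
    detail = np.where(np.abs(det_ir) > np.abs(det_vis), det_ir, det_vis)
    return np.clip(base + detail, 0, 255).astype(np.uint8)
```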


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 24
Author(s):  
Yan-Tsung Peng ◽  
He-Hao Liao ◽  
Ching-Fu Chen

In contrast to conventional digital images, high-dynamic-range (HDR) images have a broader intensity range between the darkest and brightest regions, capturing more details in a scene. Such images are produced by fusing images of the same scene taken with different exposure values (EVs). Most existing multi-scale exposure fusion (MEF) algorithms assume that the input images are multi-exposed with small EV intervals. However, with emerging spatially multiplexed exposure technology, which can capture a pair of short- and long-exposure images simultaneously, it becomes essential to handle two-exposure image fusion. To bring out more well-exposed content, we generate a more helpful intermediate virtual image for fusion using the proposed Optimized Adaptive Gamma Correction (OAGC), yielding better contrast, saturation, and well-exposedness. Fusing the input images with the enhanced virtual image works well even when both inputs are underexposed or overexposed, a case that other state-of-the-art fusion methods cannot handle. The experimental results show that our method performs favorably against other state-of-the-art image fusion methods in generating high-quality fusion results.
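A minimal sketch of generating such an intermediate virtual image follows, using a plain mean-luminance adaptive gamma as a stand-in for the paper's OAGC; the averaging of the two exposures, the 0.5 target mean, and the function name are all assumptions.

```python
import numpy as np

def virtual_exposure(short, long, target_mean=0.5, eps=1e-6):
    """Sketch of an intermediate 'virtual' image for two-exposure fusion.
    A mean-luminance adaptive gamma stands in for the paper's OAGC."""
    mid = 0.5 * (short.astype(np.float64) + long.astype(np.float64)) / 255.0
    gamma = np.log(target_mean + eps) / np.log(mid.mean() + eps)
    virtual = np.clip(mid, eps, 1.0) ** gamma   # pull mean toward target
    return (virtual * 255).astype(np.uint8)
```

If the averaged input is underexposed (mean below 0.5), the computed gamma falls below 1 and brightens the virtual image; an overexposed input yields a gamma above 1 and darkens it.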


2021 ◽  
Author(s):  
Rogers F Silva ◽  
Eswar Damaraju ◽  
Xinhui Li ◽  
Peter Kochonov ◽  
Aysenil Belger ◽  
...  

With the increasing availability of large-scale multimodal neuroimaging datasets, it is necessary to develop data fusion methods that can extract cross-modal features. A general framework, multidataset independent subspace analysis (MISA), has been developed to encompass multiple blind source separation approaches and identify linked cross-modal components in multiple datasets. In this work we utilized the multimodal independent vector analysis model in MISA to directly identify meaningful linked features across three neuroimaging modalities (structural magnetic resonance imaging (MRI), resting-state functional MRI, and diffusion MRI) in two large independent datasets, one comprising healthy subjects and the other including patients with schizophrenia. The results show several linked subject profiles (the sources/components) that capture age-associated reductions, schizophrenia-related biomarkers, sex effects, and cognitive performance.
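As a rough, far simpler surrogate for this kind of cross-modal linkage (not MISA or multimodal IVA themselves), one could run spatial ICA per modality and pair components whose subject loading profiles correlate across modalities, as in the sketch below; the data layout and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

def linked_components(modalities, n_comps=10):
    """Sketch: per-modality spatial ICA, then cross-modal linkage via
    correlation of subject loadings.  Each entry of `modalities` is a
    (subjects, features) array for one imaging modality."""
    loadings = []
    for X in modalities:
        ica = FastICA(n_components=n_comps, random_state=0)
        ica.fit_transform(X.T)        # rows = features: spatial ICA
        loadings.append(ica.mixing_)  # (subjects, n_comps) profiles
    # Correlation between every component pair of modalities 0 and 1.
    a = (loadings[0] - loadings[0].mean(0)) / loadings[0].std(0)
    b = (loadings[1] - loadings[1].mean(0)) / loadings[1].std(0)
    return a.T @ b / a.shape[0]   # |corr[i, j]| near 1 suggests a link
```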


2021 ◽  
Vol 72 ◽  
pp. 1281-1305
Author(s):  
Atefe Pakzad ◽  
Morteza Analoui

Distributional semantic models represent the meaning of words as vectors. We introduce a selection method to learn a vector space in which each dimension is a natural word. The selection method starts from the most frequent words and selects the best-performing subset. The method thus produces a vector space whose dimensions are themselves words, which is its main advantage over fusion methods such as NMF and over neural embedding models. We apply the method to the ukWaC corpus and train a vector space of N = 1500 basis words. We report test results on the word similarity tasks for the MEN, RG-65, SimLex-999, and WordSim353 gold datasets. The results also show that reducing the number of basis vectors from 5000 to 1500 reduces accuracy by only about 1.5-2%, so we achieve good interpretability without a large penalty. Interpretability evaluation results indicate that the word vectors obtained by the proposed method with N = 1500 are more interpretable than those of word embedding models and the baseline method. We report the top 15 of the 1500 selected basis words in this paper.
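The core idea of a word-dimensioned space can be sketched as below: represent each vocabulary word by its windowed co-occurrence counts with the N most frequent words. Plain frequency ranking stands in for the paper's performance-based subset selection, and the window size is an assumption.

```python
from collections import Counter

def basis_word_vectors(corpus_tokens, n_basis=1500, window=5):
    """Sketch of an interpretable vector space whose dimensions are words:
    vectors[w][k] counts how often w co-occurs with basis word basis[k]
    within `window` tokens."""
    freq = Counter(corpus_tokens)
    basis = [w for w, _ in freq.most_common(n_basis)]
    index = {w: i for i, w in enumerate(basis)}
    vectors = {}
    for i, word in enumerate(corpus_tokens):
        vec = vectors.setdefault(word, [0] * len(basis))
        lo, hi = max(0, i - window), min(len(corpus_tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i and corpus_tokens[j] in index:
                vec[index[corpus_tokens[j]]] += 1
    return basis, vectors
```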

