fusion rule
Recently Published Documents

TOTAL DOCUMENTS: 244 (FIVE YEARS: 77)
H-INDEX: 12 (FIVE YEARS: 4)

Author(s): Mummadi Gowthami Reddy, Palagiri Veera Narayana Reddy, Patil Ramana Reddy

In the current era of technological development, medical imaging plays an important role in many applications of medical diagnosis and therapy. In this regard, medical image fusion can be a powerful tool for combining multi-modal images using image processing techniques. However, conventional approaches fail to provide effective image quality assessment and robustness in the fused image. To overcome these drawbacks, a three-stage multiscale decomposition (TSMSD) approach using pulse-coupled neural networks with adaptive arguments (PCNN-AA) is proposed in this work for multi-modal medical image fusion. First, the nonsubsampled shearlet transform (NSST) is applied to the source images to decompose them into low-frequency and high-frequency bands. Then, the low-frequency bands of the two source images are fused using nonlinear anisotropic filtering with the discrete Karhunen–Loeve transform (NLAF-DKLT). Next, the high-frequency bands obtained from the NSST are fused using the PCNN-AA approach. The fused low-frequency and high-frequency bands are then reconstructed using NSST reconstruction. Finally, a band fusion rule algorithm with pyramid reconstruction is applied to obtain the final fused medical image. Extensive simulation results demonstrate the superiority of the proposed TSMSD with PCNN-AA approach over state-of-the-art medical image fusion methods in terms of fusion quality metrics such as entropy (E), mutual information (MI), mean (M), standard deviation (STD), correlation coefficient (CC), and computational complexity.
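The three-stage structure above (decompose into bands, fuse each band with its own rule, reconstruct) can be sketched in simplified form. In the sketch below, a box-filter low-pass split stands in for the NSST, a plain average for the NLAF-DKLT rule, and max-absolute selection for the PCNN-AA comparison; these substitutions are assumptions made for illustration and are not the paper's actual transforms.

```python
import numpy as np

def lowpass(img, k=5):
    # Box-blur low-pass as a stand-in for the NSST low-frequency band
    # (the paper uses the non-subsampled shearlet transform, not shown here).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(src_a, src_b):
    # Stage 1: decompose each source into low/high "bands".
    low_a, low_b = lowpass(src_a), lowpass(src_b)
    high_a, high_b = src_a - low_a, src_b - low_b
    # Stage 2: average the low bands (stand-in for the NLAF-DKLT rule).
    low_f = 0.5 * (low_a + low_b)
    # Stage 3: max-absolute selection on the high bands
    # (stand-in for the PCNN-AA firing-map comparison).
    high_f = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low_f + high_f
```

Note that because the decomposition is a perfect split (low + high = source), fusing an image with itself returns the image unchanged, a useful sanity check for any such pipeline.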


Author(s): Hui Zhang, Xinning Han, Rui Zhang

In multimodal image fusion, improving the visual quality of the fused image while preserving energy and extracting details has attracted increasing attention in recent years. Based on research into visual saliency and activity-level measurement of the base layer, a multimodal image fusion method based on a guided filter is proposed in this paper. First, multi-scale decomposition with a guided filter is used to decompose the two source images into a small-scale layer, a large-scale layer, and a base layer. The maximum-absolute-value fusion rule is adopted in the small-scale layer, a weighted fusion rule based on visual parameters is adopted in the large-scale layer, and a fusion rule based on activity-level measurement is adopted in the base layer. Finally, the three fused layers are combined into the final fused image. The experimental results show that the proposed method improves edge handling and the visual effect in multimodal image fusion.
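A minimal sketch of the three-scale scheme is given below. Box filters stand in for the guided filter, and the three rules are simplified: max-absolute for the small-scale layer, smoothed-absolute saliency weights for the large-scale layer, and a local-energy activity measure for the base layer. All three simplifications are assumptions for illustration, not the paper's exact formulations.

```python
import numpy as np

def box(img, k):
    # k x k box filter with edge replication.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def three_scale(img, k_small=3, k_large=9):
    # Two successive smoothings give small-scale, large-scale, base layers.
    # (Box filters stand in for the guided filter used in the paper.)
    smooth1 = box(img, k_small)
    smooth2 = box(smooth1, k_large)
    return img - smooth1, smooth1 - smooth2, smooth2

def fuse(a, b):
    sa, la, ba = three_scale(a)
    sb, lb, bb = three_scale(b)
    # Small-scale layer: max-absolute rule.
    sf = np.where(np.abs(sa) >= np.abs(sb), sa, sb)
    # Large-scale layer: weights from the smoothed absolute response
    # (a simple stand-in for the paper's visual-parameter weights).
    wa, wb = box(np.abs(la), 9), box(np.abs(lb), 9)
    lf = (wa * la + wb * lb) / (wa + wb + 1e-12)
    # Base layer: activity-level measurement via local energy of the source.
    ea, eb = box(a * a, 9), box(b * b, 9)
    bf = np.where(ea >= eb, ba, bb)
    return sf + lf + bf
```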


Electronics, 2021, Vol 11 (1), pp. 33
Author(s): Chaowei Duan, Yiliu Liu, Changda Xing, Zhisheng Wang

An efficient method for infrared and visible image fusion is presented using truncated Huber penalty function smoothing and visual-saliency-based threshold optimization. The method merges complementary information from multimodality source images into a more informative composite image in a two-scale domain, in which significant objects/regions are highlighted and rich feature information is preserved. First, the source images are decomposed into two-scale image representations, namely the approximate and residual layers, using truncated Huber penalty function smoothing. Benefiting from its edge- and structure-preserving characteristics, the significant objects and regions in the source images are effectively extracted without halo artifacts around the edges. Second, a visual-saliency-based threshold optimization fusion rule is designed to fuse the approximate layers, aiming to highlight the salient targets in infrared images and retain the high-intensity regions in visible images. A sparse-representation-based fusion rule is adopted to fuse the residual layers with the goal of acquiring rich detail texture information. Finally, combining the fused approximate and residual layers reconstructs the fused image with a more natural visual effect. Extensive experimental results demonstrate that the proposed method achieves comparable or superior performance relative to several state-of-the-art fusion methods in both visual results and objective assessments.
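The saliency-threshold idea for the approximate layers can be sketched as follows. A global-contrast saliency map (distance from the mean intensity) and a mean-of-saliency threshold are crude stand-ins for the paper's saliency model and its threshold optimization; both are assumptions for illustration.

```python
import numpy as np

def saliency(img):
    # Global-contrast saliency: distance from the mean intensity
    # (a crude stand-in for the paper's visual saliency map).
    return np.abs(img - img.mean())

def fuse_approximate(approx_ir, approx_vis, threshold=None):
    s = saliency(approx_ir)
    # The paper optimizes this threshold; the mean of the saliency map
    # is used here as a simple placeholder.
    if threshold is None:
        threshold = s.mean()
    mask = s > threshold  # salient infrared targets
    # Keep IR intensities at salient targets, visible intensities elsewhere.
    return np.where(mask, approx_ir, approx_vis)
```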


Sensors, 2021, Vol 22 (1), pp. 40
Author(s): Chaowei Duan, Changda Xing, Yiliu Liu, Zhisheng Wang

As a powerful technique for merging the complementary information of original images, infrared (IR) and visible image fusion approaches are widely used in surveillance, target detection, tracking, biological recognition, and other applications. In this paper, an efficient IR and visible image fusion method is proposed to simultaneously enhance the significant targets/regions in all source images and preserve the rich background details of visible images. A multi-scale representation based on the fast global smoother is first used to decompose the source images into base and detail layers, aiming to extract the salient structure information and suppress halos around the edges. Then, a target-enhanced parallel Gaussian fuzzy-logic-based fusion rule is proposed to merge the base layers, which avoids brightness loss and highlights significant targets/regions. In addition, a visual-saliency-map-based fusion rule is designed to merge the detail layers with the purpose of obtaining rich details. Finally, the fused image is reconstructed. Extensive experiments are conducted on 21 image pairs and a Nato-camp sequence (32 image pairs) to verify the effectiveness and superiority of the proposed method. Compared with several state-of-the-art methods, the experimental results demonstrate that the proposed method achieves more competitive or superior performance in both visual results and objective evaluation.
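The Gaussian fuzzy-logic idea for the base layers can be illustrated with a single membership function: pixels near the IR maximum are treated as targets and weighted toward the IR base layer, all others toward the visible base layer. This assumed single-membership form is far simpler than the paper's parallel fuzzy rule and is shown only to convey the mechanism.

```python
import numpy as np

def gaussian_membership(x, mu, sigma):
    # Degree of membership of intensity x in the "bright target" fuzzy set.
    return np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def fuse_base(base_ir, base_vis, rel_sigma=0.25):
    # Pixels near the IR maximum get weight ~1 (target-enhanced);
    # elsewhere the visible base layer dominates. Assumed form, for
    # illustration only.
    spread = base_ir.max() - base_ir.min() + 1e-12
    w = gaussian_membership(base_ir, base_ir.max(), rel_sigma * spread)
    return w * base_ir + (1.0 - w) * base_vis
```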


Author(s): Md Sipon Miah, Michael Schukat, Enda Barrett

Spectrum sensing in a cognitive radio network involves detecting when a primary user vacates its licensed spectrum, enabling secondary users to broadcast on the same band. Accurately sensing the absence of the primary user ensures maximum utilization of the licensed spectrum and is fundamental to building effective cognitive radio networks. In this paper, we address the issues of enhancing sensing gain, average throughput, energy consumption, and network lifetime in a cognitive radio-based Internet of Things (CR-IoT) network that uses the non-sequential approach. As a solution, we propose a Dempster–Shafer theory-based throughput analysis of an energy-efficient spectrum sensing scheme for a heterogeneous CR-IoT network using the sequential approach, which first uses the signal-to-noise ratio (SNR) to evaluate the degree of reliability and then merges the reporting time slot into a flexible sensing time slot to assess spectrum sensing more efficiently. Before a global decision is made at the fusion center on the basis of both a soft decision fusion rule (Dempster–Shafer theory) and a hard decision fusion rule (the "n-out-of-k" rule), the flexible sensing time slot is used to adjust the measurement result. Using the proposed Dempster–Shafer theory, evidence is aggregated during the reporting time slot and a global decision is then made at the fusion center. In addition, the throughput of the proposed scheme using the sequential approach is analyzed under both the soft and hard decision fusion rules. Simulation results indicate that the new approach improves primary-user sensing accuracy by 13% over previous approaches, while concurrently increasing the detection probability and decreasing the false alarm probability.
It also improves overall throughput, reduces energy consumption, prolongs the expected lifetime, and reduces the global error probability compared with previous approaches under all conditions [part of this paper was presented at the EuCAP2018 conference (Md. Sipon Miah et al. 2018)].
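Dempster's combination rule itself is standard. A minimal implementation over the binary frame of discernment {H0 (channel idle), H1 (channel occupied)} plus the ignorance set {H0, H1} combines two sensors' basic probability assignments and renormalizes by the conflict mass; the example masses below are illustrative, not values from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the frame
    {'H0', 'H1'} plus the ignorance set 'H0H1' using Dempster's rule."""
    frame = ["H0", "H1", "H0H1"]

    def intersect(a, b):
        # 'H0H1' is the whole frame, so it intersects to the other set;
        # H0 and H1 are disjoint, so their intersection is empty (None).
        if a == "H0H1":
            return b
        if b == "H0H1":
            return a
        return a if a == b else None

    combined = {f: 0.0 for f in frame}
    conflict = 0.0
    for a in frame:
        for b in frame:
            mass = m1[a] * m2[b]
            inter = intersect(a, b)
            if inter is None:
                conflict += mass  # mass assigned to the empty set
            else:
                combined[inter] += mass
    k = 1.0 - conflict  # renormalization constant
    return {f: v / k for f, v in combined.items()}
```

At the fusion center, the global decision would then compare the combined masses (or derived belief/plausibility) of H0 and H1.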


Sensors, 2021, Vol 21 (23), pp. 7813
Author(s): Xiaoxue Xing, Cong Luo, Jian Zhou, Minghan Yan, Cheng Liu, ...

To obtain more salient target information and more texture features, a new fusion method for infrared (IR) and visible (VIS) images combining regional energy (RE) and intuitionistic fuzzy sets (IFS) is proposed; the method can be described by the following steps. First, the IR and VIS images are decomposed into low- and high-frequency sub-bands by the non-subsampled shearlet transform (NSST). Second, an RE-based fusion rule is used to obtain a low-frequency pre-fusion image, which preserves the important target information in the resulting image. Based on the pre-fusion image, an IFS-based fusion rule is introduced to produce the final low-frequency image, which transfers more of the important texture information to the resulting image. Third, the 'max-absolute' fusion rule is adopted to fuse the high-frequency sub-bands. Finally, the fused image is reconstructed by the inverse NSST. The TNO and RoadScene datasets are used to evaluate the proposed method. The simulation results demonstrate that the fused images of the proposed method have more salient targets, higher contrast, and more plentiful detail and local features. Qualitative and quantitative analyses show that the presented method is superior to nine other advanced fusion methods.
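The regional-energy rule in the second step is commonly defined as choosing, per coefficient, the source whose local neighbourhood carries more energy. A minimal sketch under that assumption (window size and selection policy are illustrative, not the paper's exact parameters):

```python
import numpy as np

def regional_energy(band, k=3):
    # Sum of squared coefficients in a k x k window around each pixel,
    # with edge replication at the borders.
    pad = k // 2
    sq = np.pad(band.astype(float) ** 2, pad, mode="edge")
    e = np.zeros(band.shape)
    for dy in range(k):
        for dx in range(k):
            e += sq[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return e

def fuse_low(band_ir, band_vis, k=3):
    # Choose the coefficient whose neighbourhood carries more energy.
    e_ir = regional_energy(band_ir, k)
    e_vis = regional_energy(band_vis, k)
    return np.where(e_ir >= e_vis, band_ir, band_vis)
```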


2021, Vol 2021, pp. 1-12
Author(s): Heng Shen

With the development of science and technology, a variety of electronic devices have entered our lives, making them more intelligent and our work more effective. This article studies the application of multisensor data fusion technology to a water dragon boat training monitoring system. The various physical indicators of dragon boat athletes can then be analyzed from the data reported by these sensors, so that athletes know when they are approaching their physical limits and can perform in their best state to obtain the best results. Sensors are used to capture data from each part of the athlete's limbs; based on these measurements, the maximum value of the data is determined and the training goal is adjusted accordingly. This article examines several data fusion algorithms, using the Kalman filter, Bayesian estimation, and Dempster–Shafer (DS) evidence theory to compare data fusion systems; the comparison identifies the best fusion accuracy, and the most suitable method is then applied to the water dragon boat monitoring system to enhance the training efficiency of dragon boat athletes. The experimental results show that when the value of the parameter increases from 0.97 to 2.5, the average classification accuracy of the k-NN classifier decreases from 0.97 to 0.4, and the accuracy of the fusion results of the three fusion rules decreases correspondingly; however, the RP fusion rule proposed in this paper still performs better than the other two fusion rules. When the classifier is k-NN, the accuracy of all three fusion rules improves as the number of sensors increases, but the final fusion accuracy obtained by the proposed RP fusion rule is always higher than that of the NB and WMV fusion rules.
Through these analyses, a training program best suited to dragon boat athletes can be worked out, so that their training effort is not wasted. Multisensor data fusion technology brings great convenience to water dragon boat training and can provide more reasonable and accurate data for exploring practical training methods while ensuring the safety of personnel.
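Of the algorithms compared above, the Kalman filter is the simplest to sketch. The one-dimensional filter below smooths a stream of noisy sensor readings of a nearly constant quantity (e.g. a heart-rate channel, used here only as a hypothetical example); it shows the standard predict/update cycle, not the article's specific system model.

```python
def kalman_fuse(measurements, r, q=1e-5, x0=0.0, p0=1.0):
    """One-dimensional Kalman filter over noisy readings of a (nearly)
    constant quantity.
    r: measurement noise variance; q: process noise variance;
    x0, p0: initial state estimate and its variance."""
    x, p = x0, p0
    for z in measurements:
        p = p + q              # predict: uncertainty grows by process noise
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update estimate with measurement z
        p = (1.0 - k) * p      # update uncertainty
    return x, p
```

After enough readings, the estimate converges toward the true value and the variance shrinks, which is why such filters are used to de-noise wearable-sensor streams before higher-level fusion.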

