Evaluation of weighted fusion for scalar images in multi-sensor network

2021 ◽  
Vol 10 (2) ◽  
pp. 911-916
Author(s):  
C. Jittawiriyanukoon ◽  
V. Srisarkun

Conventional scalar-based image fusion faces the problem of how to prioritize and proportionally enrich image details in a multi-sensor network. Fusing and manipulating computer-vision patterns from multiple sensors is practical. A fusion (integration) rule, bit-depth conversion, and truncation (to resolve size conflicts) of the image information are studied. Across multi-sensor images, a fusion rule based on weighted priority is employed to reconstruct the prescribed details of the fused image. Experimental results confirm that the associated details between multiple images can be fused, the prescription is executed, and the features are improved. Visualization in both the spatial and frequency domains is also presented to support the image analysis.
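A weighted-priority fusion rule of the kind described can be sketched as follows; the weights, size truncation, and 8-bit conversion here are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def weighted_fusion(images, weights):
    """Fuse scalar (single-channel) images with a weighted-priority rule.

    `images` are 2-D arrays from different sensors; `weights` assigns a
    priority to each source. Shapes are truncated to the common overlap
    (the size-conflict case), and the result is converted to 8-bit depth.
    """
    # Truncate all images to the smallest common size.
    h = min(im.shape[0] for im in images)
    w = min(im.shape[1] for im in images)
    cropped = [im[:h, :w].astype(np.float64) for im in images]

    # Normalise the priorities so fused intensities stay in range.
    wsum = float(sum(weights))
    fused = sum(wi / wsum * im for wi, im in zip(weights, cropped))

    return np.clip(fused, 0, 255).astype(np.uint8)

a = np.full((4, 5), 100, dtype=np.uint8)   # sensor 1 (hypothetical data)
b = np.full((5, 4), 200, dtype=np.uint8)   # sensor 2, conflicting size
f = weighted_fusion([a, b], [3, 1])        # prioritise sensor 1 at 3:1
```

The output is cropped to the 4×4 overlap, and each pixel is the priority-weighted mean of the two sources.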

2021 ◽  
Vol 38 (3) ◽  
pp. 607-617
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Multimodal image fusion is now widely used as a processing tool in many image-related applications. Different sensors have been developed to capture useful information, chiefly infrared (IR) and visible (VI) image sensors. Fusing the outputs of both sensors provides better and more accurate scene information. The fused image is used mostly in military, surveillance, and remote-sensing applications. For better target identification and overall scene understanding, the fused image must provide better contrast and more edge information. This paper introduces a novel multimodal image fusion method aimed at improving both contrast and edge information. The first step of the algorithm is to resize the source images. A 3×3 sharpening filter and a morphological hat transform are then applied separately to the resized IR and VI images. The discrete wavelet transform (DWT) is used to produce low-frequency and high-frequency sub-bands. A "filter-based mean-weighted fusion rule" and a "filter-based max-weighted fusion rule" are newly introduced to combine the low-frequency and high-frequency sub-bands, respectively. The fused image is reconstructed with the inverse DWT (IDWT). The proposed method outperforms similar existing techniques both subjectively and objectively.
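The two sub-band rules can be sketched in simplified form; these stand-ins average the low bands and keep the larger-magnitude high-band coefficient, omitting the paper's filter-based weighting:

```python
import numpy as np

def fuse_subbands(low_ir, low_vi, high_ir, high_vi):
    """Simplified stand-ins for the paper's two fusion rules: average the
    low-frequency sub-bands (mean-weighted rule) and keep the coefficient
    of larger magnitude in the high-frequency sub-bands (max-weighted
    rule). The actual filter-based weighting is omitted."""
    low_fused = 0.5 * (low_ir + low_vi)            # mean-weighted rule
    pick_ir = np.abs(high_ir) >= np.abs(high_vi)   # max-weighted rule
    high_fused = np.where(pick_ir, high_ir, high_vi)
    return low_fused, high_fused

low_f, high_f = fuse_subbands(np.array([[2.0]]), np.array([[4.0]]),
                              np.array([[-5.0]]), np.array([[3.0]]))
```

In a full pipeline these fused sub-bands would be passed to the inverse DWT to reconstruct the image.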


2010 ◽  
Vol 121-122 ◽  
pp. 373-378 ◽  
Author(s):  
Jia Zhao ◽  
Li Lü ◽  
Hui Sun

According to the different frequency areas obtained by the shearlet transform, selection principles for the lowpass and highpass subbands are discussed separately. The lowpass subband coefficients of the fused image are obtained with a fusion rule based on region variance, while the highpass subband coefficients are selected with a fusion rule based on region energy. Experimental results show that, compared with traditional image fusion algorithms, the proposed approach provides a more satisfactory fusion outcome.
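Region-based selection rules of this kind can be sketched as below; the window radius and the per-pixel winner-take-all selection are assumptions, not the paper's exact parameters:

```python
import numpy as np

def local_stat(coeff, stat, radius=1):
    """Per-pixel region statistic over a (2r+1)x(2r+1) neighbourhood:
    variance for the lowpass rule, energy (sum of squares) for highpass."""
    h, w = coeff.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            win = coeff[max(0, i - radius):i + radius + 1,
                        max(0, j - radius):j + radius + 1]
            out[i, j] = win.var() if stat == "variance" else np.sum(win ** 2)
    return out

def select_coeffs(a, b, stat):
    """Keep, per pixel, the coefficient from the source whose region
    statistic is larger."""
    return np.where(local_stat(a, stat) >= local_stat(b, stat), a, b)

flat = np.zeros((3, 3))
textured = np.arange(9.0).reshape(3, 3)
sel = select_coeffs(flat, textured, "variance")
```

Because every neighbourhood of the textured source has positive variance, the rule picks its coefficients everywhere over the flat source.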


2014 ◽  
Vol 2014 ◽  
pp. 1-10 ◽  
Author(s):  
Yong Yang ◽  
Wenjuan Zheng ◽  
Shuying Huang

The aim of multifocus image fusion is to fuse images of the same scene taken with different focus settings into a single image with all objects in focus. In this paper, a novel multifocus image fusion method based on the human visual system (HVS) and a back-propagation (BP) neural network is presented. First, three features that reflect the clarity of a pixel are extracted and used to train a BP neural network to determine which pixel is clearer. Second, the clearer pixels are used to construct an initial fused image. Third, the focused regions are detected by measuring the similarity between the source images and the initial fused image, followed by morphological opening and closing operations. Finally, the final fused image is obtained by applying a fusion rule to the focused regions. Experimental results show that the proposed method provides better performance and outperforms several popular existing fusion methods in both objective and subjective evaluations.
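Clarity features of the kind that could feed such a BP network can be sketched as follows; spatial frequency, variance, and mean gradient magnitude are common focus measures, though the paper's exact three features may differ:

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency, a standard clarity (focus) measure: the root of
    the summed mean squared row and column differences."""
    rf = np.mean(np.diff(block, axis=1) ** 2)  # row frequency
    cf = np.mean(np.diff(block, axis=0) ** 2)  # column frequency
    return np.sqrt(rf + cf)

def clarity_features(block):
    """Three illustrative per-block clarity features: spatial frequency,
    variance and mean gradient magnitude."""
    b = block.astype(np.float64)
    gy, gx = np.gradient(b)
    return (spatial_frequency(b), b.var(), np.mean(np.hypot(gx, gy)))

flat = np.zeros((4, 4))
textured = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
```

A defocused (blurred) region scores low on all three features, so a trained classifier can label its pixels as the less clear source.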


2014 ◽  
Vol 989-994 ◽  
pp. 1082-1087
Author(s):  
Yan Chun Yang ◽  
Jian Wu Dang ◽  
Yang Ping Wang

To further improve the quality of medical image fusion, an improved fusion method based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper. A fusion rule based on an improved pulse-coupled neural network (PCNN) is adopted for the low-frequency sub-band coefficients. Because human vision is more sensitive to a local region of pixels than to a single pixel, it is more reasonable for region information, rather than a single pixel, to stimulate the PCNN. Each neuron of the PCNN model is stimulated by the regional spatial frequency of the low-frequency sub-band coefficients, and each low-frequency coefficient is selected according to the number of firing times. When choosing the bandpass directional sub-band coefficients, the directional characteristics of the NSCT are fully exploited, and a fusion rule based on the sum-modified Laplacian is presented for these coefficients. Experimental results show that the proposed method greatly improves the quality of the fused image compared with traditional fusion methods.
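The sum-modified Laplacian rule for the bandpass coefficients can be sketched as below; the per-pixel (windowless) form and edge padding are simplifying assumptions:

```python
import numpy as np

def sml(img, step=1):
    """Per-pixel modified Laplacian: |2f(x,y) - f(x-s,y) - f(x+s,y)| +
    |2f(x,y) - f(x,y-s) - f(x,y+s)|, with edge padding at the borders.
    (The full sum-modified Laplacian sums this over a small window.)"""
    f = img.astype(np.float64)
    p = np.pad(f, step, mode="edge")
    c = p[step:-step, step:-step]
    return (np.abs(2 * c - p[:-2 * step, step:-step] - p[2 * step:, step:-step])
            + np.abs(2 * c - p[step:-step, :-2 * step] - p[step:-step, 2 * step:]))

def fuse_bandpass(a, b):
    """Keep the coefficient from the source with the larger SML response."""
    return np.where(sml(a) >= sml(b), a, b)

a = np.array([[0.0, 0.0, 10.0]] * 3)   # source with an edge
b = np.full((3, 3), 5.0)               # featureless source
fused = fuse_bandpass(a, b)
```

Near the edge in the first source the Laplacian response dominates, so its coefficients are kept there.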


Merging multiple imaging modalities yields a single image with high information content, which is useful in disease diagnosis and treatment planning. The IHS-PCA method is a spatial-domain fusion approach that offers the finest visibility but demands vast memory and lacks steering information. We propose an integrated approach that combines NSCT with PCA in IHS space together with histogram matching. The fusion algorithm is applied to MRI and PET images, and improved functional properties are obtained. The IHS transform is a sharpening technique that converts a multispectral image from RGB channels into independent Intensity, Hue, and Saturation values. Histogram matching is performed on the intensity values of the two input images. Pathological details in the images can be emphasized at multiple scales and in multiple directions by using PCA with NSCT. The fusion rule applied is weighted averaging, and the principal components are used for dimensionality reduction. The inverse NSCT and inverse IHS are then performed to obtain the fused image in the new RGB space. Visual and subjective comparison with existing methods demonstrates that the proposed technique gives higher structural data content with high spatial and spectral resolution than earlier methods.
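The histogram-matching step applied to the two intensity images can be sketched with a standard CDF-mapping implementation (a generic technique, not this paper's specific code):

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so their empirical CDF matches the
    reference CDF, the usual histogram-matching construction."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    # For each source level, find the reference level at the same quantile.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

src = np.array([[10.0, 50.0], [50.0, 200.0]])
```

Matching an image against itself is the identity, which makes a convenient sanity check.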


2021 ◽  
Vol 9 (2) ◽  
pp. 1022-1030
Author(s):  
Shivakumar C. et al.

In the era of context-aware computing, everything is being automated, and the number of smart systems is increasing day by day. A smart system is all about context awareness, working in synergy with the objects in the system. The interaction between users and sensors produces a vast repository of context data, and the challenging task is to represent, store, and retrieve that data. In this research work, we provide solutions for context storage. Since the data generated by the sensor network are dynamic, we represent them with a context dimension tree and store them in cloud-based MongoDB, a NoSQL database that provides a dynamic schema, reasoning over the data with If-Then rules and the RETE algorithm. The novelty of this work is the integration of cloud-based MongoDB, the rule-based RETE algorithm, and the CLIPS tool architecture. This integration lets us represent, store, retrieve, and derive inferences from context data efficiently.
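The If-Then reasoning step can be sketched with a toy forward-chaining loop; a real deployment would store facts in MongoDB and match them with RETE (e.g. via CLIPS), and the rule names and thresholds here are purely illustrative:

```python
# Toy forward-chaining engine: each rule is a (condition, action) pair
# over a dictionary of context facts. This naive loop re-checks every
# rule until no new facts are derived; RETE avoids that re-checking.
def run_rules(facts, rules):
    changed = True
    while changed:
        changed = False
        for cond, action in rules:
            if cond(facts):
                derived = action(facts)
                if derived and any(facts.get(k) != v for k, v in derived.items()):
                    facts.update(derived)
                    changed = True
    return facts

rules = [
    # If the sensed temperature exceeds 30, classify the context as hot.
    (lambda f: f.get("temperature", 0) > 30, lambda f: {"weather": "hot"}),
    # If the context is hot, switch the fan on (a derived fact chains).
    (lambda f: f.get("weather") == "hot",    lambda f: {"fan": "on"}),
]
ctx = run_rules({"temperature": 34}, rules)
```

The second rule fires only because the first one derived a new fact, illustrating the chaining an inference engine provides over plain queries.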


2018 ◽  
Vol 87 ◽  
pp. 33-51 ◽  
Author(s):  
Wei He ◽  
Guan-Yu Hu ◽  
Zhi-Jie Zhou ◽  
Pei-Li Qiao ◽  
Xiao-Xia Han ◽  
...  

Author(s):  
Hui Zhang ◽  
Xinning Han ◽  
Rui Zhang

In multimodal image fusion, how to improve the visual effect of the fused image while balancing energy preservation and detail extraction has attracted increasing attention in recent years. Based on research into visual saliency and activity-level measurement of the base layer, a multimodal image fusion method based on a guided filter is proposed in this paper. First, multi-scale decomposition with a guided filter splits the two source images into a small-scale layer, a large-scale layer, and a base layer. A maximum-absolute-value fusion rule is adopted for the small-scale layer, a weighted fusion rule based on visual saliency parameters for the large-scale layer, and a fusion rule based on activity-level measurement for the base layer. Finally, the three fused layers are combined into the final fused image. Experimental results show that the proposed method improves edge handling and visual effect in multimodal image fusion.
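The three-layer decomposition and the small-scale rule can be sketched as below; a mean filter stands in for the guided filter, and the radii are assumed values:

```python
import numpy as np

def box_blur(img, r):
    """Simple mean filter, used here as a stand-in for the guided filter."""
    p = np.pad(img.astype(np.float64), r, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:r + dy + img.shape[0], r + dx:r + dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def decompose(img, r_small=1, r_large=2):
    """Split into small-scale, large-scale and base layers by successive
    smoothing; by construction the three layers sum back to the image."""
    b1 = box_blur(img, r_small)
    b2 = box_blur(b1, r_large)
    return img - b1, b1 - b2, b2   # small-scale, large-scale, base

def fuse_small(sa, sb):
    """Maximum-absolute-value rule used for the small-scale layer."""
    return np.where(np.abs(sa) >= np.abs(sb), sa, sb)

img = np.arange(25.0).reshape(5, 5)
small, large, base = decompose(img)
```

Summing the fused layers from the three rules then reconstructs the final image, mirroring the lamination step in the abstract.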


Author(s):  
Girraj Prasad Rathor ◽  
Sanjeev Kumar Gupta

Image fusion based on wavelet transforms is the most commonly used fusion method: it fuses the source image data in the wavelet domain according to some fusion rules. But because of the uncertainty of each source image's contribution to the fused image, designing a good fusion rule that incorporates as much information as possible into the fused image becomes the most important issue. Adaptive fuzzy logic is an ideal approach for resolving such uncertainty, yet it has not been used in the design of fusion rules. A new fusion technique based on the wavelet transform and adaptive fuzzy logic is introduced in this chapter. After applying the wavelet transform to the source images, it computes a weight for each source image's coefficients through adaptive fuzzy logic, then fuses the coefficients by weighted averaging with the computed weights to obtain the combined image. Mutual Information, Peak Signal-to-Noise Ratio, and Mean Squared Error are used as evaluation criteria.
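The weighted-averaging step can be sketched with a hand-rolled activity-based weight; this simple ratio is a stand-in for the chapter's adaptive fuzzy system, not its actual membership functions:

```python
import numpy as np

def activity_weight(ca, cb):
    """Weight for source A from relative coefficient activity (magnitude).
    A stand-in for a fuzzy membership: ties fall back to an even split."""
    ea, eb = np.abs(ca), np.abs(cb)
    with np.errstate(invalid="ignore"):
        return np.where(ea + eb > 0, ea / (ea + eb), 0.5)

def fuse_coeffs(ca, cb):
    """Weighted averaging of wavelet coefficients with processed weights."""
    w = activity_weight(ca, cb)
    return w * ca + (1 - w) * cb
```

A strong coefficient in one source dominates the average, while equally active coefficients are blended evenly.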


Author(s):  
James C. Harris ◽  
Carole A. Womeldorf

Wind resource assessments produce hundreds of thousands of measurements every year. Before determination of the wind power density, a function of the velocity cubed, those values are screened to remove erroneous data points. Typical categories of erroneous data are sensor malfunction, tower shading, and icing. Identification of tower shading is a well-established procedure dependent on the mounting direction of the sensor booms. Most instrument malfunctions are clearly extended flat lines and typically affect only one sensor at a time. Sensor icing of anemometers and directional vanes, on the other hand, can be subtle, can affect more than one sensor simultaneously, and can require an experienced evaluator's assessment. Designation of icing results in the removal of lower-velocity data. If too few points are removed, the wind velocity will be underestimated; if too many are removed, it can be exaggerated, and either error can have a significant influence on the power density because of the cube effect. Experts frequently disagree, and much of this disagreement is driven by the difference between rule-based approaches and operator-judgment approaches. This work compares different screening approaches for icing: three rule-based approaches are compared against a visually-based expert determination that combines multiple sensors, including temperature, humidity, directional vanes, and anemometers. The relative impact on the assessment ranges from 1.09% for the visually-based expert approach to 5.03% for a rule-based standard-deviation approach.
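A rule-based icing screen of the standard-deviation type can be sketched as follows; the thresholds and record fields are illustrative assumptions, not the values compared in the paper:

```python
# Rule-based icing screen (illustrative): flag records where the
# direction-vane standard deviation collapses while the air temperature
# is near or below freezing, the classic frozen-vane signature.
def flag_icing(records, temp_max=1.0, dir_sd_min=0.5):
    flags = []
    for rec in records:
        iced = rec["temp_c"] <= temp_max and rec["dir_sd"] < dir_sd_min
        flags.append(iced)
    return flags

data = [
    {"temp_c": -2.0, "dir_sd": 0.0},   # frozen vane: remove
    {"temp_c": 10.0, "dir_sd": 0.0},   # calm but warm: keep
    {"temp_c": -5.0, "dir_sd": 4.2},   # cold but moving: keep
]
flags = flag_icing(data)
```

Because each flagged record removes a low-velocity point, the thresholds directly shift the estimated power density, which is why the rule-based and expert screens diverge.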

