Trustworthy Image Fusion with Deep Learning for Wireless Applications

2021, Vol. 2021, pp. 1-9
Author(s): Chao Zhang, Haojin Hu, Yonghang Tai, Lijun Yun, Jun Zhang

To fuse infrared and visible images in wireless applications, securely extracting and transmitting the characteristic information is an important task: the quality of the fused image depends on how effectively the features of the image pair are extracted and transmitted. However, most deep-learning-based fusion approaches do not make effective use of these features, which results in missing semantic content in the fused image. This paper proposes a novel trustworthy image fusion method that addresses these issues by applying convolutional neural networks for feature extraction and blockchain technology to protect sensitive information. The method reduces the loss of feature information by feeding the output of each convolutional layer of the feature extraction network to the next layer together with the outputs of the preceding layers, and, to preserve similarity between the fused image and the original images, the feature maps of the original inputs are also fed into the reconstruction network. Experimental results show that, compared with other methods, the proposed method achieves better quality and better satisfies human perception.
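The dense feed-forward of per-layer features described above can be pictured with a short sketch. The following is a minimal illustration, not the authors' network: each convolution receives the concatenation of all earlier outputs, so low-level features are carried forward instead of being lost; channel sizes and layer counts are assumptions.

```python
# Minimal sketch (not the authors' code): a densely connected feature extractor
# in which each convolution receives the concatenated outputs of all earlier
# layers, so earlier features are reused rather than discarded.
import torch
import torch.nn as nn

class DenseEncoder(nn.Module):
    def __init__(self, in_channels=1, growth=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            channels += growth  # the next layer also sees all earlier outputs

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # dense connection
            features.append(out)
        return torch.cat(features, dim=1)            # all feature maps, for the fusion/reconstruction stage

# Example: extract features from an infrared/visible pair before fusion.
ir = torch.randn(1, 1, 128, 128)
vis = torch.randn(1, 1, 128, 128)
encoder = DenseEncoder()
ir_feat, vis_feat = encoder(ir), encoder(vis)
```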

2021, Vol. 51 (2)
Author(s): Yingchun Wu, Xing Cheng, Jie Liang, Anhong Wang, Xianling Zhao

Traditional light-field all-in-focus image fusion algorithms are based on the digital refocusing technique: multi-focus images converted from a single light field image are used to calculate the all-in-focus image, and the light field's spatial information is used for sharpness evaluation. Analyzing the 4D light field from another perspective, this paper presents an all-in-focus image fusion algorithm based on angular information. In the proposed method, the 4D light field data are fused directly, and a macro-pixel energy difference function based on angular information is established for sharpness evaluation. The fused 4D data are then guided by the dimension-increased central sub-aperture image to obtain refined 4D data. Finally, the all-in-focus image is calculated by integrating the refined 4D light field data. Experimental results show that the fused images produced by the proposed method have higher visual quality, and quantitative evaluation further demonstrates the performance of the algorithm: with the light-field angular information, both the image-feature-based index and the human-perception-inspired index of the fused image are improved.
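The abstract does not give the exact form of the macro-pixel energy difference function, so the sketch below only illustrates the general idea under an assumed formulation: for a 4D light field L[u, v, y, x], a scene point in focus is seen identically by all angular samples of its macro-pixel, so a small angular energy difference indicates high sharpness, and integrating over the angular dimensions yields a 2D image.

```python
# Illustrative sketch only (the paper's exact energy difference function is not
# reproduced): score per-pixel sharpness of a 4D light field L[u, v, y, x] from
# the spread of its angular samples inside each macro-pixel.
import numpy as np

def angular_sharpness(lf):
    """lf: 4D light field with shape (U, V, Y, X); returns a (Y, X) sharpness map."""
    mean_view = lf.mean(axis=(0, 1))                         # average over the angular samples
    energy_diff = ((lf - mean_view) ** 2).sum(axis=(0, 1))   # angular energy difference per pixel
    return 1.0 / (1.0 + energy_diff)                         # higher value = sharper (in focus)

# Example with a random light field of 5x5 views, each 64x64 pixels.
lf = np.random.rand(5, 5, 64, 64)
sharpness = angular_sharpness(lf)
integrated_image = lf.mean(axis=(0, 1))  # integrating over the angular dimensions gives a 2D image,
                                         # as in the paper's final step applied to the refined 4D data
```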


Sensors, 2021, Vol. 21 (3), pp. 863
Author(s): Vidas Raudonis, Agne Paulauskaite-Taraseviciene, Kristina Sutiene

Background: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains challenging due to differences in cell size and shape, incomplete cell boundaries, and partially or fully overlapping cells. Moreover, the algorithm must process a large number of images of varying quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep-learning U-Net architecture is proposed in the paper, which reduces the amount of data by up to a factor of seven without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times and comparing them with the results of two well-known techniques, the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: The image fusion time is substantially improved for different image resolutions, whilst the high quality of the fused image is ensured.
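A minimal sketch of the general idea, not the authors' architecture: a tiny U-Net-style network predicts a per-pixel focus score for each slice of a multi-focus stack, and the fused image is the per-pixel weighted sum of the slices. Channel counts and the stack size are illustrative assumptions.

```python
# Sketch of focus-map-based multi-focus fusion with a tiny U-Net-style network.
import torch
import torch.nn as nn

class TinyFocusUNet(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = nn.Sequential(nn.Conv2d(base * 3, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, 1, 1))   # one focus score per pixel

    def forward(self, x):
        e = self.enc(x)
        m = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([e, m], dim=1))         # skip connection, U-Net style

def fuse_stack(stack, net):
    """stack: (N, 1, H, W) multi-focus slices of the same field of view."""
    scores = torch.cat([net(s.unsqueeze(0)) for s in stack], dim=0)  # (N, 1, H, W)
    weights = torch.softmax(scores, dim=0)                           # per-pixel focus weights
    return (weights * stack).sum(dim=0)                              # fused (1, H, W) image

net = TinyFocusUNet()
stack = torch.rand(7, 1, 64, 64)   # e.g., seven focal planes of an embryo image
fused = fuse_stack(stack, net)
```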


2021, Vol. 2083 (4), pp. 042007
Author(s): Xiaowen Liu, Juncheng Lei

Abstract Image recognition technology mainly comprises image feature extraction and classification. Feature extraction is the key step, as it determines recognition performance. Deep learning builds a hierarchical model structure, loosely analogous to the human brain, and extracts features from the data layer by layer; applying deep learning to image recognition can further improve recognition accuracy. Based on the idea of clustering, this article establishes a multi-mixture Gaussian model for engineering image information in the RGB color space through offline learning with the expectation-maximization algorithm, obtaining a multi-cluster representation of the engineering image information. A sparse Gaussian machine-learning model is then applied in the YCrCb color space to quickly learn the distribution of engineering images online, and an engineering image recognizer based on multi-color-space information is designed.
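A hedged sketch of the offline step described above: fitting a Gaussian mixture model to RGB pixel values with the EM algorithm (here via scikit-learn's GaussianMixture) to obtain a multi-cluster representation of an engineering image. The number of mixture components is an assumption; the online sparse Gaussian model in YCrCb space is not reproduced.

```python
# Fit a Gaussian mixture to RGB pixels with EM to get a cluster representation.
import numpy as np
from sklearn.mixture import GaussianMixture

def rgb_mixture(image, n_components=5):
    """image: (H, W, 3) RGB array; returns the fitted GMM and a per-pixel label map."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          max_iter=100, random_state=0)
    labels = gmm.fit_predict(pixels)                 # EM fitting plus hard cluster assignment
    return gmm, labels.reshape(image.shape[:2])

# Example with a synthetic image; in practice this runs offline on engineering images.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
gmm, label_map = rgb_mixture(img)
print(gmm.means_)   # RGB cluster centers of the mixture components
```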


Sensors, 2021, Vol. 21 (22), pp. 7467
Author(s): Shih-Lin Lin

Rolling bearings are important components of rotating machinery and equipment. This research proposes variational mode decomposition (VMD)-DenseNet to diagnose bearing faults. The key feature of the approach is that the vibration signal is converted into an image by analyzing its Hilbert spectrum through VMD; healthy and faulty states show different characteristics in the image, so there is no need to hand-select features. The image is then classified and predicted with the lightweight DenseNet network. DenseNet is used to build the motor fault diagnosis model; its structure is simple and its calculation speed is fast. Using DenseNet for image feature learning allows feature extraction on each block of the image, giving full play to the advantages of deep learning and obtaining accurate results. The method is verified on data from the time-varying bearing experimental device at the University of Ottawa. Through the four stages of signal acquisition, feature extraction, fault identification, and prediction, an intelligent mechanical fault diagnosis system is established to determine the state of the bearing. The experimental results show that the method can accurately identify four common motor faults, with a VMD-DenseNet prediction accuracy of 92%. It provides a more effective method for bearing fault diagnosis and has a wide range of application prospects in fault diagnosis engineering; in the future it could enable timely online diagnosis.
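A sketch of the image-construction and classification steps under stated assumptions: the VMD modes are taken as given (a VMD implementation is assumed to be available), a Hilbert-spectrum-style image is built from the instantaneous frequency and amplitude obtained with scipy's Hilbert transform, and a torchvision DenseNet classifies the four fault classes reported in the abstract. This illustrates the pipeline, not the author's code.

```python
# Build a Hilbert-spectrum-style image from (assumed) VMD modes and classify it.
import numpy as np
from scipy.signal import hilbert
import torch
import torchvision

def hilbert_spectrum(modes, fs, img_size=224, fmax=None):
    """modes: (K, N) array of VMD modes; returns an (img_size, img_size) spectrum image."""
    n = modes.shape[1]
    fmax = fmax or fs / 2
    img = np.zeros((img_size, img_size))
    for mode in modes:
        analytic = hilbert(mode)
        amp = np.abs(analytic)[:-1]
        inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
        f_idx = np.clip((inst_freq / fmax * (img_size - 1)).astype(int), 0, img_size - 1)
        t_idx = np.clip((np.arange(n - 1) / n * img_size).astype(int), 0, img_size - 1)
        np.add.at(img, (img_size - 1 - f_idx, t_idx), amp)   # accumulate energy per (freq, time) cell
    return img / (img.max() + 1e-12)

# Classify the spectrum image with a DenseNet (four fault classes assumed).
modes = np.random.randn(4, 4096)                 # stand-in for VMD output of a vibration signal
spec = hilbert_spectrum(modes, fs=12000)
x = torch.tensor(spec, dtype=torch.float32).repeat(3, 1, 1).unsqueeze(0)  # 3-channel input
model = torchvision.models.densenet121(num_classes=4)
logits = model(x)
```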


2021, pp. 1-24
Author(s): F. Sangeetha Francelin Vinnarasi, Jesline Daniel, J.T. Anita Rose, R. Pugalenthi

Multi-modal image fusion techniques aid medical experts in disease diagnosis by providing adequate complementary information from multi-modal medical images, enhancing the effectiveness of medical disorder analysis and the classification of results. This study proposes a novel deep-learning technique for the fusion of multi-modal medical images. The modified 2D Adaptive Bilateral Filter (M-2D-ABF) algorithm is used in pre-processing to filter various types of noise. Contrast and brightness are improved by the proposed Energy-based CLAHE algorithm, which preserves the high-energy regions of the multi-modal images. Images from two different modalities are first registered using mutual information, and the registered images are then fused into a single image. In the proposed scheme, images are fused using the Siamese Neural Network and Entropy (SNNE)-based image fusion algorithm: the medical images are fused using a Siamese convolutional neural network structure together with the entropy of the images, with fusion performed on the basis of the SoftMax-layer score and the image entropy. The fused image is segmented using the Fast Fuzzy C-Means Clustering algorithm (FFCMC) and Otsu thresholding. Finally, various features are extracted from the segmented regions, and classification is performed with a Logistic Regression classifier. Evaluation on a publicly available benchmark dataset with various pairs of multi-modal medical images shows that the proposed fusion and classification techniques are competitive with existing state-of-the-art techniques reported in the literature.
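A minimal sketch of the entropy side of the fusion rule described above, with the Siamese-network score omitted: compute the Shannon entropy of each registered modality and fuse with entropy-proportional weights. This illustrates the idea only and is not the paper's full SNNE algorithm.

```python
# Entropy-weighted fusion of two registered modalities (illustration only).
import numpy as np

def shannon_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def entropy_weighted_fusion(img_a, img_b):
    """img_a, img_b: registered images in [0, 1] with identical shape."""
    e_a, e_b = shannon_entropy(img_a), shannon_entropy(img_b)
    w_a = e_a / (e_a + e_b)                 # more informative modality gets more weight
    return w_a * img_a + (1.0 - w_a) * img_b

# Example with synthetic MRI/CT-like inputs.
mri = np.random.rand(128, 128)
ct = np.random.rand(128, 128)
fused = entropy_weighted_fusion(mri, ct)
```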


2021
Author(s): Baori Zhang, Yonghua Shi, Yanxin Cui, Zishun Wang, Xiyin Chen

Abstract The high dynamic range encountered in arc welding with high energy density challenges most industrial cameras, causing badly exposed pixels in the captured images and hindering feature detection in the internal weld pool. This paper proposes a novel monitoring method called adaptive image fusion, which increases the amount of information contained in the welding image and can be realized on common, low-cost industrial cameras. It combines original images captured in rapid succession by the camera into one fused image, with the settings of these captures based on real-time analysis of the actual scene irradiance during welding. Experiments are carried out to determine the operating window of the adaptive image fusion method, providing rules for obtaining a fused image with as much information as possible. Comparison of imaging with and without the proposed method shows that the fused image has a wider dynamic range and includes more useful features of the weld pool. The improvement is further verified by extracting both the internal and external features of the weld pool within the same fused image produced by the proposed method. The results show that the proposed method can adaptively expand the dynamic range of a low-cost visual monitoring system, which benefits feature extraction from the internal weld pool.
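A sketch of the underlying idea using standard Mertens exposure fusion from OpenCV as a stand-in for the paper's adaptive scheme: several frames captured at different exposures are merged into one image with a wider effective dynamic range. The paper's contribution, choosing the capture settings from real-time irradiance analysis, is not reproduced here.

```python
# Merge differently exposed frames into one wider-dynamic-range image.
import cv2
import numpy as np

def fuse_exposures(frames):
    """frames: list of uint8 BGR images of the same scene at different exposures."""
    merge = cv2.createMergeMertens()
    fused = merge.process(frames)                    # float32 result, roughly in [0, 1]
    return np.clip(fused * 255, 0, 255).astype(np.uint8)

# Example with synthetic under-, mid-, and over-exposed frames of a weld pool scene.
base = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
frames = [np.clip(base.astype(np.int32) + d, 0, 255).astype(np.uint8) for d in (-80, 0, 80)]
hdr_like = fuse_exposures(frames)
```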


2022, Vol. 2022, pp. 1-12
Author(s): Chuanbao Niu, Mingzhu Zhang

This paper presents an in-depth study and analysis of image feature extraction techniques for ancient ceramic identification using partial differential equation algorithms. Image features of ancient ceramics are closely related to the specific raw materials and process technology used, and complete acquisition of these features is a prerequisite for identifying ancient ceramics from their images; however, the traditional region-growing extraction method is closely tied to the background pixels and does not generalize. This paper proposes a deep-learning-based extraction method, using Eased as the deep-learning support platform, to extract and validate 5834 images of 272 types of ancient ceramics from kilns, celadon, and Yue kilns after manual labelling and training; the results show an average complete extraction rate higher than 99%. The deep-learning method is summarized and compared with the traditional region-growing extraction method, and the results show that it becomes more robust as the amount of training data increases and that it generalizes, providing a new way to achieve complete image feature extraction for ancient ceramics. The core idea of the finite difference method is to approximate the partial derivative of a function with respect to a variable by the ratio of the difference between the function values at two adjacent points to the distance between those points, turning a differentiation problem into a differencing problem. Recognition of ancient ceramic image features is realized through the extraction of overall image features, the extraction and recognition of vessel-type features, the quantitative recognition of ornamentation features via multidimensional feature fusion, and a deep-learning-based classification and recognition method for inscription images; a three-layer B/S-architecture web application system and a cross-platform system language serve as the architectural support, together with database services, deep-learning encapsulation, and digital image processing. The implementation relies on database services, deep-learning encapsulation, digital image processing, and third-party invocation, and a service-layer fusion and relearning mechanism is proposed to build a preliminary intelligent recognition system for ancient ceramic vessel-type and ornamentation image features. The validation tests meet expectations and verify the effectiveness of the vessel-type and ornamentation image feature recognition system.
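A worked example of the finite-difference idea stated above: the partial derivative is approximated by the ratio of the difference between function values at two neighbouring points to the distance between them. The function and step size below are arbitrary choices for illustration.

```python
# Forward-difference approximation of a partial derivative.
import numpy as np

def forward_difference(f, x, y, h=1e-5):
    """Approximate df/dx at (x, y) for a function f(x, y)."""
    return (f(x + h, y) - f(x, y)) / h

f = lambda x, y: x**2 * y                  # example function
print(forward_difference(f, 2.0, 3.0))     # approx. 12, since df/dx = 2*x*y
```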


Computers, 2020, Vol. 9 (4), pp. 98
Author(s): Nishant Kumar, Stefan Gumhold

Image fusion merges two or more images to construct a single, more informative fused image. Recently, unsupervised-learning-based convolutional neural networks (CNNs) have been used for different image fusion tasks such as medical image fusion, infrared-visible image fusion for autonomous driving, and multi-focus and multi-exposure image fusion for satellite imagery. However, it is challenging to analyze the reliability of these CNNs for image fusion since no ground truth is available, which has led to a wide variety of model architectures and optimization functions yielding quite different fusion results. Additionally, because such neural networks are highly opaque, it is difficult to explain the internal mechanics behind their fusion results. To overcome these challenges, we present a novel real-time visualization tool, named FuseVis, with which the end user can compute per-pixel saliency maps that examine the influence of the input image pixels on each pixel of the fused image. We trained several image fusion CNNs on medical image pairs and then, using the FuseVis tool, performed case studies on a specific clinical application by interpreting the saliency maps of each fusion method. We specifically visualized the relative influence of each input image on the predictions of the fused image and showed that some of the evaluated fusion methods are better suited to the clinical application. To the best of our knowledge, there is currently no other approach for the visual analysis of neural networks for image fusion, so this work opens a new research direction for improving the interpretability of deep fusion networks. The FuseVis tool can also be adapted to other deep-neural-network-based image processing applications to make them interpretable.
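A minimal sketch of the per-pixel saliency idea behind FuseVis, not the tool's implementation: pick one pixel of the fused output and backpropagate to both inputs to see how strongly each input pixel influences it. The fusion network here is a trivial placeholder.

```python
# Per-pixel saliency via autograd: gradient of one fused pixel w.r.t. both inputs.
import torch
import torch.nn as nn

fuse_net = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1))        # placeholder fusion CNN

img_a = torch.rand(1, 1, 64, 64, requires_grad=True)           # e.g., MRI
img_b = torch.rand(1, 1, 64, 64, requires_grad=True)           # e.g., CT

fused = fuse_net(torch.cat([img_a, img_b], dim=1))
y, x = 32, 32                                                   # fused pixel to explain
fused[0, 0, y, x].backward()

saliency_a = img_a.grad.abs()[0, 0]   # influence of each pixel of input A on fused(y, x)
saliency_b = img_b.grad.abs()[0, 0]   # influence of each pixel of input B on fused(y, x)
```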


Sensors, 2021, Vol. 21 (18), pp. 6065
Author(s): Shih-Lin Lin

Motor failure is one of the biggest problems in the safe and reliable operation of large mechanical equipment such as wind power equipment, electric vehicles, and computer numerical control machines, and fault diagnosis is a method to ensure such safe operation. This research proposes an automatic fault diagnosis system that combines variational mode decomposition (VMD) and the 101-layer residual neural network (ResNet101). The method unifies the pre-analysis, feature extraction, and health-state recognition of motor fault signals under one framework to realize end-to-end intelligent fault diagnosis. The performance of three models is compared on a data set released by the Federal University of Rio de Janeiro (UFRJ). VMD is a non-recursive, adaptive signal decomposition method suitable for processing the vibration signals of motor equipment under variable working conditions; applied to bearing fault diagnosis, it extracts high-dimensional fault features. Deep learning shows a clear advantage in fault diagnosis thanks to its powerful feature extraction capability. ResNet101 is used to build the motor fault diagnosis model: using ResNet101 for image feature learning extracts features from each block of the image, giving full play to the advantages of deep learning and obtaining accurate results. Through the three stages of signal acquisition, feature extraction, and fault identification and prediction, an intelligent mechanical fault diagnosis system is established to identify the healthy or faulty state of a motor. The experimental results show that this method can accurately identify six common motor faults, with a prediction accuracy of 94%. This work thus provides a more effective method for motor fault diagnosis with a wide range of application prospects in fault diagnosis engineering.
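A sketch of the classification stage under stated assumptions: a torchvision ResNet101 with its final fully connected layer replaced for six motor fault classes, taking time-frequency images derived from the VMD step as input. The VMD preprocessing and the UFRJ data handling are not reproduced.

```python
# ResNet101 adapted to six fault classes; inputs are assumed time-frequency images.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet101()               # optionally load pretrained weights for transfer learning
model.fc = nn.Linear(model.fc.in_features, 6)        # six common motor fault states

# A batch of 3-channel time-frequency images (e.g., rendered VMD/Hilbert spectra).
x = torch.rand(4, 3, 224, 224)
logits = model(x)                                    # (4, 6) class scores
pred = logits.argmax(dim=1)                          # predicted fault class per sample
```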

