Fast Multi-Focus Fusion Based on Deep Learning for Early-Stage Embryo Image Enhancement

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 863
Author(s):  
Vidas Raudonis ◽  
Agne Paulauskaite-Taraseviciene ◽  
Kristina Sutiene

Background: Cell detection and counting are of essential importance in evaluating the quality of early-stage embryos. Full automation of this process remains challenging due to differences in cell size and shape and the presence of incomplete, partially overlapping, or fully overlapping cell boundaries. Moreover, the algorithm to be developed should process a large volume of image data of varying quality in a reasonable amount of time. Methods: A multi-focus image fusion approach based on the deep learning U-Net architecture is proposed, which reduces the amount of data by up to 7 times without losing the spectral information required for embryo enhancement in the microscopic image. Results: The experiment includes visual and quantitative analysis, estimating image similarity metrics and processing times, compared against the results achieved by two well-known techniques: the Inverse Laplacian Pyramid Transform and Enhanced Correlation Coefficient Maximization. Conclusion: The image fusion time is substantially improved across different image resolutions, whilst ensuring high quality of the fused image.
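The abstract above does not give the U-Net details, but the underlying multi-focus idea, selecting, per pixel, the content from whichever source image is most in focus, can be illustrated with a classical focus-measure sketch (Laplacian response as sharpness; the function names are my own, not the paper's):

```python
import numpy as np

def sharpness(img):
    """Absolute discrete-Laplacian response as a per-pixel focus measure."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.abs(lap)

def fuse_multifocus(images):
    """Pick, per pixel, the value from the most in-focus source image."""
    stack = np.stack([i.astype(float) for i in images])
    focus = np.stack([sharpness(i) for i in stack])
    best = np.argmax(focus, axis=0)              # index of sharpest source
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

In the paper this selection step is learned by a U-Net rather than hand-crafted; the sketch only shows the fusion principle.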

2011 ◽  
Vol 1 (3) ◽  
Author(s):  
T. Sumathi ◽  
M. Hemalatha

Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the initial inputs. Fusion methods include the discrete wavelet transform, the Laplacian pyramid transform, and the curvelet transform. These methods achieve better spatial and spectral quality in the fused image than purely spatial fusion methods. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic does not extend easily to two or more dimensions: separable wavelets, built by spanning one-dimensional wavelets, offer only limited directivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against the others described above to show that useful information can be extracted from the source and fused images, producing fused images that offer clear, detailed information.
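A minimal sketch of the wavelet-domain fusion baseline the paper compares against, using a hand-rolled one-level Haar transform (the common rule: average the approximation band, keep the larger-magnitude detail coefficients; function names are illustrative):

```python
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (x[0::2] + x[1::2]) / 2.0   # row-wise average
    d = (x[0::2] - x[1::2]) / 2.0   # row-wise difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse_dwt(img1, img2):
    """Average the approximation band; keep larger-magnitude details."""
    c1, c2 = haar2(img1.astype(float)), haar2(img2.astype(float))
    fused = [(c1[0] + c2[0]) / 2.0]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2(*fused)
```

The curvelet approach of the paper replaces the wavelet decomposition with a directional multiscale one; the fusion rule itself is analogous.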


Author(s):  
B. Saichandana ◽  
K. Srinivas ◽  
R. KiranKumar

Hyperspectral remote sensors collect image data across a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image carries a continuous spectrum that can be used to classify objects with great detail and precision. This paper presents a hyperspectral image classification mechanism using a genetic algorithm, with empirical mode decomposition and image fusion applied in the preprocessing stage. 2-D empirical mode decomposition is used to remove noisy components in each band of the hyperspectral data. After filtering, image fusion is performed on the hyperspectral bands to selectively merge the maximum possible features from the source images into a single image. This fused image is classified using the genetic algorithm, with different cluster-validity indices, such as the K-means index (KMI), the Davies-Bouldin index (DBI), and the Xie-Beni index (XBI), used as objective functions. This method increases the classification accuracy of the hyperspectral image.
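Of the objective functions named above, the Davies-Bouldin index is the most common; a small sketch of how it would be computed as a genetic-algorithm fitness (my own function names, plain numpy):

```python
import numpy as np

def davies_bouldin(X, labels, centers):
    """Davies-Bouldin index: lower means more compact, better-separated
    clusters, so a GA would minimize this fitness."""
    k = len(centers)
    # mean intra-cluster distance (scatter) per cluster
    s = np.array([np.mean(np.linalg.norm(X[labels == i] - centers[i], axis=1))
                  for i in range(k)])
    worst = []
    for i in range(k):
        # worst similarity ratio of cluster i against every other cluster
        ratios = [(s[i] + s[j]) / np.linalg.norm(centers[i] - centers[j])
                  for j in range(k) if j != i]
        worst.append(max(ratios))
    return float(np.mean(worst))
```

In the paper's setting, `X` would hold the fused-pixel feature vectors and the GA would evolve candidate cluster centers to minimize this index.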


2010 ◽  
Vol 07 (02) ◽  
pp. 99-107 ◽  
Author(s):  
NEMIR AL-AZZAWI ◽  
WAN AHMED K. WAN ABDULLAH

Medical image fusion has been used to derive useful information from multimodality medical image data. This paper presents a dual-tree complex contourlet transform (DT-CCT) based approach for the fusion of magnetic resonance images (MRI) and computed tomography (CT) images. The objective of fusing an MRI and a CT image of the same organ is to obtain a single image containing as much information as possible about that organ for diagnosis. The limited directional information of the dual-tree complex wavelet transform (DT-CWT) is rectified in the DT-CCT by incorporating directional filter banks (DFB) into the DT-CWT. To improve the fused image quality, we propose a new fusion rule based on principal component analysis (PCA) that depends on the frequency component of the DT-CCT coefficients (contourlet domain). For low-frequency coefficients, the PCA method is adopted; for high-frequency coefficients, the salient features are picked up based on local energy. The final fused image is obtained by directly applying the inverse dual-tree complex contourlet transform (IDT-CCT) to the fused low- and high-frequency coefficients. The DT-CCT produces images with improved contours and textures, while the property of shift invariance is retained. The experimental results showed that the proposed method produces a fused image with extensive features from the multimodal inputs.
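A common form of the PCA fusion rule mentioned for the low-frequency band derives the two blending weights from the dominant eigenvector of the 2x2 covariance of the coefficient sets; a sketch under that assumption (the paper's exact variant may differ):

```python
import numpy as np

def pca_fusion_weights(c1, c2):
    """Blending weights from the dominant eigenvector of the 2x2
    covariance of two low-frequency coefficient sets."""
    cov = np.cov(np.stack([c1.ravel(), c2.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])   # principal direction
    w = v / v.sum()                        # normalize to sum to 1
    return w[0], w[1]

def fuse_lowfreq(c1, c2):
    """Weighted combination of the low-frequency (approximation) bands."""
    w1, w2 = pca_fusion_weights(c1, c2)
    return w1 * c1 + w2 * c2
```

The high-frequency bands would instead be selected by the local-energy rule described in the abstract.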


2013 ◽  
Vol 860-863 ◽  
pp. 2846-2849
Author(s):  
Ming Jing Li ◽  
Yu Bing Dong ◽  
Xiao Li Wang

Image fusion is the process of combining relevant information from two or more images into a single image. The aim of fusion is to extract the information relevant to the task at hand. Depending on the application and the characteristics of the algorithm, image fusion can be used to improve image quality. This paper presents a comparative analysis of image fusion algorithms based on the wavelet transform and the Laplacian pyramid. The principle, operation, steps, and characteristics of each fusion algorithm are summarized, and the advantages and disadvantages of the different algorithms are compared. The fusion effects of the different algorithms are illustrated using MATLAB. Experimental results show that the quality of the fused image is markedly improved.
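For comparison with the wavelet sketch earlier, the Laplacian-pyramid side of the paper's comparison can be sketched with simple box-filter decimation (illustrative, not the paper's MATLAB code):

```python
import numpy as np

def down(img):
    """2x decimation after a 2x2 box average."""
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def up(img):
    """Nearest-neighbour 2x upsampling."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    """Band-pass detail images plus a low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))   # detail at this scale
        cur = small
    pyr.append(cur)                   # residual
    return pyr

def fuse_lp(img1, img2, levels=2):
    """Max-magnitude rule on detail bands, average on the residual."""
    p1, p2 = laplacian_pyramid(img1, levels), laplacian_pyramid(img2, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(p1[:-1], p2[:-1])]
    fused.append((p1[-1] + p2[-1]) / 2.0)
    out = fused[-1]                   # collapse the pyramid back up
    for detail in reversed(fused[:-1]):
        out = up(out) + detail
    return out
```

With this construction the pyramid is exactly invertible, so fusing an image with itself returns the image unchanged.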


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Chao Zhang ◽  
Haojin Hu ◽  
Yonghang Tai ◽  
Lijun Yun ◽  
Jun Zhang

To fuse infrared and visible images in wireless applications, securely extracting and transmitting characteristic information is an important task. The fused image quality depends on the effectiveness of feature extraction and on the transmission of the image pair's characteristics. However, most fusion approaches based on deep learning do not make effective use of the extracted features, which results in missing semantic content in the fused image. In this paper, a novel trustworthy image fusion method is proposed to address these issues, applying convolutional neural networks for feature extraction and blockchain technology to protect sensitive information. The new method reduces the loss of feature information by feeding the output of each convolutional layer in the feature extraction network to the next layer together with the outputs of the preceding layers; and, to ensure similarity between the fused image and the original images, the original input feature maps are used as the input of the reconstruction network. The experimental results show that, compared to other methods, the proposed method achieves better quality and better satisfies human perception.
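The layer-to-layer feature reuse described above is DenseNet-style connectivity: each layer receives the concatenation of the input and all previous outputs. A toy numpy sketch of that wiring (1x1 "convolutions" only, to keep it short; all names are mine, not the paper's):

```python
import numpy as np

def conv1x1(x, w):
    """Toy 1x1 convolution: per-pixel linear map over channels.
    x: (in_ch, H, W), w: (out_ch, in_ch)."""
    return np.einsum('chw,oc->ohw', x, w)

def dense_block(x, weights):
    """Dense connectivity: layer i sees the channel-wise concatenation of
    the original input and every earlier layer's output, so no feature
    map is dropped on the way through the extractor."""
    feats = [x]
    for w in weights:
        inp = np.concatenate(feats, axis=0)       # reuse all earlier features
        feats.append(np.maximum(conv1x1(inp, w), 0.0))  # ReLU
    return np.concatenate(feats, axis=0)
```

In the paper this extractor feeds a reconstruction network; the blockchain component protects the transmitted features and is orthogonal to this wiring.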



Author(s):  
Sunil S S et al.

Diabetic retinopathy (DR) is an eye disorder that affects the human retina due to increased insulin levels in the blood. Early detection and diagnosis of DR are essential for the optimal treatment of diabetic patients. The current research develops methods for identifying different characteristics and differences in colour retinal images using different classifiers. This therapeutic approach draws on data collected from multiple datasets, including DRIDB0, DRIDB1, MESSIDOR, STARE, and HRF. Machine learning, neural network, and deep learning algorithms are compared on metrics such as sensitivity, precision, accuracy, error, specificity, F1-score, Matthews correlation coefficient (MCC), and the kappa coefficient. The deep learning strategy yielded more effective results than the other methods. The system can help ophthalmologists identify the symptoms of diabetes at an early stage, enabling better treatment and an improved quality of life.
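The metrics listed above all follow from a binary confusion matrix; a self-contained sketch of how each would be computed (standard definitions, not code from the paper):

```python
import numpy as np

def binary_metrics(tp, fp, fn, tn):
    """Screening metrics from a binary confusion matrix
    (tp/fp/fn/tn = true/false positives/negatives)."""
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    prec = tp / (tp + fp)                      # precision
    total = tp + fp + fn + tn
    acc = (tp + tn) / total                    # accuracy
    f1 = 2 * prec * sens / (prec + sens)       # harmonic mean
    mcc = ((tp * tn - fp * fn) /
           np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))))
    # Cohen's kappa: agreement beyond what chance would give
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
    kappa = (acc - pe) / (1 - pe)
    return dict(sensitivity=sens, specificity=spec, precision=prec,
                accuracy=acc, f1=f1, mcc=mcc, kappa=kappa)
```

Error is simply `1 - accuracy`; a perfect classifier scores 1.0 on every metric above.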



Author(s):  
Rajesh Dharmaraj ◽  
Christopher Durairaj Daniel Dharmaraj

Image fusion is used to improve the quality of images by combining two images of the same scene obtained by different techniques. The present work deals with the effective extraction of the pixel information from the source images that holds the key to multi-focus image fusion. A purely vicinity-based image matting algorithm, relying on nearby pixel clusters in the input images and their trimap, is presented in this article. The pixel cluster size N plays a significant role in deciding the identity of an unknown pixel. The distance of each unknown pixel from the foreground and background pixel clusters is computed using the minimum quasi-Euclidean distance, and the resulting distance ratio gives the alpha value of each unknown pixel in the image. Finally, the focus regions are blended together to obtain the fused image. On assessing the results visually and objectively, it is concluded that the proposed method extracts the focused pixels and improves fusion quality better than other existing fusion methods.
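The distance-ratio rule for the alpha value can be sketched directly; for brevity this uses plain Euclidean distance in place of the quasi-Euclidean metric the paper specifies (function and variable names are illustrative):

```python
import numpy as np

def alpha_from_clusters(pixel, fg_cluster, bg_cluster):
    """Alpha of an unknown pixel from its minimum distance to the
    foreground and background pixel clusters of the trimap.
    pixel: (C,) colour vector; clusters: (N, C) arrays."""
    d_f = np.min(np.linalg.norm(fg_cluster - pixel, axis=1))
    d_b = np.min(np.linalg.norm(bg_cluster - pixel, axis=1))
    # distance-ratio rule: closer to foreground -> alpha nearer 1
    return d_b / (d_f + d_b)
```

An alpha near 1 assigns the pixel to the in-focus (foreground) region, near 0 to the background; the focus regions are then blended with these alphas.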

