multimodal images
Recently Published Documents


TOTAL DOCUMENTS: 117 (FIVE YEARS: 49)

H-INDEX: 9 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Mayukhmala Jana ◽  
Subhosri Basu ◽  
Arpita Das

Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7043
Author(s):  
Xiaoteng Zhou ◽  
Changli Yu ◽  
Xin Yuan ◽  
Citong Luo

In the field of underwater vision, image matching between the two main sensors (sonar and optical camera) has long been a challenging problem. Because each sensor has an independent imaging mechanism, the resulting images belong to different modalities, and their local features differ significantly, which renders general matching methods designed for optical images ineffective. To make full use of underwater acoustic and optical images and to promote the development of multisensor information fusion (MSIF) technology, this letter proposes applying an image attribute transfer algorithm together with an advanced local feature descriptor to the underwater acousto-optic image matching problem. We test on real and simulated underwater images; experimental results show that the proposed method effectively preprocesses these multimodal images to obtain accurate matching results, thus providing a new solution for the underwater multisensor image matching task.
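The paper's attribute-transfer algorithm and descriptor are not reproduced here; the sketch below is a minimal stand-in for the same pipeline idea, assuming histogram matching (scikit-image) as the modality-gap reducer and SIFT with Lowe's ratio test as the local feature matcher.

```python
import cv2
import numpy as np
from skimage.exposure import match_histograms

def match_acousto_optic(sonar_gray, optical_gray):
    # Stand-in for attribute transfer: pull the optical image's intensity
    # statistics toward the sonar image before feature extraction.
    optical_adapted = match_histograms(optical_gray, sonar_gray).astype(np.uint8)

    # SIFT stands in for the paper's "advanced local feature descriptor".
    sift = cv2.SIFT_create()
    kp_s, des_s = sift.detectAndCompute(sonar_gray, None)
    kp_o, des_o = sift.detectAndCompute(optical_adapted, None)

    # Lowe's ratio test keeps only distinctive correspondences.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_s, des_o, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]
    return kp_s, kp_o, good
```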


2021 ◽  
Vol 11 (8) ◽  
pp. 2071-2079
Author(s):  
Kiranmai Bellam ◽  
N. Krishnaraj ◽  
T. Jayasankar ◽  
N. B. Prakash ◽  
G. R. Hemalakshmi

Multimodal medical imaging is an indispensable requirement in the treatment of various pathologies and accelerates care. Rather than discrete images, a composite image combining complementary features from multimodal images is far more informative for clinical examination, surgical planning, and progress monitoring. In this paper, a deep learning fusion model is proposed for the fusion of multimodal medical images. Based on pyramidal and residual learning units and strengthened with adaptive fusion rules, the proposed model is tested on image pairs from a standard dataset. Its potential for enhanced image examination is shown by fusion studies with deep network images and quantitative output metrics for magnetic resonance imaging and positron emission tomography (MRI/PET) and magnetic resonance imaging and single-photon emission computed tomography (MRI/SPECT). Testing is performed on 20 pairs of MRI/SPECT and MRI/PET images. The proposed fusion model achieves Structural Similarity Index Measure (SSIM) values of 0.9502 and 0.8103 for the MRI/SPECT and MRI/PET image sets, signifying the perceptual visual consistency of the fused images. Similarly, Mutual Information (MI) values of 2.7455 and 2.7776 are obtained for the MRI/SPECT and MRI/PET image sets, indicating the model's ability to carry the information content of the source images into the composite image. Further, the proposed model admits variants that refine the basic model for the fusion of low- and high-resolution medical images of diverse modalities.
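As an illustration of the two reported quality metrics (not of the fusion network itself), the sketch below evaluates a fused image against a source image with SSIM from scikit-image and a mutual-information estimate built from a joint histogram; the function names are our own.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mutual_information(a, b, bins=64):
    # Joint histogram of intensities, normalized to a joint pmf.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def evaluate_fusion(source, fused):
    ssim = structural_similarity(source, fused,
                                 data_range=float(fused.max() - fused.min()))
    return ssim, mutual_information(source, fused)
```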


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Dong Zhao

Due to significant differences in the imaging mechanisms of multimodal images, registration methods struggle to achieve the ideal trade-off between time consumption and matching precision. This paper therefore puts forward a rapid and robust method for multimodal image registration that exploits local edge information. The method is built on the SURF framework and achieves both real-time performance and accuracy. Because the textures of multimodal images are unpredictable, the local edge descriptor is built from the edge histogram of the neighborhood around each keypoint. Moreover, to increase the robustness of the whole algorithm while preserving SURF's speed, a saliency assessment of keypoints and the concept of a self-similar factor are introduced. Experimental results show that the proposed method achieves higher precision and consumes less time than other multimodality registration methods. Its robustness and stability are also demonstrated in the presence of image blurring, rotation, noise, and luminance variations.
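The paper's exact descriptor is not reproduced here; the following is a minimal sketch of the general idea, assuming Sobel gradients and a magnitude-weighted edge-orientation histogram over a square patch around each keypoint (the keypoints could come from any detector, e.g., SIFT, since SURF is non-free in recent OpenCV builds).

```python
import cv2
import numpy as np

def edge_histogram_descriptor(gray, keypoints, patch=16, bins=8):
    # Edge orientation and strength from Sobel gradients.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # fold orientations into [0, pi)

    h, w = gray.shape
    r = patch // 2
    descs = []
    for kp in keypoints:
        x, y = map(int, kp.pt)
        if r <= x < w - r and r <= y < h - r:
            m = mag[y - r:y + r, x - r:x + r].ravel()
            a = ang[y - r:y + r, x - r:x + r].ravel()
            # Magnitude-weighted histogram of edge orientations.
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            descs.append(hist / (np.linalg.norm(hist) + 1e-8))
    return np.float32(descs)
```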


Author(s):  
Sumathi R. ◽  
Venkatesulu Mandadi

The authors designed an automated framework to segment tumors in various image sequences, such as T1, T2, and post-processed multimodal MRI images. A contrast-limited adaptive histogram equalization (CLAHE) method is used to preprocess the images, enhancing the intensity level so that the tumor region is clearly visible. The tumor region is segmented by combining kernel possibilistic c-means clustering with a particle swarm optimization technique, and morphological filters are applied to remove unrelated outlier pixels from the segmented image and thereby detect the tumor accurately. The authors collected image sequences from online resources such as the Harvard brain dataset, BRATS, and RIDER, plus a few clinical datasets. Efficiency is assessed by computing performance metrics including the Jaccard index, MSE, PSNR, sensitivity, specificity, accuracy, and computation time. The proposed approach yields 97.06% segmentation accuracy and 98.08% classification accuracy on multimodal images, with an average computation time of 5 s across the multimodal images.
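The clustering and swarm-optimization stages are beyond a short sketch; below is a hedged illustration of only the preprocessing and post-cleanup steps described, assuming OpenCV's CLAHE implementation and a morphological opening on the binary segmentation mask.

```python
import cv2

def preprocess_mri(gray_slice):
    # CLAHE: enhance local contrast so the tumor region stands out.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray_slice)  # expects a uint8 single-channel image

def remove_outlier_pixels(binary_mask, kernel_size=3):
    # Morphological opening drops small, unrelated foreground specks.
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                  (kernel_size, kernel_size))
    return cv2.morphologyEx(binary_mask, cv2.MORPH_OPEN, k)
```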


Author(s):  
R. Amirtha Varshini et al.

Histogram computation is a crucial task in many image-guided applications such as pattern recognition and image segmentation. Image registration is one of the fundamental preprocessing techniques for images: it overlays multiple images to align them geometrically. In medical image processing, improper registration can adversely affect image analysis and, in turn, the final diagnosis. Accurate image registration is obtained by matching multimodal images, and mutual information is one of the most commonly used similarity measures between such images. Computing this similarity requires the histogram of each individual image as well as the joint histogram between the images. A hardware implementation of histogram computation offers advantages over a software implementation in design flexibility, power consumption, speed, and execution time. This paper proposes a parallel algorithm for histogram computation, together with a memory-based pipelined architecture designed to implement it. The hardware mapping of the algorithm onto an FPGA is proposed and simulated using Xilinx software tools.
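The FPGA design itself is not reproduced here; the sketch below is a software analogue of the parallel idea, assuming the image is split into strips whose partial histograms are accumulated independently and then merged by addition, which is exactly the property a pipelined hardware design exploits.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_histogram(image_u8, bins=256, workers=4):
    # Each worker histograms one strip of pixels independently.
    strips = np.array_split(image_u8.ravel(), workers)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = ex.map(lambda s: np.bincount(s, minlength=bins), strips)
    # Partial histograms merge by simple elementwise addition.
    return sum(partials)
```

A joint histogram for mutual information can be accumulated the same way by binning the paired intensity values of the two images and merging the partial counts.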


2021 ◽  
Vol 13 (7) ◽  
pp. 1380
Author(s):  
Sébastien Dandrifosse ◽  
Alexis Carlier ◽  
Benjamin Dumont ◽  
Benoît Mercatoris

Multimodal image fusion has the potential to enrich the information gathered by multi-sensor plant phenotyping platforms. Fusion of images from multiple sources is, however, hampered by the technical bottleneck of image registration. The aim of this paper is to provide a solution to the registration and fusion of multimodal wheat images in field conditions and at close range. Eight registration methods were tested on nadir wheat images acquired by a pair of red, green and blue (RGB) cameras, a thermal camera, and a multispectral camera array. The most accurate method, relying on a local transformation, aligned the images with an average error of 2 mm but was not reliable for thermal images. More generally, the appropriate registration method and the preprocessing steps needed before fusion (plant mask erosion, pixel intensity averaging) depend on the application. Consequently, the main output of this study is the identification of four registration-fusion strategies: (i) the REAL-TIME strategy, based solely on the cameras' positions; (ii) the FAST strategy, suitable for all types of images tested; and (iii)-(iv) the ACCURATE and HIGHLY ACCURATE strategies, which handle local distortion but cannot deal with images of very different natures. These suggestions are, however, limited to the methods compared in this study. Further research should investigate how recent cutting-edge registration methods perform in the specific case of wheat canopies.
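The study's eight methods are not reproduced here; as a rough illustration of a global (feature-plus-homography) registration in the spirit of a fast, whole-image strategy, the sketch below warps one modality onto another, assuming ORB features and RANSAC (the actual methods compared in the paper may differ).

```python
import cv2
import numpy as np

def register_global(moving_gray, fixed_gray):
    # Detect and describe features in both modalities.
    orb = cv2.ORB_create(2000)
    kp_m, des_m = orb.detectAndCompute(moving_gray, None)
    kp_f, des_f = orb.detectAndCompute(fixed_gray, None)

    # Cross-checked Hamming matching for binary ORB descriptors.
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_m, des_f)
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate a global homography and warp the moving image.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = fixed_gray.shape
    return cv2.warpPerspective(moving_gray, H, (w, h))
```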

