Analysis of Medical Image Resizing Using Bicubic Interpolation Algorithm

2021 ◽  
Vol 14 (1) ◽  
pp. 20
Author(s):  
Bambang Krismono Triwijoyo ◽  
Ahmat Adil

Image interpolation is a basic requirement of many image processing tasks, including medical image processing. It is the technique used to resize an image: each pixel in the new image must be remapped to a location in the old image in order to compute its new value. Many algorithms exist for determining the new pixel value, most of which interpolate in some form between the closest pixels in the old image. In this paper, we use the bicubic interpolation algorithm to resize medical images from the Messidor dataset and evaluate the results with three metrics: Mean Square Error (MSE), Root Mean Squared Error (RMSE), and Peak Signal-to-Noise Ratio (PSNR), comparing them against the bilinear and nearest-neighbor algorithms. The results show that the bicubic algorithm outperforms bilinear and nearest-neighbor interpolation, and that the larger the dimensions to which the image is resized, the higher its similarity to the original image, although the computational complexity also increases.
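As a rough illustration of the comparison described above, the sketch below (not the authors' code) downscales an image, upscales it back with nearest-neighbor, bilinear, and bicubic kernels via OpenCV, and scores each result with MSE, RMSE, and PSNR; the Messidor file name is a placeholder.

```python
# Hedged sketch: compare three interpolation kernels on a placeholder image.
import cv2
import numpy as np

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

original = cv2.imread("messidor_sample.tif")          # assumed Messidor image path
small = cv2.resize(original, None, fx=0.5, fy=0.5,
                   interpolation=cv2.INTER_AREA)      # simulated low-resolution input

kernels = {"nearest": cv2.INTER_NEAREST,
           "bilinear": cv2.INTER_LINEAR,
           "bicubic": cv2.INTER_CUBIC}

for name, flag in kernels.items():
    restored = cv2.resize(small, original.shape[1::-1], interpolation=flag)
    m = mse(original, restored)
    print(f"{name:9s} MSE={m:8.2f} RMSE={np.sqrt(m):6.2f} PSNR={psnr(original, restored):5.2f} dB")
```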

Author(s):  
B.A. Nurul Nadiyya ◽  
Koredianto Usman ◽  
Suci Aulia ◽  
B.C. Erizka

In the medical world, digital medical images are shared, and the confidential data of the patient must be protected from unauthorized access. This study proposes a technique that preserves image confidentiality through image encryption. The approach converts the original image into another form that cannot be interpreted visually, so unauthorized parties cannot see the image's content. This research proposes a method of X-ray image encryption based on Arnold's Cat Map and Bose Chaudhuri Hocquenghem (BCH) codes, which shuffles the coordinates of the original pixels into new coordinates. The Bose Chaudhuri Hocquenghem encoding scheme strengthens Arnold's Cat Map encryption by detecting and correcting bit errors in the image pixel values. The method is evaluated by corrupting the X-ray (rontgen) images with several types of noise at distinct variances. The algorithms are expected to provide decrypted images with high accuracy and greater resistance to attack. Our results show that the system using Bose Chaudhuri Hocquenghem codes achieves a better Peak Signal-to-Noise Ratio (equal to infinity) and Bit Error Rate (equal to 0) at larger variances of each form of noise than the process using Arnold's Cat Map alone. A brute-force attack on Bose Chaudhuri Hocquenghem takes 2.86 × 10⁵⁸ years, while Arnold's Cat Map takes 3.9 × 10¹¹ years, so the Bose Chaudhuri Hocquenghem code is more resistant to brute-force attack than the Arnold's Cat Map method.
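A small illustration, not the paper's implementation, of the Arnold's Cat Map coordinate shuffle on which the encryption is based: each pixel (x, y) of an N × N image moves to ((x + y) mod N, (x + 2y) mod N), and iterating the map scrambles the image reversibly. The BCH error-correction stage is not reproduced here.

```python
# Hedged sketch of the classic Arnold's Cat Map pixel shuffle.
import numpy as np

def arnold_cat_map(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "the classic map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# A tiny 4 x 4 "image"; the map is periodic, so iterating enough times restores
# the original arrangement, which is why the iteration count can act as a key.
tile = np.arange(16).reshape(4, 4)
print(arnold_cat_map(tile, iterations=1))
```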


Author(s):  
Calvin Omind Munna

Currently, there is a growing demand for the data produced and stored in clinical domains. Therefore, to deal effectively with massive data sets, a fusion methodology needs to be analyzed with its algorithmic complexity taken into account. To reduce the volume of image content, and hence the capacity needed to store and communicate data in optimal form, image processing methodology has to be involved. In this research, two compression methodologies, lossy compression and lossless compression, were utilized to compress images while maintaining image quality. In addition, a number of sophisticated approaches to enhance the quality of the fused images were applied. The methodologies were assessed and various fusion findings are presented. Finally, performance parameters were obtained and evaluated against the sophisticated approaches. The Structural Similarity Index Metric (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) were the metrics used on the sample clinical images. Critical analysis of the measured parameters shows higher efficiency compared to numerous image processing methods. This research provides insight into these approaches and enables researchers to choose effective methodologies for a particular application.
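A hedged sketch of the evaluation step described above: scoring a compressed (or fused) image against its reference with MSE, PSNR, and SSIM using scikit-image. The file names are placeholders for the clinical test images.

```python
# Hedged sketch: quality metrics between a reference and a processed image.
import numpy as np
from skimage import io, img_as_float
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

reference = img_as_float(io.imread("reference_scan.png", as_gray=True))   # placeholder file
processed = img_as_float(io.imread("compressed_scan.png", as_gray=True))  # placeholder file

print("MSE :", mean_squared_error(reference, processed))
print("PSNR:", peak_signal_noise_ratio(reference, processed, data_range=1.0))
print("SSIM:", structural_similarity(reference, processed, data_range=1.0))
```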


2018 ◽  
Vol 5 ◽  
pp. 23-33
Author(s):  
Reena Manandhar ◽  
Sanjeeb Prashad Pandey

One of the most important areas of image processing is medical image processing, where image quality has become an important issue. Most medical images are corrupted by visual noise, and echocardiography images are among those most affected. This research therefore aims to denoise echocardiography images with the fractal wavelet transform and to compare its performance with other wavelet-based algorithms such as hard thresholding, soft thresholding, and the Wiener filter. Initially, the image is corrupted by Gaussian noise with varying noise variances and is then denoised using the above-mentioned wavelet-based denoising techniques. Comparison of the results shows that the fractal wavelet transform is better suited to highly degraded echocardiography images, in terms of Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), than the other wavelet-based denoising methods. The work could be extended to denoise echocardiography images corrupted by other types of noise; this research is limited to echocardiography images corrupted with Gaussian noise only.
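A minimal sketch, under assumed parameters, of the wavelet-thresholding baselines the study compares against: Gaussian noise is added to a stand-in echocardiography frame, the image is decomposed with a discrete wavelet transform, the detail coefficients are thresholded (soft or hard), and the image is reconstructed. The fractal wavelet transform itself is not reproduced here.

```python
# Hedged sketch of wavelet soft/hard thresholding with PyWavelets.
import numpy as np
import pywt

def wavelet_denoise(noisy: np.ndarray, wavelet="db4", level=3, mode="soft") -> np.ndarray:
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    # Universal threshold estimated from the finest diagonal detail band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(noisy.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thresh, mode=mode) for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0   # stand-in for an echo frame
noisy = clean + rng.normal(scale=0.2, size=clean.shape)   # Gaussian noise, variance 0.04
restored = wavelet_denoise(noisy, mode="soft")[:128, :128]
print("MSE before:", np.mean((noisy - clean) ** 2), " after:", np.mean((restored - clean) ** 2))
```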


2006 ◽  
Vol 03 (02) ◽  
pp. 139-159 ◽  
Author(s):  
S. E. EL-KHAMY ◽  
M. M. HADHOUD ◽  
M. I. DESSOUKY ◽  
B. M. SALAM ◽  
F. E. ABD EL-SAMIE

In this paper, an adaptive algorithm is suggested for the implementation of polynomial-based image interpolation techniques such as bilinear, bicubic, cubic spline and cubic O-MOMS interpolation. The algorithm is based on minimizing the squared estimation error at each pixel in the interpolated image by adaptively estimating the distance of the pixel to be estimated from its neighbors. The adaptation process at each pixel is performed iteratively to yield the best estimate of that pixel's value. This adaptive interpolation algorithm takes into consideration the mathematical model by which a low-resolution (LR) image is obtained from a high-resolution (HR) image. The adaptive algorithm is compared to traditional polynomial-based interpolation techniques and to warped-distance interpolation techniques. Its performance is also compared to that of algorithms used in commercial interpolation software such as the ACDSee and Photopro programs. Results show that the suggested adaptive algorithm is superior to the traditional techniques in terms of Peak Signal to Noise Ratio (PSNR) and preserves edges better than traditional interpolation techniques. The computational cost of the adaptive algorithm is studied and found to be moderate.
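For context, the sketch below illustrates the non-adaptive baselines referenced above rather than the adaptive algorithm itself: a low-resolution image is simulated from a high-resolution one (blur then decimate), and bilinear and cubic-spline interpolation are compared by PSNR using SciPy.

```python
# Hedged sketch: LR-from-HR model followed by polynomial interpolation baselines.
import numpy as np
from scipy import ndimage

def psnr(ref, est, peak=255.0):
    m = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / m)

xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, 256), np.linspace(0, 4 * np.pi, 256))
hr = 127.5 * (1 + np.sin(xx) * np.cos(yy))                # smooth synthetic HR test image
lr = ndimage.gaussian_filter(hr, sigma=1.0)[::2, ::2]     # blur then decimate by 2

for order, name in [(1, "bilinear"), (3, "cubic spline")]:
    est = ndimage.zoom(lr, 2, order=order)
    print(f"{name:12s} PSNR = {psnr(hr, est):.2f} dB")
```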


Author(s):  
E. Cucchetti ◽  
C. Latry ◽  
G. Blanchet ◽  
J.-M. Delvit ◽  
M. Bruno

Abstract. Over the last decade, the French space agency (CNES) has designed and successfully operated high-resolution satellites such as Pléiades. High-resolution satellites typically acquire panchromatic images with fine spatial resolutions and multispectral images with coarser samplings because of downlink constraints. The multispectral image is reconstructed on the ground using pan-sharpening techniques. However, onboard compression and ground processing affect the quality of the final product. In this paper, we describe our next-generation onboard/on-ground image processing chain for high-resolution satellites, focusing on onboard compression, compression artefact correction, denoising, deconvolution and pan-sharpening. In the first part, we detail our fixed-quality compression approach, which limits compression effects to a fraction of the noise, thus preserving the useful information in an image. This approach optimises the bitrate at the cost of an image size that depends on scene complexity, and it requires pre- and post-processing steps. The noisy HR images obtained after decompression are well suited to non-local denoising algorithms. We show in the second part of this paper that non-local denoising outperforms previous techniques by 15% in terms of root mean-squared error when tested on simulated noiseless references. Deconvolution is also detailed. In the final part of this paper, we put forward an adaptation of this chain to low-cost CMOS Bayer colour matrices. We demonstrate that the concept of our image chain remains valid, provided slight modifications are made (in particular, dedicated transformations of the colour planes and demosaicing). A similar chain is under investigation for future missions.
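A small illustration (not the CNES processing chain) of the non-local denoising step discussed above, using scikit-image's non-local means on a synthetic noisy image; RMSE against the noiseless reference quantifies the gain.

```python
# Hedged sketch: non-local means denoising scored by RMSE against a clean reference.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

reference = img_as_float(data.camera())                  # stand-in for a noiseless HR reference
rng = np.random.default_rng(0)
noisy = reference + rng.normal(scale=0.05, size=reference.shape)

sigma_est = estimate_sigma(noisy)
denoised = denoise_nl_means(noisy, h=1.15 * sigma_est, sigma=sigma_est,
                            patch_size=5, patch_distance=6, fast_mode=True)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print("RMSE noisy   :", rmse(noisy, reference))
print("RMSE denoised:", rmse(denoised, reference))
```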


2019 ◽  
Vol 5 (3) ◽  
pp. 255
Author(s):  
Garno Garno ◽  
Riza Ibnu Adam

The rise in data-theft cases means that message security systems must be improved. One way to secure a message is to embed it in a digital image. This research aims to improve digital image quality in a hidden-message security system. The technique used for message security is steganography. The cover image is converted into pixel bits in the spatial domain; the cover image used is a digital image in .jpg format. The quality and capacity of the digital image are increased by adding and enhancing pixel bits using the Cubic B-Spline interpolation method. The interpolated cover image is then embedded with a message using the least significant bit (LSB) method to obtain the stego image. The embedded messages are .doc, .docx, .pdf, .xls, .rar, .iso and .zip files of various sizes. The tests were implemented with MATLAB version 2017a. The study measured the imperceptibility of the stego image using Peak Signal to Noise Ratio (PSNR), obtaining an average of 29.06 dB for the stego image against the original image and 64.34 dB for the stego image against the interpolated image, and Mean Squared Error (MSE), obtaining an average of 97.54 dB for the interpolated image against the original image, 97.55 dB for the stego image against the original image, and 0.13 dB for the stego image against the interpolated image. The results show that interpolating the cover image with Cubic B-Spline affects the imperceptibility (PSNR) value.
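A minimal sketch, not the authors' MATLAB code, of the idea described above: the cover image is enlarged with cubic B-spline interpolation (SciPy's order-3 spline), the message bits are embedded in the least significant bit of each pixel, and MSE/PSNR between the enlarged cover and the stego image are reported. The cover and payload here are synthetic placeholders.

```python
# Hedged sketch: cubic B-spline enlargement of a cover image followed by LSB embedding.
import numpy as np
from scipy import ndimage

cover = np.random.default_rng(2).integers(0, 256, size=(64, 64)).astype(np.float64)
enlarged = np.clip(np.rint(ndimage.zoom(cover, 2, order=3)), 0, 255).astype(np.uint8)

message = b"patient report"                              # placeholder payload
bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))

stego = enlarged.copy().ravel()
stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits  # overwrite the least significant bits
stego = stego.reshape(enlarged.shape)

recovered = np.packbits(stego.ravel()[: bits.size] & 1).tobytes()
mse = np.mean((enlarged.astype(np.float64) - stego.astype(np.float64)) ** 2)
psnr = float("inf") if mse == 0 else 10 * np.log10(255 ** 2 / mse)
print(recovered, f"MSE={mse:.4f}", f"PSNR={psnr:.2f} dB")
```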


2014 ◽  
Vol 513-517 ◽  
pp. 3744-3749
Author(s):  
Yue Zhou ◽  
Jia Xin Chen

To address problems such as blurred image borders and low efficiency caused by existing interpolation methods, an interslice interpolation method for medical images based on relativity (correlation) is presented in this paper. The algorithm makes good use of voxel relativity and structure relativity, and different methods are adopted to interpolate different points. In addition, error check-out is introduced to detect mismatched points. Experiments show that the proposed algorithm has lower computational complexity and improves image quality, and that the result can be used effectively for 3D reconstruction.
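For orientation only, the sketch below shows the plain grey-level interslice interpolation baseline that correlation-based methods improve upon: an intermediate slice is produced as a weighted average of its two neighbours. The voxel/structure matching and error check-out of the paper are not reproduced.

```python
# Hedged sketch: linear grey-level interpolation between two neighbouring slices.
import numpy as np

def linear_interslice(slice_a: np.ndarray, slice_b: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Return the slice at fractional position t between slice_a (t=0) and slice_b (t=1)."""
    return (1.0 - t) * slice_a.astype(np.float64) + t * slice_b.astype(np.float64)

a = np.zeros((8, 8)); a[2:6, 2:6] = 100.0     # toy "slice" with a bright square
b = np.zeros((8, 8)); b[3:7, 3:7] = 100.0     # the square shifted in the next slice
middle = linear_interslice(a, b)
print(middle[2:7, 2:7])
```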


2017 ◽  
Vol 8 (2) ◽  
Author(s):  
Meirista Wulandari

There are many pattern recognition applications that require an input image of a certain size, and the size affects the recognition result. Resizing an image relies on an interpolation technique, and the quality of the interpolated image depends on the technique used. Texture is the main feature used in image processing and computer vision to classify objects, and one family of methods used to characterize texture is statistical methods, which describe texture by the statistical distribution of image intensity. This research compared 4 interpolation methods (Nearest Neighbor Interpolation, Bilinear Interpolation, Bicubic Interpolation and Nearest Neighbor Value Interpolation) and 6 features on 10 test images. Of the 6 features examined, skewness changes by up to 800%, energy by 90%, entropy by 75%, smoothness by 18%, standard deviation by 10% and mean by 0.9%. Index Terms—Interpolation, Statistical feature, NNI, Bilinear Interpolation, NNV, Bicubic Interpolation
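A sketch of the six statistical texture descriptors compared above (mean, standard deviation, smoothness, skewness, energy/uniformity, entropy), computed from the grey-level histogram; the exact normalisation used in the paper may differ.

```python
# Hedged sketch: histogram-based statistical texture features.
import numpy as np

def texture_features(img: np.ndarray, levels: int = 256) -> dict:
    p = np.bincount(img.ravel(), minlength=levels).astype(np.float64)
    p /= p.sum()                                  # normalised grey-level histogram
    z = np.arange(levels, dtype=np.float64)
    mean = np.sum(z * p)
    var = np.sum((z - mean) ** 2 * p)
    std = np.sqrt(var)
    return {
        "mean": mean,
        "std": std,
        "smoothness": 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2),  # variance scaled to [0, 1]
        "skewness": np.sum((z - mean) ** 3 * p) / (std ** 3 + 1e-12),
        "energy": np.sum(p ** 2),                 # uniformity
        "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),
    }

img = np.random.default_rng(3).integers(0, 256, size=(128, 128)).astype(np.uint8)
print(texture_features(img))
```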


2010 ◽  
Vol 3 (1) ◽  
pp. 81 ◽  
Author(s):  
M. A. Yousuf ◽  
M. N. Nobi

In medical image processing, medical images are corrupted by different types of noise. It is very important to obtain precise images to facilitate accurate observations for the given application, and removing noise from medical images is now a very challenging issue in the field. Most well-known noise reduction methods, which are usually based on the local statistics of a medical image, are not efficient for medical image noise reduction. This paper presents an efficient and simple method for noise reduction in medical images. In the proposed method, the median filter is modified by adding more features. Experimental results are compared with three other image filtering algorithms. The quality of the output images is measured by the statistical quality measures peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR) and root mean square error (RMSE). Experimental results on magnetic resonance (MR) and ultrasound images demonstrate that the proposed algorithm is comparable to popular image smoothing algorithms. Keywords: Magnetic resonance image; Ultrasound image; PSNR; SNR; RMSE.
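A hedged sketch of the evaluation described above, using SciPy's standard median filter as a stand-in for the proposed modified filter and scoring the result with RMSE, SNR, and PSNR against a clean synthetic reference.

```python
# Hedged sketch: median filtering of impulse noise, scored with RMSE, SNR and PSNR.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
clean = ndimage.gaussian_filter(rng.random((128, 128)) * 255.0, sigma=3)  # smooth stand-in for an MR slice
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.05                    # 5% salt-and-pepper corruption
noisy[mask] = rng.choice([0.0, 255.0], size=mask.sum())

filtered = ndimage.median_filter(noisy, size=3)

def scores(ref, est, peak=255.0):
    err = ref - est
    rmse = np.sqrt(np.mean(err ** 2))
    snr = 10 * np.log10(np.mean(ref ** 2) / np.mean(err ** 2))
    psnr = 10 * np.log10(peak ** 2 / np.mean(err ** 2))
    return rmse, snr, psnr

print("noisy    RMSE/SNR/PSNR:", scores(clean, noisy))
print("filtered RMSE/SNR/PSNR:", scores(clean, filtered))
```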


2018 ◽  
Vol 16 (04) ◽  
pp. 1850031 ◽  
Author(s):  
Panchi Li ◽  
Xiande Liu

Image scaling is a basic operation widely used in classical image processing, with methods including nearest-neighbor interpolation, bilinear interpolation, and bicubic interpolation. In quantum image processing (QIP), research on image scaling has focused on nearest-neighbor interpolation; related research on bilinear interpolation is very rare, and bicubic interpolation has not been reported yet. In this study, a new method based on the quantum Fourier transform (QFT) is designed for bilinear interpolation of images. First, some basic functional modules are constructed, in which the QFT-based method is adopted for the two core modules (addition and multiplication); these modules are then used to design quantum circuits for bilinear interpolation of images, including scaling up and scaling down. Finally, the complexity of the scaling circuits is analysed in terms of elementary gates. Simulation results show that the image scaled using bilinear interpolation is clearer than that scaled using nearest-neighbor interpolation.
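The quantum circuits above implement the classical bilinear rule; for comparison, a small classical reference implementation of that rule (not of the quantum circuits) is sketched here: each output pixel is a weighted average of its four nearest neighbours in the input.

```python
# Hedged sketch: classical bilinear scaling as a reference for the quantum circuits.
import numpy as np

def bilinear_scale(img: np.ndarray, factor: float) -> np.ndarray:
    h, w = img.shape
    new_h, new_w = int(round(h * factor)), int(round(w * factor))
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    img = img.astype(np.float64)
    top = (1 - wx) * img[np.ix_(y0, x0)] + wx * img[np.ix_(y0, x1)]
    bottom = (1 - wx) * img[np.ix_(y1, x0)] + wx * img[np.ix_(y1, x1)]
    return (1 - wy) * top + wy * bottom

tile = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bilinear_scale(tile, 2.0))
```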

