Perona-Malik Model with Diffusion Coefficient Depending on Fractional Gradient via Caputo-Fabrizio Derivative

2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Gustavo Asumu Mboro Nchama ◽  
Angela Leon Mecias ◽  
Mariano Rodriguez Ricard

The Perona-Malik (PM) model is used successfully in image processing to eliminate noise while preserving edges; however, it has a major drawback: it tends to make the image look blocky. This work proposes to modify the PM model by introducing the Caputo-Fabrizio fractional gradient inside the diffusivity function. Experiments with natural images show that our model can efficiently suppress the blocky effect. Our model also performs well in terms of visual quality, yielding a high peak signal-to-noise ratio (PSNR) and lower values of mean absolute error (MAE) and mean square error (MSE).
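As a point of reference, here is a minimal numpy sketch of the classical explicit Perona-Malik scheme. It is not the proposed model: the paper feeds a Caputo-Fabrizio fractional gradient to the diffusivity, while this sketch uses the ordinary finite-difference gradient; the function name and parameter values are illustrative.

```python
import numpy as np

def pm_diffusion(u, n_iter=30, kappa=20.0, dt=0.15):
    """Classical explicit Perona-Malik diffusion on a 2-D grayscale image.

    Note: the paper replaces the ordinary gradient fed to the
    diffusivity g(|grad u|) with a Caputo-Fabrizio fractional gradient;
    here the plain finite-difference gradient is used for illustration.
    Boundaries are handled periodically via np.roll for brevity.
    """
    u = u.astype(np.float64).copy()
    for _ in range(n_iter):
        # forward differences in the four grid directions
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        # PM diffusivity g(s) = exp(-(s / kappa)^2), applied per direction
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # explicit update step
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```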

2021 ◽  
pp. 198-206
Author(s):  
Sami Hasan ◽  
Shereen S. Jumaa

The main goals of edge detection in image processing are to reduce the number of features and to find edges based on image content. In this paper, comparisons are made between classical methods (Canny, Sobel, Roberts, and Prewitt) and a fuzzy logic technique for detecting edges in image samples with different contents and patterns. These methods are tested on images corrupted with different types of noise, such as Gaussian and salt-and-pepper noise. The performance indices are mean square error (MSE) and peak signal-to-noise ratio (PSNR). Finally, experimental results show that the proposed fuzzy rules and membership functions provide better results for both noisy and noise-free images.
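A minimal sketch of the evaluation setup described here, assuming 8-bit grayscale numpy images: a Sobel edge map stands in for the classical detectors (the fuzzy-rule detector is not reproduced), salt-and-pepper corruption is applied, and MSE/PSNR serve as the performance indices. Helper names and the noise amount are illustrative.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(img):
    """Gradient-magnitude edge map via Sobel operators (a classical method)."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy)

def add_salt_pepper(img, amount=0.05, seed=None):
    """Corrupt an 8-bit image with salt-and-pepper noise."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0
    noisy[mask > 1 - amount / 2] = 255
    return noisy

def mse(ref, test):
    return np.mean((ref.astype(float) - test.astype(float)) ** 2)

def psnr(ref, test, peak=255.0):
    e = mse(ref, test)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```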


Author(s):  
Samsul Ariffin Abdul Karim ◽  
Nur Atiqah Binti Zulkifli ◽  
A'fza Binti Shafie ◽  
Muhammad Sarfraz ◽  
Abdul Ghaffar ◽  
...  

This chapter deals with image processing in the specific area of image zooming via interpolation. The authors employ a bivariate rational cubic ball function defined on rectangular meshes. This bivariate spline has six free parameters that can be used to alter the shape of the surface without the need to change the data. It can also be used to refine the resolution of the image. To handle image zooming, they propose an efficient algorithm that includes image downscaling and upscaling procedures. To measure the effectiveness of the proposed scheme, they compare the performance based on the values of peak signal-to-noise ratio (PSNR) and root mean square error (RMSE). Detailed comparisons are made with existing schemes such as nearest neighbour (NN), bilinear (BL), bicubic (BC), bicubic Hermite (BH), and the Karim and Saaban (KS) scheme. In all numerical results, the proposed scheme gives higher PSNR and smaller RMSE values for all tested images.
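The rational cubic ball interpolant is not available in standard libraries, so the sketch below uses scipy's spline-based zoom (order 0 = nearest, 1 = bilinear, 3 = cubic) only to illustrate the downscale-then-upscale evaluation loop scored by PSNR and RMSE; the function name and the 8-bit data range are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom
from skimage.metrics import peak_signal_noise_ratio

def zoom_quality(img, factor=2, order=3):
    """Downscale then upscale `img` by `factor` (spline order: 0 nearest,
    1 bilinear, 3 cubic) and score the result against the original.
    Assumes the image dimensions are divisible by `factor`."""
    img = np.asarray(img, dtype=float)
    small = zoom(img, 1.0 / factor, order=order)
    restored = zoom(small, factor, order=order)[: img.shape[0], : img.shape[1]]
    rmse = np.sqrt(np.mean((img - restored) ** 2))
    psnr = peak_signal_noise_ratio(img, restored, data_range=255)
    return psnr, rmse
```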


Author(s):  
Chanintorn Jittawiriyanukoon

Bulk noise corrupts contributed data in communication networks with a tremendously low signal-to-noise ratio. A method for correcting massive noise in individual records through information theory is widely discussed. One practical application of this approach, bulk noise estimation, is analyzed using intelligent automation and machine learning tools, addressing the cases where bulk noise is present or absent. A regression-based model is employed for the investigation and experiment, and estimation for the practical case of bulk noisy datasets is proposed. The proposed method applies a slice-and-dice technique to partition the body of datasets into smaller portions so that the estimation can be carried out. The average error, correlation, absolute error, and mean square error are computed to validate the estimation. Results from massive online analysis (MOA) are verified against data collected in the following period. In many cases, prediction on bulk noisy data through MOA simulation reveals that random imputation minimizes the average error.
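A rough numpy sketch of the slice-and-dice idea, assuming 1-D arrays x and y of equal length with enough points per chunk: the data are partitioned, a simple per-chunk linear fit stands in for the paper's regression-based model, and average error, correlation, MAE, and MSE are reported. All names are illustrative, and this is not the MOA workflow itself.

```python
import numpy as np

def slice_and_dice_metrics(x, y, n_chunks=10):
    """Partition a noisy dataset into chunks, fit a linear model per chunk,
    and report average error, correlation, MAE and MSE over all points."""
    preds = np.empty_like(y, dtype=float)
    for xs, ys, idx in zip(np.array_split(x, n_chunks),
                           np.array_split(y, n_chunks),
                           np.array_split(np.arange(len(y)), n_chunks)):
        slope, intercept = np.polyfit(xs, ys, 1)   # per-chunk regression
        preds[idx] = slope * xs + intercept
    resid = y - preds
    return {
        "average_error": resid.mean(),
        "correlation": np.corrcoef(y, preds)[0, 1],
        "mae": np.abs(resid).mean(),
        "mse": (resid ** 2).mean(),
    }
```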


2020 ◽  
Vol 13 (40) ◽  
pp. 4275-4286
Author(s):  
GC Suguna

Background/Objectives: Denoising of the wrist pulse signal is an important preprocessing stage for accurate investigation of disease. The objective is to improve and analyze the performance metrics of denoising techniques. Methods/Statistical analysis: Wrist pulse denoising has been implemented using Daubechies, Symlet, and biorthogonal wavelets and evaluated with parameters such as PSNR, SNR, AE, and RMSE. The performance of the wavelets depends on the choice of decomposition level N and the thresholding technique. Findings: The variance thresholding technique showed a significant improvement in peak signal-to-noise ratio (PSNR) and signal-to-noise ratio (SNR) and a reduction in absolute error (AE) and root mean square error (RMSE) compared with other thresholding methods. Novelty/Applications: Experimental results showed a marked improvement in PSNR and SNR while retaining the pathophysiological information of the wrist pulse signal for further analysis.
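A short PyWavelets sketch of such a denoising pipeline, assuming a 1-D wrist-pulse signal: decomposition with a Daubechies wavelet, detail-coefficient thresholding, and reconstruction. The universal (VisuShrink) threshold below is a stand-in; the paper's variance-based threshold would replace `thr`, and the wavelet and level are illustrative choices.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4, mode="soft"):
    """Denoise a 1-D signal by wavelet coefficient thresholding."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise estimate from the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # universal threshold (stand-in for the variance-based threshold)
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode=mode)
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]
```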


2019 ◽  
Vol 92 (1100) ◽  
pp. 20190067 ◽  
Author(s):  
Yingzi Liu ◽  
Yang Lei ◽  
Tonghe Wang ◽  
Oluwatosin Kayode ◽  
Sibo Tian ◽  
...  

Objective: The purpose of this work is to develop and validate a learning-based method to derive electron density from routine anatomical MRI for potential MRI-based SBRT treatment planning. Methods: We proposed to integrate dense blocks into a cycle generative adversarial network (GAN) to effectively capture the relationship between CT and MRI for CT synthesis. A cohort of 21 patients with co-registered CT and MR pairs was used to evaluate our proposed method by leave-one-out cross-validation. Mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation were used to quantify the imaging differences between the synthetic CT (sCT) and CT. The accuracy of Hounsfield unit (HU) values in sCT for dose calculation was evaluated by comparing the dose distributions in sCT-based and CT-based treatment planning. Clinically relevant dose–volume histogram metrics were then extracted from the sCT-based and CT-based plans for quantitative comparison. Results: The mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation of the sCT were 72.87 ± 18.16 HU, 22.65 ± 3.63 dB, and 0.92 ± 0.04, respectively. No significant differences were observed in the majority of the planning target volume and organ-at-risk dose–volume histogram metrics (p > 0.05). The average pass rate of γ analysis was over 99% with 1%/1 mm acceptance criteria on the coronal plane intersecting the isocenter. Conclusion: The image similarity and dosimetric agreement between sCT and original CT warrant further development of an MRI-only workflow for liver stereotactic body radiation therapy. Advances in knowledge: This work is the first deep-learning-based approach to generating abdominal sCT through a dense-cycle-GAN. This method can successfully generate small bony structures such as the ribs and is able to predict HU values for dose calculation with accuracy comparable to reference CT images.
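The dense-cycle-GAN itself is beyond a short sketch, but the three image-similarity metrics reported here can be computed as follows, assuming co-registered HU arrays of identical shape. The Pearson form of normalized cross-correlation is used, and defaulting the PSNR peak to the dynamic range of the reference CT is an assumption, not the paper's stated convention.

```python
import numpy as np

def sct_metrics(ct, sct, peak=None):
    """MAE, PSNR and normalized cross-correlation between a reference CT
    and a synthetic CT (both given as co-registered HU arrays)."""
    ct = ct.astype(np.float64)
    sct = sct.astype(np.float64)
    diff = sct - ct
    mae = np.mean(np.abs(diff))
    # peak defaults to the dynamic range of the reference CT (assumption)
    peak = (ct.max() - ct.min()) if peak is None else peak
    psnr = 10.0 * np.log10(peak ** 2 / np.mean(diff ** 2))
    # zero-mean normalized cross-correlation (Pearson correlation)
    ncc = np.corrcoef(ct.ravel(), sct.ravel())[0, 1]
    return mae, psnr, ncc
```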


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 717
Author(s):  
Mariia Nazarkevych ◽  
Natalia Kryvinska ◽  
Yaroslav Voznyi

This article presents a new method of image filtering based on a new kind of image processing transformation, the wavelet-Ateb–Gabor transformation, which provides a wider basis than the Gabor functions. Ateb functions are symmetric functions. The developed type of filtering makes it possible to transform images and to obtain better biometric image recognition results than traditional filters allow. These results are possible due to the variety of forms and sizes of the curves of the developed functions. Further, the wavelet transformation of Gabor filtering is investigated, and the time spent by the system on the operation is substantiated. The filtering is applied to images taken from NIST Special Database 302, which is publicly available. The reliability of the proposed wavelet-Ateb–Gabor filtering method is demonstrated by calculating and comparing the peak signal-to-noise ratio (PSNR) and mean square error (MSE) between two biometric images, one filtered by the developed method and the other by the Gabor filter. The time characteristics of this filtering process are studied as well.
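The Ateb generalization is not available in standard libraries, so the sketch below applies a plain OpenCV Gabor kernel as a stand-in; its output could then be compared against the wavelet-Ateb–Gabor result with the PSNR/MSE measures the article describes. Kernel parameters and the function name are illustrative.

```python
import numpy as np
import cv2

def gabor_filter(img, ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Filter a grayscale image with a single Gabor kernel.

    This is ordinary Gabor filtering, used only as a baseline; the
    wavelet-Ateb-Gabor transform of the article is not reproduced here.
    """
    kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma,
                                psi=0, ktype=cv2.CV_64F)
    # normalize kernel energy so the output stays in a comparable range
    s = kernel.sum()
    if s != 0:
        kernel /= s
    return cv2.filter2D(img.astype(np.float64), -1, kernel)
```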


2021 ◽  
pp. 875697282199994
Author(s):  
Joseph F. Hair ◽  
Marko Sarstedt

Most project management research focuses almost exclusively on explanatory analyses. Evaluation of the explanatory power of statistical models is generally based on F-type statistics and the R² metric, followed by an assessment of the model parameters (e.g., beta coefficients) in terms of their significance, size, and direction. However, these measures are not indicative of a model’s predictive power, which is central for deriving managerial recommendations. We recommend that project management researchers routinely use additional metrics, such as the mean absolute error or the root mean square error, to accurately quantify their statistical models’ predictive power.
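A minimal numpy sketch of the recommendation, assuming a numeric design matrix X and response vector y: fit ordinary least squares on a training split and report out-of-sample MAE and RMSE on the holdout, the kind of predictive metrics the authors advocate alongside R². The split fraction and function name are illustrative.

```python
import numpy as np

def holdout_mae_rmse(X, y, train_frac=0.7, seed=0):
    """Out-of-sample MAE and RMSE of an OLS model on a random holdout."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(train_frac * len(y))
    train, test = idx[:cut], idx[cut:]
    Xb = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(Xb[train], y[train], rcond=None)
    resid = y[test] - Xb[test] @ beta
    mae = np.abs(resid).mean()
    rmse = np.sqrt((resid ** 2).mean())
    return mae, rmse
```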


2020 ◽  
Vol 11 (1) ◽  
pp. 39
Author(s):  
Eric Järpe ◽  
Mattias Weckstén

A new method for musical steganography for the MIDI format is presented. The MIDI standard is a user-friendly music technology protocol that is frequently deployed by composers of different levels of ambition. To the authors' knowledge, there is no fully implemented, rigorously specified, publicly available method for MIDI steganography. The goal of this study is to investigate how a novel MIDI steganography algorithm can be implemented by manipulating the velocity attribute, subject to restrictions of capacity and security. Many of today's MIDI steganography methods, less rigorously described in the literature, fail to be resilient to steganalysis. Traces that could catch the eye of a scrutinizing steganalyst, such as artefacts in the MIDI code that would not occur from the mere generation of MIDI music (MIDI file size inflation, radical changes in mean absolute error or peak signal-to-noise ratio of certain kinds of MIDI events, or even audible effects in the stego MIDI file), are side effects of many current methods described in the literature. This steganalysis resilience is an imperative property of a steganography method. By restricting the carrier MIDI files to classical organ and harpsichord pieces, the problem of velocities following the mood of the music can be avoided. The proposed method, called Velody 2, is found to be on par with or better than cutting-edge alternative methods regarding capacity and inflation while possessing better resilience against steganalysis. An audibility test was conducted to check that there are no signs of audible traces in the stego MIDI files.
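For illustration only, and explicitly not the Velody 2 algorithm: a minimal mido sketch that hides a bit string in the least significant bit of note-on velocities, the attribute the study manipulates. Naive LSB embedding of this kind leaves exactly the statistical traces the article warns about; the file paths and bit source are assumptions.

```python
import mido

def embed_bits_in_velocity(in_path, out_path, bits):
    """Hide a sequence of '0'/'1' characters in the least significant bit
    of note-on velocities. Illustrative LSB embedding, not Velody 2."""
    mid = mido.MidiFile(in_path)
    bit_iter = iter(bits)
    for track in mid.tracks:
        for msg in track:
            if msg.type == "note_on" and msg.velocity > 0:
                b = next(bit_iter, None)
                if b is None:
                    break  # payload exhausted
                # overwrite the LSB of the velocity (stays within 0..127)
                msg.velocity = (msg.velocity & ~1) | int(b)
    mid.save(out_path)
```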


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ayoung Choi ◽  
Hyunggoo Kwon ◽  
Sohee Jeon

The accuracy of intraocular lens (IOL) calculations is suboptimal for long or short eyes, which results in a low visual quality after multifocal IOL implantation. The purpose of the present study is to evaluate the accuracy of IOL formulas (Barrett Universal II, SRK/T, Holladay 1, Hoffer Q, and Haigis) for Acrysof IQ Panoptix TFNT IOL (Alcon Laboratories, Inc, Fort Worth, Texas, United States) implantation based on the axial length (AXL) from a large cohort of 2018 cases and to identify the factors that are associated with a high mean absolute error (MAE). The Barrett Universal II showed the lowest MAE in the normal AXL group (0.30 ± 0.23), whereas the Holladay 1 and Hoffer Q showed the lowest MAE in the short AXL group (0.32 ± 0.22 D and 0.32 ± 0.21 D, respectively). The Haigis showed the lowest MAE in the long AXL group (0.24 ± 0.19 D). The Barrett Universal II did not perform well in short AXL eyes with higher astigmatism (P = 0.013), wider white-to-white (WTW; P < 0.001), and shorter AXL (P = 0.016). Study results suggest that the Barrett Universal II performed best for the TFNT IOL in the overall study population, except for eyes with short AXL, particularly when the eyes had higher astigmatism, wider WTW, and shorter AXL.


2020 ◽  
Vol 4 (2) ◽  
pp. 53-60
Author(s):  
Latifah Listyalina ◽  
Yudianingsih Yudianingsih ◽  
Dhimas Arief Dharmawan

Image processing refers to techniques for modifying images in various ways. In medicine, image processing has a vital role. One example in the medical field is the retinal image, which can be obtained from a fundus camera. Retinal images are useful in the detection of diabetic retinopathy. In general, direct observation of diabetic retinopathy is conducted by a doctor on the retinal image. The weakness of this method is slow handling of the disease. For this reason, a computer system is required to help doctors detect diabetic retinopathy quickly and accurately. Such a system involves a series of digital image processing techniques that can process retinal images into good-quality images. In this research, a method to improve the quality of retinal images was designed by comparing adaptive histogram equalization, contrast stretching, and brightness enhancement. The performance of the three methods was evaluated using mean square error (MSE), peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR). Low MSE values and high PSNR and SNR values indicate that an image has good quality. The results of the study revealed that the image processed with adaptive histogram equalization was the best to use, as evidenced by the lowest MSE value and the highest SNR and PSNR values compared with the other techniques. This indicated that adaptive histogram equalization techniques could improve image quality while maintaining its information.
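A short scikit-image sketch of such a comparison, assuming a grayscale retinal image with float values in [0, 1]: adaptive histogram equalization, contrast stretching, and a simple brightness increase are each scored against the original with MSE, PSNR, and SNR. The parameter values and scoring reference are illustrative choices, not the study's exact settings.

```python
import numpy as np
from skimage import exposure

def enhance_and_score(img):
    """Apply three enhancement techniques to a float image in [0, 1] and
    score each result against the original with MSE, PSNR and SNR."""
    candidates = {
        "adaptive_hist_eq": exposure.equalize_adapthist(img, clip_limit=0.02),
        "contrast_stretch": exposure.rescale_intensity(
            img, in_range=tuple(np.percentile(img, (2, 98))),
            out_range=(0.0, 1.0)),
        "brightness_up": np.clip(img + 0.1, 0.0, 1.0),
    }
    scores = {}
    for name, out in candidates.items():
        mse = np.mean((img - out) ** 2)
        psnr = float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)
        snr = float("inf") if mse == 0 else 10.0 * np.log10(np.mean(img ** 2) / mse)
        scores[name] = {"mse": mse, "psnr": psnr, "snr": snr}
    return candidates, scores
```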

