De-Noising of Corrupted Fluoroscopy Images Based on a New Multi-Line Algorithm

2020 ◽  
pp. 2126-2131
Author(s):  
Maytham. A. Ali ◽  
Rohaida Romli

Fluoroscopic images are a class of medical images in which correct diagnosis depends on image quality; the main challenge is de-noising and how to keep the balance between suppressing the noise in a degraded image, on one side, and preserving edges and fine details, on the other, especially when fluoroscopic images are corrupted by black-and-white (salt-and-pepper) noise at high density. Previous filters could usually handle low to medium black-and-white noise densities, at the expense of edge and fine-detail preservation, and fail at the high noise densities that corrupt the images. Therefore, this paper proposes a new Multi-Line algorithm that deals with heavily corrupted images containing high-density black-and-white noise. In the experiments, the algorithm produced high-quality images and effectively preserved edges and fine details across black-and-white noise densities, as measured by Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Image Enhancement Factor (IEF).
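The Multi-Line algorithm itself is not reproduced here, but the three quality measures the abstract relies on have standard definitions. The following is a minimal NumPy sketch of MSE, PSNR, and IEF for 8-bit grayscale images; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between two images of the same shape."""
    return np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak=255 for 8-bit images)."""
    err = mse(reference, test)
    return float('inf') if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def ief(original, noisy, restored):
    """Image enhancement factor: noise energy before vs. after filtering."""
    num = np.sum((noisy.astype(np.float64) - original.astype(np.float64)) ** 2)
    den = np.sum((restored.astype(np.float64) - original.astype(np.float64)) ** 2)
    return num / den
```

An IEF above 1 indicates that the filter brought the image closer to the original than the noisy input was.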

2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three image enhancement methods, GHE, LHE, and the proposed DSIHE, that improve the visual quality of images. A comparative calculation is carried out on the above-mentioned techniques to examine objective and subjective image quality parameters, e.g. Peak Signal-to-Noise Ratio (PSNR), entropy H, and Mean Squared Error (MSE), to measure the quality of grayscale enhanced images. For gray-level images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image toward the middle of the gray-level range, limiting their appropriateness for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method overcomes this disadvantage, as it tends to preserve brightness while enhancing contrast. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error than the global and local histogram-based equalization methods.
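As a rough illustration of the first level, the sketch below performs median-based dualistic sub-image histogram equalization on an 8-bit grayscale image: pixels at or below the median and pixels above it are equalized separately within their own gray-level ranges. It is an assumption-based reconstruction of the general DSIHE idea, not the paper's exact implementation.

```python
import numpy as np

def dsihe(img):
    """Dualistic sub-image histogram equalization (illustrative sketch).

    img: 2-D uint8 array. The image is split at its median gray level and
    each sub-image is histogram-equalized within its own intensity range.
    """
    img = img.astype(np.uint8)
    m = int(np.median(img))
    out = np.empty_like(img)

    low_mask = img <= m          # lower sub-image
    high_mask = ~low_mask        # upper sub-image

    for mask, lo, hi in ((low_mask, 0, m), (high_mask, m + 1, 255)):
        values = img[mask]
        if values.size == 0 or hi <= lo:
            out[mask] = img[mask]
            continue
        hist, _ = np.histogram(values, bins=hi - lo + 1, range=(lo, hi + 1))
        cdf = np.cumsum(hist) / values.size
        # Map each gray level to its equalized level within [lo, hi].
        lut = (lo + cdf * (hi - lo)).astype(np.uint8)
        out[mask] = lut[values - lo]
    return out
```

Because each half of the histogram is stretched only within its own range, the output mean stays close to the input median, which is the brightness-preserving behaviour the abstract attributes to DSIHE.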


Author(s):  
Calvin Omind Munna

Currently, there is a growing volume of data produced and stored in clinical domains. Therefore, to handle massive data sets effectively, a fusion methodology needs to be analyzed with its algorithmic complexity in mind. To minimize the redundancy of image content, and hence the capacity needed to store and communicate data in optimal form, image processing methodology has to be involved. In this research, two compression methodologies, lossy compression and lossless compression, were utilized to compress images while maintaining image quality. In addition, a number of sophisticated approaches to enhance the quality of the fused images have been applied. The methodologies have been assessed and various fusion findings presented. Lastly, performance parameters were obtained and evaluated against sophisticated approaches. The Structural Similarity Index Metric (SSIM), Mean Squared Error (MSE), and Peak Signal-to-Noise Ratio (PSNR) were the metrics used on the sample clinical images. Critical analysis of these measurements shows higher efficiency compared to numerous image processing methods. This research provides insight into these approaches and enables scientists to choose effective methodologies for a particular application.
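For readers who want to reproduce the three metrics on their own compressed or fused images, the short sketch below uses scikit-image; the synthetic arrays stand in for a reference image and its compressed version and are purely illustrative.

```python
import numpy as np
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             mean_squared_error)

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # stand-in reference image
compressed = np.clip(reference + rng.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)

print("MSE :", mean_squared_error(reference, compressed))
print("PSNR:", peak_signal_noise_ratio(reference, compressed, data_range=255))
print("SSIM:", structural_similarity(reference, compressed, data_range=255))
```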


2002 ◽  
Vol 27 (3) ◽  
pp. 255-270 ◽  
Author(s):  
J.R. Lockwood ◽  
Thomas A. Louis ◽  
Daniel F. McCaffrey

Accountability for public education often requires estimating and ranking the quality of individual teachers or schools on the basis of student test scores. Although the properties of estimators of teacher-or-school effects are well established, less is known about the properties of rank estimators. We investigate performance of rank (percentile) estimators in a basic, two-stage hierarchical model capturing the essential features of the more complicated models that are commonly used to estimate effects. We use simulation to study mean squared error (MSE) performance of percentile estimates and to find the operating characteristics of decision rules based on estimated percentiles. Each depends on the signal-to-noise ratio (the ratio of the teacher or school variance component to the variance of the direct, teacher- or school-specific estimator) and only moderately on the number of teachers or schools. Results show that even when using optimal procedures, MSE is large for the commonly encountered variance ratios, with an unrealistically large ratio required for ideal performance. Percentile-specific MSE results reveal interesting interactions between variance ratios and estimators, especially for extreme percentiles, which are of considerable practical import. These interactions are apparent in the performance of decision rules for the identification of extreme percentiles, underscoring the statistical and practical complexity of the multiple goal inferences faced in value-added modeling. Our results highlight the need to assess whether even optimal percentile estimators perform sufficiently well to be used in evaluating teachers or schools.
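A minimal simulation sketch of the two-stage setting described above, assuming unit-level effects theta_j ~ N(0, tau^2) observed with noise Y_j ~ N(theta_j, sigma^2): true and estimated percentile ranks are compared at several signal-to-noise ratios tau^2/sigma^2. The shrinkage (posterior-mean) estimator and all names are illustrative, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(42)
J = 200                      # number of teachers or schools
tau2 = 1.0                   # between-unit variance component

for sigma2 in (0.25, 1.0, 4.0):                      # direct-estimator variance
    snr = tau2 / sigma2                              # signal-to-noise ratio
    theta = rng.normal(0.0, np.sqrt(tau2), J)        # true effects
    y = theta + rng.normal(0.0, np.sqrt(sigma2), J)  # direct estimates
    shrunk = (tau2 / (tau2 + sigma2)) * y            # posterior means

    # Percentile ranks (0..1) of the true effects and of the shrunken estimates.
    true_pct = theta.argsort().argsort() / (J - 1)
    est_pct = shrunk.argsort().argsort() / (J - 1)
    mse_pct = np.mean((true_pct - est_pct) ** 2)
    print(f"SNR={snr:4.2f}  percentile-rank MSE={mse_pct:.4f}")
```

Even this toy version shows the pattern the abstract describes: percentile-rank MSE shrinks as the variance ratio grows and remains substantial at the moderate ratios typical of test-score data.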


2019 ◽  
Vol 2 (3) ◽  
pp. 1189-1195
Author(s):  
Omar Abdulwahhab Othman ◽  
Sait Ali Uymaz ◽  
Betül Uzbaş

In this paper, an automatic black-and-white image colorization method is proposed. The study is based on the well-known deep learning architecture, the Convolutional Neural Network (CNN). The developed model takes a grayscale input and predicts the color of the image based on the dataset it was trained on. The color space used in this work is the Lab color space: the model takes the L channel as input and the ab channels as output. Images randomly selected from the ImageNet dataset were used to construct a mini dataset of 39,604 images, split into 80% training and 20% testing. The proposed method has been tested and evaluated on sample images with Mean Squared Error and Peak Signal-to-Noise Ratio, reaching averages of MSE = 51.36 and PSNR = 31.
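A minimal PyTorch sketch of the L-to-ab mapping described above; the layer sizes, normalization, and loss are assumptions for illustration and are not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ColorizationCNN(nn.Module):
    """Toy encoder-decoder that maps an L channel to the two ab channels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),   # downsample
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),               # ab scaled to [-1, 1]
        )

    def forward(self, L):
        return self.net(L)

model = ColorizationCNN()
L = torch.rand(8, 1, 64, 64)                 # batch of normalized L channels
ab_true = torch.rand(8, 2, 64, 64) * 2 - 1   # stand-in ground-truth ab channels
loss = nn.functional.mse_loss(model(L), ab_true)   # regression on the ab channels
loss.backward()
```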


2018 ◽  
pp. 1940-1954
Author(s):  
Suma K. V. ◽  
Bheemsain Rao

Reduction in the capillary density in the nailfold region is frequently observed in patients suffering from Hypertension (Feng J, 2010). Loss of capillaries results in avascular regions, which have been well characterized in many diseases (Mariusz, 2009). Nailfold capillary images need to be pre-processed so that noise can be removed, the background can be separated, and the useful parameters can be computed using image processing algorithms. Smoothing filters such as Gaussian, Median and Adaptive Median filters are compared using Mean Squared Error and Peak Signal-to-Noise Ratio. Otsu's thresholding is employed for segmentation. A Connected Component Labeling algorithm is applied to calculate the number of capillaries per mm. This capillary density is used to identify rarefaction of capillaries and also the severity of rarefaction. Avascular regions are detected by determining the distance between the peaks of the capillaries using Euclidean distance. Detection of rarefaction of capillaries and avascular regions can be used as a diagnostic tool for Hypertension and various other diseases.
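A minimal sketch of the segmentation and counting steps described above, using scikit-image; the filter choice, foreground assumption, and pixels-per-millimetre scale are illustrative assumptions, not the authors' settings.

```python
from skimage.filters import threshold_otsu, median
from skimage.measure import label

def capillary_density(gray, pixels_per_mm=100):
    """Estimate capillaries per mm along the nailfold from a grayscale image.

    gray: 2-D uint8 nailfold image; capillaries assumed brighter than background.
    """
    smoothed = median(gray)                   # suppress speckle-like noise
    thresh = threshold_otsu(smoothed)         # Otsu's global threshold
    binary = smoothed > thresh                # segment capillaries as foreground
    labels = label(binary)                    # connected-component labeling
    count = int(labels.max())                 # one label per connected component
    width_mm = gray.shape[1] / pixels_per_mm  # field-of-view width in mm
    return count / width_mm
```

Comparing this density against a clinical cut-off would then flag rarefaction and, by degree, its severity.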


2016 ◽  
Vol 5 (2) ◽  
pp. 73-86
Author(s):  
Suma K. V. ◽  
Bheemsain Rao

Reduction in the capillary density in the nailfold region is frequently observed in patients suffering from Hypertension (Feng J, 2010). Loss of capillaries results in avascular regions, which have been well characterized in many diseases (Mariusz, 2009). Nailfold capillary images need to be pre-processed so that noise can be removed, the background can be separated, and the useful parameters can be computed using image processing algorithms. Smoothing filters such as Gaussian, Median and Adaptive Median filters are compared using Mean Squared Error and Peak Signal-to-Noise Ratio. Otsu's thresholding is employed for segmentation. A Connected Component Labeling algorithm is applied to calculate the number of capillaries per mm. This capillary density is used to identify rarefaction of capillaries and also the severity of rarefaction. Avascular regions are detected by determining the distance between the peaks of the capillaries using Euclidean distance. Detection of rarefaction of capillaries and avascular regions can be used as a diagnostic tool for Hypertension and various other diseases.


2015 ◽  
Vol 8 (4) ◽  
pp. 32
Author(s):  
Sabarish Sridhar

Steganography, watermarking and encryption are widely used in image processing and communication. A general practice is to use them independently or in combinations of two, e.g. data hiding with encryption, or steganography alone. This paper aims to combine the features of watermarking, image encryption and image steganography to provide reliable and secure data transmission. The basics of data hiding and encryption are explained. The first step involves inserting the required watermark into the image at the optimum bit plane. The second step is to use RSA to encrypt the watermarked image. The final step involves obtaining a cover image and hiding the encrypted image within this cover image. A set of metrics is used to evaluate the effectiveness of the digital watermarking; the list includes Mean Squared Error, Peak Signal-to-Noise Ratio and Feature Similarity.
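A minimal NumPy sketch of the bit-plane operations the abstract describes: writing a binary watermark into a chosen bit plane of an 8-bit image, and hiding a byte stream (here, the already-encrypted image) in the least significant bits of a cover image. The encryption step is omitted, and the bit-plane index and names are illustrative assumptions.

```python
import numpy as np

def embed_watermark(image, watermark_bits, plane=3):
    """Overwrite one bit plane of an 8-bit image with a binary watermark.

    watermark_bits: 0/1 array with the same shape as image.
    """
    image = image.copy()
    mask = np.uint8(~(1 << plane) & 0xFF)                 # clear the target plane
    return (image & mask) | (watermark_bits.astype(np.uint8) << plane)

def hide_in_lsb(cover, payload_bytes):
    """Hide a byte payload in the least significant bits of a cover image."""
    bits = np.unpackbits(np.frombuffer(payload_bytes, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # replace LSBs with payload bits
    return flat.reshape(cover.shape)
```

Choosing a higher bit plane makes the watermark more robust but more visible, which is the trade-off behind selecting an "optimum" plane.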


2020 ◽  
Vol 10 (24) ◽  
pp. 8904
Author(s):  
Ana Isabel Montoya-Munoz ◽  
Oscar Mauricio Caicedo Rendon

Reliability in data collection is essential in Smart Farming supported by the Internet of Things (IoT). Several IoT- and Fog-based works consider the reliability concept, but they fall short in providing network-edge mechanisms for detecting and replacing outliers. Making decisions based on inaccurate data can diminish the quality of crops and, consequently, lose money. This paper proposes an approach for providing reliable data collection, focused on outlier detection and treatment in IoT-based Smart Farming. Our proposal includes an architecture based on the IoT-Fog-Cloud continuum, which incorporates a mechanism based on Machine Learning to detect outliers and another based on interpolation for inferring data intended to replace outliers. We located the data cleaning at the Fog so that Smart Farming applications running on the farm operate with reliable data. We evaluate our approach by carrying out a case study on a network based on the proposed architecture and deployed at a Colombian coffee Smart Farm. Results show our mechanisms achieve high Accuracy, Precision, and Recall, as well as low False Alarm Rate and Root Mean Squared Error, when detecting and replacing outliers with inferred data. Considering the obtained results, we conclude that our approach provides reliable data collection in Smart Farming.
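A minimal sketch of the two mechanisms combined in such a pipeline, assuming a univariate sensor series held in a pandas Series: an Isolation Forest flags outliers and time-based linear interpolation infers replacement values. The model choice, parameters, and synthetic data are illustrative, not the configuration evaluated in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Synthetic temperature-like readings with a few injected outliers.
idx = pd.date_range("2020-01-01", periods=200, freq="min")
values = 22 + np.sin(np.arange(200) / 20) + np.random.default_rng(1).normal(0, 0.1, 200)
values[[30, 80, 150]] = [50, -10, 60]
series = pd.Series(values, index=idx)

# Detect outliers with an Isolation Forest on the raw values.
detector = IsolationForest(contamination=0.02, random_state=0)
flags = detector.fit_predict(series.to_numpy().reshape(-1, 1))  # -1 marks outliers

# Replace flagged readings with values inferred by time interpolation.
cleaned = series.copy()
cleaned[flags == -1] = np.nan
cleaned = cleaned.interpolate(method="time")
```

Running this at the Fog node, before data reaches the Cloud, is what keeps downstream Smart Farming applications working with cleaned readings.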


2007 ◽  
Vol 89 (3) ◽  
pp. 135-153 ◽  
Author(s):  
JINLIANG WANG

Summary: Knowledge of the genetic relatedness among individuals is essential in diverse research areas such as behavioural ecology, conservation biology, quantitative genetics and forensics. How to estimate relatedness accurately from genetic marker information has been explored recently by many methodological studies. In this investigation I propose a new likelihood method that uses the genotypes of a triad of individuals in estimating pairwise relatedness (r). The idea is to use a third individual as a control (reference) in estimating the r between two other individuals, thus reducing the chance of genes identical in state being mistakenly inferred as identical by descent. The new method allows for inbreeding and accounts for genotype errors in data. Analyses of both simulated and human microsatellite and SNP datasets show that the quality of r estimates (measured by the root mean squared error, RMSE) is generally improved substantially by the new triadic likelihood method (TL) over the dyadic likelihood method and five moment estimators. Simulations also show that genotyping errors/mutations, when ignored, result in underestimates of r for related dyads, and that incorporating a model of typing errors in the TL method improves r estimates for highly related dyads but impairs those for loosely related or unrelated dyads. The effects of inbreeding were also investigated through simulations. It is concluded that, because most dyads in a natural population are unrelated or only loosely related, the overall performance of the new triadic likelihood method is the best, offering r estimates with a RMSE that is substantially smaller than the five commonly used moment estimators and the dyadic likelihood method.


Author(s):  
SONALI R. MAHAKALE ◽  
NILESHSINGH V. THAKUR

This paper presents a comparative study of research work done in the field of image filtering. Different types of noise can affect an image in different ways. Although various solutions are available for denoising, a detailed study of the research is required in order to design a filter that fulfills the desired aspects while handling most image filtering issues. An output image should be judged on the basis of image quality metrics, e.g. Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), Mean Absolute Error (MAE), and execution time.
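A minimal sketch of such a filter comparison, assuming an 8-bit grayscale image corrupted with salt-and-pepper noise: each SciPy filter is scored by mean absolute error against the clean image and by wall-clock execution time. The filter choices and parameters are illustrative.

```python
import time
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(256, 256)).astype(np.float64)   # stand-in image
noisy = clean.copy()
salt_pepper = rng.random(clean.shape)
noisy[salt_pepper < 0.05] = 0          # pepper
noisy[salt_pepper > 0.95] = 255        # salt

filters = {
    "median 3x3": lambda im: ndimage.median_filter(im, size=3),
    "gaussian s=1": lambda im: ndimage.gaussian_filter(im, sigma=1),
    "uniform 3x3": lambda im: ndimage.uniform_filter(im, size=3),
}

for name, f in filters.items():
    start = time.perf_counter()
    restored = f(noisy)
    elapsed = time.perf_counter() - start
    mae = np.mean(np.abs(restored - clean))      # mean absolute error
    print(f"{name:12s}  MAE={mae:6.2f}  time={elapsed * 1000:6.1f} ms")
```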

