An Efficient Noise Separation Technique for Removal of Gaussian and Mixed Noises in Monochrome and Color Images

Images are often affected by different kinds of noise during acquisition, storage, and transmission. Even the datasets gathered by various image acquisition devices are contaminated by noise. Hence, there is a need for noise reduction in the image, often called image de-noising, which makes it a significant concern and a fundamental step in image processing. During image de-noising, the main challenge facing researchers is removing noise from the original image while preserving its most significant properties, such as edges and lines. Various algorithms and techniques have been published to de-noise images, and every approach has its own limitations, benefits, and assumptions. This paper reviews the noise models and presents a comparative analysis of various de-noising filters that work on color images corrupted by single and mixed noises. It also suggests the best filter for producing a high-quality color image. PSNR, Entropy, SSIM, MSE, FSIM, and EPI are used as image quality assessment metrics.
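As an illustration of how such an evaluation is typically set up, the following minimal sketch (not the paper's proposed technique) adds mixed Gaussian and impulse noise to a color image, de-noises it with a simple median filter, and scores the result with three of the listed metrics (MSE, PSNR, SSIM) using scikit-image; Entropy, FSIM, and EPI would require separate implementations.

# Minimal evaluation sketch: mixed-noise corruption, median-filter de-noising,
# and quality scoring with MSE, PSNR, and SSIM. Noise parameters are illustrative.
import numpy as np
from skimage import data, img_as_float
from skimage.util import random_noise
from scipy.ndimage import median_filter
from skimage.metrics import (peak_signal_noise_ratio,
                             mean_squared_error,
                             structural_similarity)

original = img_as_float(data.astronaut())                 # reference color image
noisy = random_noise(original, mode='gaussian', var=0.01)  # Gaussian noise
noisy = random_noise(noisy, mode='s&p', amount=0.02)       # plus impulse noise = mixed noise

# De-noise each color channel independently with a 3x3 median filter.
denoised = np.stack([median_filter(noisy[..., c], size=3) for c in range(3)],
                    axis=-1)

print("MSE :", mean_squared_error(original, denoised))
print("PSNR:", peak_signal_noise_ratio(original, denoised, data_range=1.0))
print("SSIM:", structural_similarity(original, denoised,
                                      channel_axis=-1, data_range=1.0))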

In many image processing applications, a wide range of image enhancement techniques have been proposed. Many of these techniques demand a lot of critical and advanced steps, yet the perceptual quality of the resulting image is not satisfactory. This paper proposes a novel sharpening method built from a few additional steps. First, the color image is converted to grayscale and edge detection is applied using the Laplacian technique. The edge image is then subtracted from the original image. The resulting image is as expected; after the enhancement process, the high quality of the image can be confirmed using the Tenengrad criterion. The resulting image shows improved differentiation in certain areas, as well as in dimension and depth. Histogram equalization can also be applied to adjust the image's color.
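A minimal sketch of the described pipeline is given below, assuming OpenCV conventions; the file name, kernel sizes, and the per-channel subtraction are illustrative assumptions rather than the paper's exact parameters.

# Sketch: grayscale conversion, Laplacian edge detection, edge subtraction,
# Tenengrad sharpness check, and optional histogram equalization.
import cv2
import numpy as np

def tenengrad(gray):
    """Tenengrad criterion: mean squared Sobel gradient magnitude (higher = sharper)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.mean(gx**2 + gy**2)

img = cv2.imread("input.jpg")                       # hypothetical input file (BGR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)        # step 1: grayscale
edges = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)    # step 2: Laplacian edge image

# Step 3: subtract the edge image from the original (broadcast over channels).
sharpened = np.clip(img.astype(np.float64) - edges[..., None],
                    0, 255).astype(np.uint8)

print("Tenengrad before:", tenengrad(gray))
print("Tenengrad after :",
      tenengrad(cv2.cvtColor(sharpened, cv2.COLOR_BGR2GRAY)))

# Optional: histogram equalization on the luminance channel to adjust color.
ycrcb = cv2.cvtColor(sharpened, cv2.COLOR_BGR2YCrCb)
ycrcb[..., 0] = cv2.equalizeHist(ycrcb[..., 0])
equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)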


Connectivity ◽  
2020 ◽  
Vol 148 (6) ◽  
Author(s):  
V. V. Grebenyuk ◽  
O. A. Dibrivnyy ◽  
O. V. Nehodenko

A comparative analysis of functions for assessing image quality in the absence of a reference sample, i.e., no-reference (NR) measures or NR-type methods, is presented. The availability of NR methods is very important for assessing the quality of streaming video such as television, game streaming, online conferences, and web chat (because on the recipient's side there is no reference for quality comparison), as well as for assessing the results of transformations aimed at improving video and for choosing the parameters of these transformations (brightness, half-tone and other adjustments). The human visual system (HVS) can assess video quality visually, but visually assessing or ranking dozens or hundreds of videos by quality would require an enormous amount of time. Six types of experiments were performed to analyze the correlation of the calculated quantitative estimates with visual assessments of the quality of the tested video files. Three of them are fundamentally new: comparing video after gamma correction and after contrast changes with different parameters, as well as after blurring, which may result from camcorder defocusing. A hybrid method (a reduced-reference (RR) measure) and a full-reference (FR) measure, or FR-type method, were also added for comparison. It is shown experimentally that none of the studied no-reference image quality assessment methods is universal, and the calculated estimate cannot be converted into a quality scale without taking into account the factors that distort image quality. Moreover, all NR-type methods failed the contrast-change experiment, judging the most contrasted image, rather than the original, to be the best. The reference methods, in contrast, showed excellent results (except one, which proved partially ineffective). A performance comparison between the methods is also presented. It is shown that most of the studied methods calculate local estimates for each frame and take their arithmetic mean as the quality estimate of the entire video file. If a video is dominated by large uniform areas, methods of this type may give incorrect quality estimates that do not coincide with the visual assessments.
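The per-frame/averaging pattern described above can be sketched as follows, using a simple variance-of-Laplacian blur score as a stand-in no-reference metric (none of the studied methods is reproduced here, and the video file name is hypothetical).

# Sketch of the common NR pattern: a local estimate per frame, then the
# arithmetic mean over all frames as the quality estimate of the whole video.
import cv2
import numpy as np

def frame_score(frame_bgr):
    """Illustrative NR score: variance of the Laplacian (higher = sharper)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def video_quality(path):
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        scores.append(frame_score(frame))
    cap.release()
    return float(np.mean(scores)) if scores else float("nan")

print(video_quality("test_video.mp4"))   # hypothetical file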


2019 ◽  
Vol 11 (4) ◽  
pp. 28-49 ◽  
Author(s):  
Mengmeng Zhang ◽  
Rongrong Ni ◽  
Yao Zhao

A blind, print-recapture-robust watermarking scheme is proposed. Watermark patterns are embedded into the spatial domain of a color image and can be detected from a print-recaptured version of the image without knowledge of the original image. During embedding, the RGB color image is converted to the CIE Lab color space and invisible, periodic watermarks are embedded into both color channels simultaneously. Watermark extraction is achieved by computing the self-convolution and inverting geometric transformations such as rotation and scaling. The normalized correlation coefficient between the extracted and the embedded watermark pattern is calculated to decide whether a watermark is present. The presence or absence of the watermark pattern is then determined by a threshold, set to 0.13; on 241 pictures, the detection rate is about 0.79.
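The detection decision can be sketched as below, assuming the embedded and extracted watermark patterns are equally sized 2-D float arrays; the embedding and geometry-inversion steps of the scheme are not reproduced here.

# Correlation-based watermark decision; the 0.13 threshold is the value
# reported in the abstract, everything else is illustrative.
import numpy as np

def normalized_correlation(extracted, embedded):
    """Normalized correlation coefficient between two equally sized patterns."""
    a = extracted - extracted.mean()
    b = embedded - embedded.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def watermark_present(extracted, embedded, threshold=0.13):
    return normalized_correlation(extracted, embedded) >= threshold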


2013 ◽  
Vol 303-306 ◽  
pp. 1489-1493
Author(s):  
Zhong Sheng Li ◽  
Tong Cheng Huang ◽  
Niu Li ◽  
Ze Su Cai

It is a new idea to enable computers to obtain “sensations” from a color image in an unsupervised way. To realize this idea, a granule-based model for color image processing is proposed, based on granular computing (GrC), a new way of simulating human thinking to help solve complicated problems in the field of computational intelligence. First, this paper treats the data as a hypercube, defines two new concepts, attribute granules (AtG) and connected granules (CoG), and presents the definitions of the granule-based model. Then, to realize the granule-based model, it designs a single attribute analyser (SAA), establishes several theorems and lemmas related to decomposition, and describes the process of extracting all attribute granules. Experimental results on over 300 color images show that the proposed analyser is accurate, robust, fast, and able to provide computers with “sensations”.
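A highly simplified, illustrative reading of this decomposition (not the paper's SAA, whose formal definitions are not given in the abstract) might quantize one attribute of the image into bins and extract connected regions within each bin; all names and parameters below are assumptions for illustration only.

# Illustrative interpretation: hue bins as "attribute granules", connected
# regions within each bin as "connected granules".
import cv2
import numpy as np
from scipy import ndimage

def extract_granules(path, n_bins=8):
    img = cv2.imread(path)                                 # hypothetical input file
    hue = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)[..., 0]     # attribute: hue (0..179)
    bins = (hue.astype(np.int32) * n_bins) // 180          # quantized attribute granules
    granules = []
    for b in range(n_bins):
        labeled, n = ndimage.label(bins == b)              # connected granules in bin b
        for i in range(1, n + 1):
            granules.append((b, np.argwhere(labeled == i)))
    return granules   # list of (attribute bin, pixel coordinates)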

