Modified method of image histogram hyperbolization

2021 ◽  
Vol 2021 (49) ◽  
pp. 52-56
Author(s):  
R. A. Vorobel ◽  
◽  
O. R. Berehulyak ◽  
I. B. Ivasenko ◽  
T. S. Mandziy ◽  
...  

One of the methods to improve image quality, by increasing the resolution of image details through contrast enhancement, is hyperbolization of the image histogram. The increase in local contrast is achieved indirectly, through the nature of the change in the histogram of the transformed image. Usually the histogram of the input image is transformed so that it has a uniform distribution, which reflects an equal contribution of each gray level to the image structure. However, there is a method based on modeling the human visual system, which is characterized by a logarithmic dependence of the human response to light stimulation. It consists in a hyperbolic transformation of the image histogram. Then, during psychophysical perception of the image, an approximately uniform distribution of gray levels is formed at the output of the visual system. The drawback of this approach is its lack of effectiveness for excessively light or dark images. A modified method of image histogram hyperbolization has been developed. It is based on a power transformation of the probability distribution function, which in the discrete case is approximated by the normalized cumulative histogram. The power index is a control parameter of the transformation: to improve darkened images we use a value of the control parameter less than one, and for light images a value greater than one. The effectiveness of the proposed method is shown by examples.
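As a minimal sketch of the idea (not the authors' exact equations), the power-modified hyperbolization can be written in a few lines of Python. The hyperbolic mapping here follows Frei's classic formulation, and the constant `c` and the function name are illustrative assumptions:

```python
import numpy as np

def hyperbolize(img, gamma=1.0, c=0.04):
    """Sketch of histogram hyperbolization with a power-modified CDF.

    gamma < 1 brightens dark images, gamma > 1 tames light ones
    (the control parameter described in the abstract); the hyperbolic
    tone curve itself follows Frei's classic formulation, which is an
    assumption here, not the authors' exact equations.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size          # normalized cumulative histogram
    cdf_pow = cdf ** gamma                    # power transform (control parameter)
    # hyperbolic tone curve: roughly equal perceived (log) steps across the range
    lut = 255.0 * c * ((1.0 + 1.0 / c) ** cdf_pow - 1.0)
    return lut[img].astype(np.uint8)
```

Because the lookup table is built from a non-decreasing CDF, the mapping preserves the gray-level ordering of the input.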

2019 ◽  
Vol 2019 (1) ◽  
pp. 256-261
Author(s):  
Jake McVey ◽  
Graham Finlayson

Contrast Limited Histogram Equalisation (CLHE) gently moves the input image histogram towards one with a more uniform distribution. Viewed as a tone mapping operation, CLHE generates a tone curve with bounded maximum and minimum slopes. It is this boundedness which ensures that the processed images have more detail but fewer artefacts. Beyond limiting contrast, recent improvements to histogram equalisation include constraining the tone curve to make good whites and blacks and constraining the tone curve to be smooth. This paper makes three contributions. First, we show that the CLHE formalism is not least-squares optimal, but optimality can be achieved by reformulating the problem in a quadratic programming framework. Second, we incorporate the additional constraints of tone-curve smoothness and good whites and blacks into our quadratic programming CLHE framework. Third, experiments demonstrate the utility of our method.
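A rough sketch of plain CLHE, the baseline the paper reformulates (not its quadratic-programming solution), can illustrate how clipping the histogram bounds the tone-curve slope. The iterative redistribution of the clipped excess below is one common variant and an assumption here:

```python
import numpy as np

def clhe_tone_curve(img, clip=2.0, bins=256):
    """Sketch of contrast-limited histogram equalisation (CLHE).

    Clipping the histogram before taking the cumulative sum bounds the
    slope of the resulting tone curve, as the abstract describes.
    """
    hist = np.bincount(img.ravel(), minlength=bins).astype(float)
    hist /= hist.sum()
    ceiling = clip / bins                     # max bin height -> max curve slope
    for _ in range(10):                       # redistribute clipped excess
        excess = np.clip(hist - ceiling, 0.0, None).sum()
        hist = np.minimum(hist, ceiling) + excess / bins
        if excess < 1e-8:
            break
    curve = np.cumsum(hist)                   # bounded-slope tone curve
    return np.round(curve * (bins - 1)).astype(np.uint8)
```

Applying the returned lookup table (`lut[img]`) yields the contrast-limited result; with `clip=1.0` the curve degenerates to the identity-like uniform mapping, and larger `clip` values approach unconstrained histogram equalisation.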


2017 ◽  
Author(s):  
Elham Shahab ◽  
Hadi Abdolrahimpour

Secret sharing approaches, and Visual Cryptography (VC) in particular, try to address security issues in dealing with images. In fact, VC is a powerful technique that combines the notions of perfect ciphers and secret sharing in cryptography. VC takes an image (the secret) as input and encrypts (divides) it into two or more pieces (shares), none of which reveals any information about the original input. Decryption in this scenario is done by superimposing the shares on top of each other to recover the input image. No computation is required, which is one of the distinguishing features of VC: the encrypted message can be decrypted directly by the human visual system.
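The share-stacking idea can be illustrated with a minimal 2-out-of-2 scheme. The 2x2 subpixel construction below is the textbook Naor-Shamir style expansion and is an assumption here, since the abstract does not fix a particular scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shares(secret):
    """Sketch of a 2-out-of-2 visual cryptography scheme.

    Each secret pixel (0 = white, 1 = black) expands to a 2x2 block.
    White pixels get identical blocks in both shares, black pixels get
    complementary blocks, so stacking (logical OR of black subpixels)
    reveals the secret while each share alone looks random.
    """
    patterns = np.array([[[1, 0], [0, 1]],    # two complementary 2x2 patterns
                         [[0, 1], [1, 0]]])
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = rng.integers(2)               # random pattern choice per pixel
            s1[2*i:2*i+2, 2*j:2*j+2] = patterns[p]
            # same pattern for white, complementary pattern for black
            s2[2*i:2*i+2, 2*j:2*j+2] = patterns[p ^ secret[i, j]]
    return s1, s2

def stack(s1, s2):
    return s1 | s2                            # superimposing transparencies
```

After stacking, a black secret pixel becomes a fully black 2x2 block, while a white one stays half black, which is exactly the contrast loss that makes VC decodable by eye alone.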


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Marwan Ali Albahar

Many hardware and software advancements have been made to improve image quality in smartphones, but unsuitable lighting conditions are still a significant impediment to image quality. To counter this problem, we present an image enhancement pipeline comprising synthetic multi-image exposure fusion and contrast enhancement robust to different lighting conditions. In this paper, we propose a novel technique of generating synthetic multi-exposure images by applying gamma correction to an input image using different values according to its luminosity for generating multiple intermediate images, which are then transformed into a final synthetic image by applying contrast enhancement. We observed that our proposed contrast enhancement technique focuses on specific regions of an image resulting in varying exposure, colors, and details for generating synthetic images. Visual and statistical analysis shows that our method performs better in various lighting scenarios and achieves better statistical naturalness and discrete entropy scores than state-of-the-art methods.
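As a hedged sketch of the pipeline's first stage, gamma correction with several values can generate a synthetic exposure stack. The fixed gamma triple and the Gaussian "well-exposedness" fusion weights below are illustrative assumptions, not the paper's luminosity-driven rule or its contrast-enhancement step:

```python
import numpy as np

def synthetic_exposures(img, gammas=(0.5, 1.0, 2.0)):
    """Sketch of synthetic multi-exposure generation via gamma correction.

    The paper chooses gamma values from the image's luminosity; the
    fixed triple here and the simple well-exposedness fusion are
    illustrative stand-ins for that pipeline.
    """
    x = img.astype(float) / 255.0
    stack = [x ** g for g in gammas]          # gamma < 1 brightens, > 1 darkens
    # naive exposure fusion: weight each image by closeness to mid-gray
    weights = [np.exp(-((e - 0.5) ** 2) / 0.08) for e in stack]
    wsum = np.sum(weights, axis=0) + 1e-12
    fused = sum(w * e for w, e in zip(weights, stack)) / wsum
    return (fused * 255).astype(np.uint8)
```

For a dark input, the brightened (gamma < 1) exposure dominates the weights, so the fused result is lifted toward mid-gray without clipping the highlights of the other exposures.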


2020 ◽  
Author(s):  
Alejandro Lerer ◽  
Hans Supèr ◽  
Matthias S. Keil

The visual system is highly sensitive to spatial context for encoding luminance patterns. Context sensitivity inspired the proposal of many neural mechanisms for explaining the perception of luminance (brightness). Here we propose a novel computational model for estimating the brightness of many visual illusions. We hypothesize that many aspects of brightness can be explained by a predictive coding mechanism, which reduces the redundancy in edge representations on the one hand, while non-redundant activity is enhanced on the other (response equalization). Response equalization is implemented with a dynamic filtering process, which (dynamically) adapts to each input image. Dynamic filtering is applied to the responses of complex cells in order to build a gain control map. The gain control map then acts on simple cell responses before they are used to create a brightness map via activity propagation. Our approach is successful in predicting many challenging visual illusions, including contrast effects, assimilation, and reverse contrast.

Author summary: We hardly notice that what we see is often different from the physical world “outside” of the brain. This means that the visual experience that the brain actively constructs may be different from the actual physical properties of objects in the world. In this work, we propose a hypothesis about how the visual system of the brain may construct a representation for achromatic images. Since this process is not unambiguous, sometimes we notice “errors” in our perception, which cause visual illusions. The challenge for theorists, therefore, is to propose computational principles that recreate a large number of visual illusions and to explain why they occur. Notably, our proposed mechanism explains a broader set of visual illusions than any previously published proposal. We achieved this by trying to suppress predictable information. For example, if an image contains repetitive structures, then these structures are predictable and would be suppressed. In this way, non-predictable structures stand out. Predictive coding mechanisms act as early as in the retina (which enhances luminance changes but suppresses uniform regions of luminance), and our computational model holds that this principle also acts at the next stage in the visual system, where representations of perceived luminance (brightness) are created.
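A toy sketch of the response-equalization idea may help: simple-cell responses are approximated by a gradient, complex-cell energy by locally pooled squared responses, and a divisive gain map suppresses strong repetitive edges while boosting isolated ones. The filter choices here are assumptions and a drastic simplification of the paper's dynamic, multi-scale model:

```python
import numpy as np

def equalize_responses(img, eps=1e-3, k=5):
    """Toy sketch of response equalization via divisive gain control.

    A one-filter stand-in: horizontal gradient as the "simple cell",
    a box-blurred squared response as the "complex cell" energy, and
    a divisive gain map built from that energy.
    """
    x = img.astype(float)
    simple = np.diff(x, axis=1, prepend=x[:, :1])     # odd-symmetric edge response
    energy = simple ** 2
    kern = np.ones(k) / k
    # separable box blur as a stand-in for complex-cell spatial pooling
    pooled = np.apply_along_axis(lambda r: np.convolve(r, kern, 'same'), 1, energy)
    pooled = np.apply_along_axis(lambda c: np.convolve(c, kern, 'same'), 0, pooled)
    gain = 1.0 / (eps + np.sqrt(pooled))              # divisive gain control map
    return simple * gain                              # equalized edge responses
```

In regions of dense, repetitive edges the pooled energy is high, so the gain is low and those (predictable) responses are attenuated relative to isolated edges.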


2015 ◽  
Vol 20 (1) ◽  
pp. 015005 ◽  
Author(s):  
Huihui Wang ◽  
Raymond H. Cuijpers ◽  
Ming Ronnier Luo ◽  
Ingrid Heynderickx ◽  
Zhenrong Zheng

Perception ◽  
2019 ◽  
Vol 49 (1) ◽  
pp. 3-20
Author(s):  
Kei Kanari ◽  
Hirohiko Kaneko

We examined whether lightness is determined based on the experience of the relationship between a scene’s illumination and its spatial structure in actual environments. For this purpose, we measured some characteristics of scene structure and the illuminance in actual scenes and found some correlations between them. In the psychophysical experiments, a random-dot stereogram consisting of dots with uniform distribution was used to eliminate the effects of local luminance and texture contrasts. Participants matched the lightness of a presented target patch in the stimulus space to that of a comparison patch by adjusting the latter’s luminance. Results showed that the matched luminance tended to increase when the target patch was interpreted as receiving weak illumination in some conditions. These results suggest that the visual system can probably infer a scene’s illumination from a spatial structure without luminance distribution information under an illumination–spatial structure relation.


2012 ◽  
Vol 2012 ◽  
pp. 1-11 ◽  
Author(s):  
Nicolas Robitaille ◽  
Abderazzak Mouiha ◽  
Burt Crépeault ◽  
Fernando Valdivia ◽  
Simon Duchesne ◽  
...  

Intensity standardization in MRI aims at correcting scanner-dependent intensity variations. Existing simple and robust techniques aim at matching the input image histogram onto a standard, whereas we argue that standardization should aim at matching spatially corresponding tissue intensities. In this study, we present a novel automatic technique, called STI (for STandardization of Intensities), which not only shares the simplicity and robustness of histogram-matching techniques but also incorporates spatial tissue intensity information. STI uses joint intensity histograms to determine the intensity correspondence in each tissue between the input and standard images. We compared STI to an existing histogram-matching technique on two multicentric datasets, Pilot E-ADNI and ADNI, by measuring the intensity error with respect to the standard image after performing nonlinear registration. The Pilot E-ADNI dataset consisted of 3 subjects, each scanned at 7 different sites. The ADNI dataset consisted of 795 subjects scanned at more than 50 different sites. STI was superior to the histogram-matching technique, showing significantly better intensity matching for the brain white matter with respect to the standard image.
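For reference, the baseline that STI is compared against, plain histogram matching by quantile correspondence, can be sketched as follows. STI's per-tissue joint-histogram step is not attempted here:

```python
import numpy as np

def match_histogram(src, ref, bins=256):
    """Sketch of plain histogram matching between two intensity volumes.

    Map each source intensity to the reference intensity at the same
    cumulative-distribution (quantile) value; this is the classic
    standardization baseline, without STI's per-tissue modeling.
    """
    s_hist, s_edges = np.histogram(src.ravel(), bins=bins)
    r_hist, r_edges = np.histogram(ref.ravel(), bins=bins)
    s_cdf = np.cumsum(s_hist) / src.size
    r_cdf = np.cumsum(r_hist) / ref.size
    r_centers = 0.5 * (r_edges[:-1] + r_edges[1:])
    # for each source quantile, look up the reference intensity there
    mapped = np.interp(s_cdf, r_cdf, r_centers)
    idx = np.clip(np.digitize(src, s_edges) - 1, 0, bins - 1)
    return mapped[idx]
```

After matching, the source takes on the reference's intensity distribution globally, which is exactly why spatially corresponding tissues need not line up, the shortcoming STI addresses.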


2018 ◽  
Vol 7 (2) ◽  
pp. 6-11
Author(s):  
Gagandeep Kaur ◽  
Rajeev Kumar Dang

Image processing is a field that processes images along the horizontal and vertical axes to produce useful results. It deals with edge detection, image compression, noise removal, image segmentation, image identification, image retrieval, image variation, etc. Customarily, there are two techniques, text-based image retrieval (TBIR) and content-based image retrieval (CBIR), that are used for retrieving images according to their features. A TBIR system recovers an image from the database using annotations. CBIR retrieves images from a large database using the visual contents of the original image, called low-level features. These visual features are obtained by feature extraction and then matched against the input image. Histogram, color moment, color correlogram, Gabor filter, and wavelet transform are various CBIR techniques that can be used independently or combined to obtain better results. This paper describes a novel technique for fetching images from an image database using two low-level features, namely a color-based feature and a texture-based feature. Two techniques, the color correlogram (for color indexing) and the wavelet transform (for texture processing), are introduced.
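A minimal example of a low-level colour feature and a similarity measure may make the CBIR matching step concrete. The joint RGB histogram and histogram intersection below are standard simplifications and assumptions, not the paper's correlogram/wavelet combination:

```python
import numpy as np

def color_histogram(img, bins=8):
    """Sketch of a low-level colour feature for CBIR.

    Quantise each RGB channel into `bins` levels and count joint
    occurrences.  (A colour correlogram additionally records spatial
    co-occurrence of colours, which this simpler descriptor omits.)
    """
    q = (img.astype(int) * bins) // 256       # quantise each channel to 0..bins-1
    codes = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()                  # normalised joint colour histogram

def intersection(h1, h2):
    return np.minimum(h1, h2).sum()           # 1.0 = identical distributions
```

At query time, the database image whose histogram has the highest intersection with the query's histogram is returned as the best match.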



Color retinal image enhancement plays an important role in improving image quality for reliable diagnosis. For this problem domain, a simple and effective algorithm for image contrast and color balance enhancement, named Ordering Gap Adjustment and Brightness Specification (OGABS), is proposed. The OGABS algorithm first constructs a specified histogram by adjusting the gaps of the input image histogram, ordered by its probability density function, under a gap limiter and Hubbard's dynamic range specification. The specified histogram then serves as the target for redistributing the intensity values of the input image via histogram matching. Finally, color balance is improved by specifying the image brightness based on Hubbard's brightness specification. The OGABS algorithm is implemented as a MATLAB program, and its performance has been evaluated on data from the STARE and DiaretDB0 datasets. The results show that our algorithm enhances image contrast and produces a good color balance with a pleasing natural appearance and standard lesion colors.
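A very rough sketch of the gap-adjustment idea is given below; the gap formula, the limiter, and the output range are illustrative assumptions, not the OGABS equations (and the brightness-specification step is omitted):

```python
import numpy as np

def gap_adjusted_target(img, gap_limit=8, lo=0, hi=255):
    """Rough sketch of an "ordering gap adjustment" style remapping.

    Occupied gray levels are re-spaced so that levels with higher
    probability receive wider gaps (capped by `gap_limit`), and the
    result is rescaled into the target dynamic range [lo, hi].
    """
    hist = np.bincount(img.ravel(), minlength=256)
    pdf = hist / hist.sum()
    levels = np.flatnonzero(hist)             # occupied gray levels, in order
    gaps = np.minimum(1 + pdf[levels] * 255, gap_limit)   # gap limiter
    pos = np.cumsum(gaps)
    pos = lo + (pos - pos[0]) * (hi - lo) / max(pos[-1] - pos[0], 1e-9)
    lut = np.zeros(256)
    lut[levels] = pos                         # new location of each level
    return np.clip(np.round(lut[img]), 0, 255).astype(np.uint8)
```

Because frequent levels are pushed further apart, the dominant structures gain contrast, which is the effect the gap adjustment is designed to produce before histogram matching.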

