contrast normalization
Recently Published Documents


TOTAL DOCUMENTS

57
(FIVE YEARS 11)

H-INDEX

15
(FIVE YEARS 1)

2021 ◽  
Author(s):  
A.E. Alyokhina ◽  
D.S. Rusin ◽  
E.V. Dmitriev ◽  
A.N. Safonova

With the advent of spaceborne instruments providing panchromatic images of ultra-high spatial resolution (< 1 m), methods for thematic processing of aerospace imagery have increasingly combined the textural and spectral features of the objects under study. In this paper, we consider the problem of classifying forest canopy structures based on textural analysis of multispectral and panchromatic WorldView-2 images. Traditionally, a statistical approach is used to solve this problem, based on constructing gray-level co-occurrence distributions and computing statistical moments that have significant regression relationships with the structural parameters of stands. An alternative approach to extracting texture features is based on frequency analysis of images; to date, one of the most promising methods of this kind is wavelet scattering. Compared with traditional approaches based on the Fourier transform, wavelet analysis identifies not only characteristic signal frequencies but also characteristic spatial scales, which is fundamentally important for the textural analysis of spatially inhomogeneous images. This paper applies a more general approach to texture segmentation using the U-Net convolutional neural network. This architecture is a sequence of convolution-pooling layers: in the first stage, the original image is downsampled and its content is captured; in the second stage, the recognized classes are precisely localized while the resolution is upsampled back to that of the original. The RMSProp optimizer was used to train the network. At the preprocessing stage, the contrast of image fragments is increased using the global contrast normalization algorithm.
Numerical experiments using expert reference data have shown that the proposed method segments the structural classes of the forest canopy with high accuracy.
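The global contrast normalization preprocessing step mentioned above can be sketched as follows (a minimal NumPy version; the scale `s`, regularizer `lam`, and floor `eps` are illustrative defaults, not the authors' settings):

```python
import numpy as np

def global_contrast_normalize(X, s=1.0, lam=10.0, eps=1e-8):
    """Globally normalize the contrast of an image fragment.

    Subtracts the mean intensity, then rescales so the overall
    contrast (root-mean-square deviation) is approximately s.
    lam regularizes very low-contrast inputs; eps avoids division by zero.
    """
    X = X.astype(np.float64)
    X = X - X.mean()                          # zero-center intensities
    contrast = np.sqrt(lam + np.mean(X ** 2)) # global RMS contrast
    return s * X / max(contrast, eps)

# Example: a low-contrast fragment is stretched toward unit RMS contrast
patch = np.random.default_rng(0).normal(loc=120.0, scale=3.0, size=(64, 64))
out = global_contrast_normalize(patch, lam=0.0)
```

With `lam=0` the output has zero mean and unit RMS contrast; a positive `lam` damps the rescaling of nearly flat fragments.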


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christopher DiMattina ◽  
Curtis L. Baker

Abstract: Segmenting scenes into distinct surfaces is a basic visual perception task, and luminance differences between adjacent surfaces often provide an important segmentation cue. However, mean luminance differences between two surfaces may exist without any sharp change in albedo at their boundary, arising instead from differences in the proportion of small light and dark areas within each surface, e.g. texture elements, which we refer to as a luminance texture boundary. Here we investigate the performance of human observers segmenting luminance texture boundaries. We demonstrate that a simple model involving a single stage of filtering cannot explain observer performance unless it incorporates contrast normalization. Performing additional experiments in which observers segment luminance texture boundaries while ignoring superimposed luminance step boundaries, we demonstrate that the one-stage model, even with contrast normalization, cannot explain performance. We then present a Filter–Rectify–Filter model positing two cascaded stages of filtering, which fits our data well and explains observers' ability to segment luminance texture boundary stimuli in the presence of interfering luminance step boundaries. We propose that such computations may be useful for boundary segmentation in natural scenes, where shadows often give rise to luminance step edges that do not correspond to surface boundaries.
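For illustration, a single filtering stage with divisive contrast normalization, of the kind the one-stage model incorporates, can be sketched as follows (the 2×2 edge filter, pooling window, and semi-saturation constant are illustrative assumptions, not the authors' model):

```python
import numpy as np

def normalized_filter_response(image, kernel, sigma=0.1):
    """One linear filtering stage with divisive contrast normalization.

    The filter response at each location is divided by the local RMS
    contrast of the underlying patch, making responses roughly
    contrast-invariant. sigma is a semi-saturation constant that
    prevents blow-up in blank regions.
    """
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            drive = np.sum(patch * kernel)  # linear filter response
            local_contrast = np.sqrt(np.mean((patch - patch.mean()) ** 2))
            out[i, j] = drive / (sigma + local_contrast)
    return out

# A crude vertical-edge filter applied to a step edge at two contrasts:
edge = np.array([[-1.0, 1.0], [-1.0, 1.0]])
lo = np.hstack([np.zeros((8, 8)), 0.1 * np.ones((8, 8))])
hi = np.hstack([np.zeros((8, 8)), 1.0 * np.ones((8, 8))])
r_lo = normalized_filter_response(lo, edge)
r_hi = normalized_filter_response(hi, edge)
```

Without normalization the peak responses to the two steps would differ by the full 10× contrast ratio; with divisive normalization the ratio is substantially compressed.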


2020 ◽  
Vol 70 (2) ◽  
pp. 229-233
Author(s):  
K.S. Imanbaev ◽  
Zh.Zh. Kozhamkulova ◽  
Zh.T. Aituganova ◽  
M.M. Sydykova ◽  
...  

In this paper, we consider solutions to fingerprint recognition problems: improving the image-processing algorithm that reduces Gaussian noise (known as "white noise"), methods for adjusting the intensity of contrast normalization and the parameters used, and restoring high-noise areas in fingerprint images. Image preprocessing, which involves extracting the papillary ridge pattern of the prints, is one of the most important parts of the recognition process. Noise, combined with overlapping ridges, complicates this extraction; accordingly, several methods of ridge extraction and key-point detection, together with different image file formats, are considered.
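The Gaussian-noise reduction step mentioned above can be sketched as follows (a minimal separable Gaussian blur in NumPy; the kernel radius and sigma are illustrative choices, not the authors'):

```python
import numpy as np

def gaussian_blur(image, sigma=1.0, radius=2):
    """Suppress Gaussian (white) noise with a separable Gaussian filter.

    A 1-D Gaussian kernel is applied along rows and then columns,
    which is equivalent to (and cheaper than) a full 2-D convolution.
    """
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()                              # normalize to preserve mean level
    padded = np.pad(image.astype(np.float64), radius, mode="edge")
    # Convolve rows, then columns, with the 1-D kernel.
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 255, 32), (32, 1))  # smooth ridge-like gradient
noisy = clean + rng.normal(0, 25, clean.shape)     # additive white noise
denoised = gaussian_blur(noisy, sigma=1.5, radius=3)
```

The blurred image is measurably closer to the clean original than the noisy input, at the cost of some fine-ridge detail, which is why the restoration of high-noise areas is treated as a separate step.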


2020 ◽  
Author(s):  
Christopher DiMattina ◽  
Curtis L. Baker

Abstract: Segmenting scenes into distinct surfaces is a basic visual perception task, and luminance differences between adjacent surfaces often provide an important segmentation cue. However, mean luminance differences between two surfaces may exist without any sharp change in albedo at their boundary, arising instead from differences in the proportion of small light and dark areas within each surface, e.g. texture elements, which we refer to as a luminance texture boundary. Here we investigate the performance of human observers segmenting luminance texture boundaries. We demonstrate that a simple model involving a single stage of filtering cannot explain observer performance unless it incorporates contrast normalization. Performing additional experiments in which observers segment luminance texture boundaries while ignoring superimposed luminance step boundaries, we demonstrate that the one-stage model, even with contrast normalization, cannot explain performance. We then present a Filter–Rectify–Filter (FRF) model positing two cascaded stages of filtering, which fits our data well and explains observers' ability to segment luminance texture boundary stimuli in the presence of interfering luminance step boundaries. We propose that such computations may be useful for boundary segmentation in natural scenes, where shadows often give rise to luminance step edges that do not correspond to surface boundaries.


2020 ◽  
Vol 194 ◽  
pp. 102947
Author(s):  
Mahdi Rad ◽  
Peter M. Roth ◽  
Vincent Lepetit

Author(s):  
Tejas Rana

Various methods can be used for face recognition and detection; two of the main experiments here evaluate, first, the impact of facial-landmark localization on face recognition performance and, second, the impact of extracting the HOG from a regular grid and at multiple scales. We study the question of feature sets for robust visual object recognition. Histogram of Oriented Gradients (HOG) descriptors outperform other existing methods such as edge- and gradient-based descriptors. We examine the influence of each stage of the computation on performance, concluding that fine-scale gradients, relatively coarse spatial binning, fine orientation binning, and high-quality local contrast normalization in overlapping descriptor patches are all important for good results. Comparative experiments show that although HOG is a simple feature descriptor, the proposed HOG feature achieves good results with much lower computational time.
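A stripped-down HOG computation, showing the fine-scale gradient, orientation-binning, and local contrast normalization stages described above, might look like this (a sketch only: standard HOG normalizes over overlapping 2×2 cell blocks, which is simplified here to per-cell L2 normalization):

```python
import numpy as np

def hog_features(image, cell=8, bins=9):
    """Minimal Histogram of Oriented Gradients descriptor.

    Computes per-pixel gradient magnitude and unsigned orientation,
    accumulates orientation histograms over cell x cell regions
    (coarse spatial binning), then applies local contrast (L2)
    normalization to each cell's histogram.
    """
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)                     # fine-scale gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    H, W = img.shape
    ch, cw = H // cell, W // cell
    hist = np.zeros((ch, cw, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            # accumulate gradient magnitudes into orientation bins
            hist[i, j] = np.bincount(bin_idx[sl].ravel(),
                                     weights=mag[sl].ravel(),
                                     minlength=bins)
    # local contrast normalization: L2-normalize each cell's histogram
    norm = np.sqrt((hist ** 2).sum(axis=2, keepdims=True)) + 1e-8
    return (hist / norm).reshape(-1)

# Descriptor for a 32x32 image containing a single vertical edge
img = np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
feat = hog_features(img)
```

For a 32×32 image with 8×8 cells and 9 orientation bins this yields a 4×4×9 = 144-dimensional descriptor.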


Author(s):  
Asha K. ◽  
Krishnappa H.K

Purpose: The main purpose of the proposed research work is to segment characters from handwritten Kannada documents. The reason for segmentation is to support the implementation of a handwriting recognition system for the Kannada language. Methodology: To segment characters, the input document goes through grayscale conversion, denoising, contrast normalization, and binarization. Result: Documents collected from ICDAR-2013 and ICDAR-2015 were used for the experiments, obtaining 100% accuracy for line segmentation and 96% accuracy for character segmentation. Conclusion: To further improve the accuracy of character segmentation, other preprocessing steps such as skew detection and correction shall be considered.
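The preprocessing chain described in the Methodology can be sketched as follows (a minimal NumPy version using min-max contrast normalization and Otsu binarization; the authors' exact denoising and normalization methods are not specified in the abstract):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # class-0 probability up to threshold t
    mu = np.cumsum(p * np.arange(256))    # cumulative intensity mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)      # empty classes contribute nothing
    return int(np.argmax(sigma_b))

def preprocess(gray):
    """Contrast normalization (min-max stretch) followed by binarization."""
    g = gray.astype(np.float64)
    g = 255.0 * (g - g.min()) / max(g.max() - g.min(), 1e-8)
    t = otsu_threshold(g)
    return (g > t).astype(np.uint8)       # 1 = background, 0 = ink

# Dark strokes (ink) on a dim, low-contrast page
page = np.full((20, 20), 120.0)
page[5:15, 9:11] = 60.0                   # a vertical stroke
binary = preprocess(page)
```

After the stretch, even a low-contrast page separates cleanly into ink (0) and background (1), which is what the subsequent line and character segmentation stages operate on.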


Cells ◽  
2019 ◽  
Vol 8 (10) ◽  
pp. 1299 ◽  
Author(s):  
Valls-Lacalle ◽  
Negre-Pujol ◽  
Rodríguez ◽  
Varona ◽  
Valera-Cañellas ◽  
...  

Abstract: Connexin 43 (Cx43) is essential for cardiac electrical coupling, but its effects on myocardial fibrosis are controversial. Here, we analyzed the role of Cx43 in myocardial fibrosis caused by angiotensin II (AngII) using Cx43fl/fl and Cx43Cre-ER(T)/fl inducible knock-out (Cx43 content: 50%) mice treated with vehicle or 4-hydroxytamoxifen (4-OHT) to induce a Cre-ER(T)-mediated global deletion of the Cx43 floxed allele. Myocardial collagen content was enhanced by AngII in all groups (n = 8–10/group, p < 0.05). However, animals with partial Cx43 deficiency (vehicle-treated Cx43Cre-ER(T)/fl) had a significantly higher AngII-induced collagen accumulation that reverted when treated with 4-OHT, which abolished Cx43 expression. The exaggerated fibrotic response to AngII in partially deficient Cx43Cre-ER(T)/fl mice was associated with enhanced p38 MAPK activation and was not evident in Cx43 heterozygous (Cx43+/-) mice. In contrast, normalization of interstitial collagen in 4-OHT-treated Cx43Cre-ER(T)/fl animals correlated with enhanced MMP-9 activity, IL-6 and NOX2 mRNA expression, and macrophage content, and with reduced α-SMA and SM22 in isolated fibroblasts. In conclusion, our data demonstrate an exaggerated, p38 MAPK-dependent fibrotic response to AngII in partially deficient Cx43Cre-ER(T)/fl mice, and a paradoxical normalization of collagen deposition in animals with an almost complete Cx43 ablation, an effect associated with increased MMP-9 activity and inflammatory response and reduced fibroblast differentiation.

