Computer vision vs human perception: Novel preprocessing technique to reduce inter-character similarity of Bangla alphabet

Author(s):  
T. M. Chowdhury ◽  
M. A. Naser ◽  
Faisal Ahmed
2011 ◽  
Vol 2 (2) ◽  
pp. 1
Author(s):  
Luciana Nedel ◽  
Anderson Maciel ◽  
Carla Dal Sasso Freitas ◽  
Claudio Jung ◽  
Manuel Oliveira ◽  
...  

The Computer Graphics, Image Processing and Interaction (CGIP) group at UFRGS concentrates expertise from many different and complementary graphics-related domains. In this paper we introduce the group and present our research lines and some ongoing projects. We selected mainly the projects related to 3D interaction and navigation, which include applications such as massive data visualization, surgery planning and simulation, tracking and computer vision algorithms, and modeling approaches for human perception and the natural world.


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255109
Author(s):  
Mitchell J. P. Van Zuijlen ◽  
Hubert Lin ◽  
Kavita Bala ◽  
Sylvia C. Pont ◽  
Maarten W. A. Wijntjes

In this paper, we capture and explore the painterly depictions of materials to enable the study of depiction and perception of materials through the artists’ eye. We annotated a dataset of 19k paintings with 200k+ bounding boxes from which polygon segments were automatically extracted. Each bounding box was assigned a coarse material label (e.g., fabric) and half were also assigned a fine-grained label (e.g., velvety, silky). The dataset in its entirety is available for browsing and downloading at materialsinpaintings.tudelft.nl. We demonstrate the cross-disciplinary utility of our dataset by presenting novel findings across human perception, art history, and computer vision. Our experiments include a demonstration of how painters create convincing depictions using a stylized approach. We further provide an analysis of the spatial and probabilistic distributions of materials depicted in paintings, showing, for example, that strong patterns exist for material presence and location. Furthermore, we demonstrate how paintings could be used to build more robust computer vision classifiers by learning a more perceptually relevant feature representation. Additionally, we demonstrate that training classifiers on paintings could be used to uncover hidden perceptual cues by visualizing the features used by the classifiers. We conclude that our dataset of painterly material depictions is a rich source for gaining insights into the depiction and perception of materials across multiple disciplines and hope that the release of this dataset will drive multidisciplinary research.


2011 ◽  
Vol 82 (3) ◽  
pp. 299-309 ◽  
Author(s):  
Javier Silvestre-Blanes ◽  
Joaquin Berenguer-Sebastiá ◽  
Rubén Pérez-Lloréns ◽  
Ignacio Miralles ◽  
Jorge Moreno

The appearance of wrinkling in textile products after domestic washing and drying is currently measured and evaluated by comparing the fabric with standard replicas. This kind of evaluation has certain drawbacks, the most significant of which are its subjectivity and its limitations when used with garments. In this paper, we present an automated wrinkling evaluation system. The system can process fabrics as well as any type of garment, regardless of size or the pattern on the material. The system allows us to label different parts of the garment; since different garment parts have different influence on human perception, this labeling enables the use of weighting to improve the correlation with the human visual system. The system has been tested on different garments, showing good performance and correlation with human perception.
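The weighting idea described in the abstract can be sketched as a simple weighted average over per-part wrinkle grades. This is a minimal illustration, not the authors' implementation: the function name, the grade scale, and the weight values are all assumptions.

```python
def weighted_wrinkle_grade(part_grades, part_weights):
    """Combine per-part wrinkle grades into one garment score.

    part_grades: dict mapping garment part -> wrinkle grade
                 (e.g., a replica-scale grade from 1 = heavily
                 wrinkled to 5 = smooth; scale is an assumption)
    part_weights: dict mapping garment part -> relative perceptual
                  importance (higher = more visible to an observer)
    """
    total_weight = sum(part_weights[p] for p in part_grades)
    # Weighted average: visually prominent parts (e.g., the front
    # panel) pull the overall grade toward their own score.
    return sum(part_grades[p] * part_weights[p] for p in part_grades) / total_weight
```

For example, weighting a garment's front panel three times more heavily than a sleeve makes the overall score track the front panel's grade more closely, which is the stated goal of correlating with human perception.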


2019 ◽  
Vol 9 (21) ◽  
pp. 4542 ◽  
Author(s):  
Marco Leo ◽  
Pierluigi Carcagnì ◽  
Cosimo Distante ◽  
Pier Luigi Mazzeo ◽  
Paolo Spagnolo ◽  
...  

The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and produce quick, objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production; most of the scientific literature addresses the easier task of recognizing whether a facial expression is present or not. Some attempts to address this challenging task exist, but they do not provide a comprehensive study based on comparing human and automatic outcomes in quantifying children’s ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning. Finally, they generally focus on a homogeneous (in terms of cognitive capabilities) group of individuals. To fill this gap, in this paper some advanced computer vision and machine learning strategies are integrated into a framework aimed at computationally analyzing how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) with the aim of monitoring the facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model the individual ability to produce facial expressions. The gathered computational outcomes have been correlated with evaluations provided by psychologists, and the evidence shows that the proposed framework can be effectively exploited to analyze in depth the emotional competence of ASD children in producing facial expressions.


2014 ◽  
Vol 989-994 ◽  
pp. 4123-4126 ◽  
Author(s):  
Ching Hung Su ◽  
Huang Sen Chiu ◽  
Jui Hung Hung ◽  
Tsai Ming Hsieh

The visual attributes of color are suitable for both human perception and computer vision. A color space is a model for representing the intensity values of color. We propose a comparison and analysis of RGB-based and HSV-based image retrieval. We transfer the image retrieval problem to sequence comparison, and subsequently use color sequence comparison between the RGB and HSV color features to compare and analyze the images in the database.
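As a rough illustration of the RGB-vs-HSV comparison described above (not the authors' method), the sketch below converts RGB pixels to HSV with Python's standard `colorsys` module and compares images via coarse hue histograms. The function names, the histogram feature, and the intersection similarity are all illustrative assumptions.

```python
import colorsys

def rgb_image_to_hsv(pixels):
    """Convert a list of (r, g, b) tuples in [0, 255] to (h, s, v) in [0, 1]."""
    return [colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            for r, g, b in pixels]

def hue_histogram(hsv_pixels, bins=8):
    """Coarse normalized hue histogram: one simple color feature for retrieval."""
    hist = [0] * bins
    for h, s, v in hsv_pixels:
        hist[min(int(h * bins), bins - 1)] += 1
    total = sum(hist) or 1
    return [count / total for count in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means identical normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

A retrieval system could rank database images by `histogram_intersection` against the query's histogram; the same pipeline with raw RGB channel histograms would give the RGB-based baseline for comparison.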


2018 ◽  
Vol 7 (4.38) ◽  
pp. 1187
Author(s):  
Vladimir Mokshin ◽  
Ildar Sayfudinov ◽  
Svetlana Yudina ◽  
Leonid Sharnin

This article reviews an approach to image segmentation based on highlighting significant contours in the image. Some structures in an image attract attention more than others because of distinctive properties: for example, such shapes can be smoother, longer, or closed. Such structures are called significant. The article reviews an approach for highlighting significant structures that represent candidate regions for identifying objects in video frames on mobile platforms. It would be expedient to use only these significant structures to increase the speed of image recognition by contour-oriented computer vision methods: allocating computing resources only to significant structures reduces the total computation time. Since an image consists of many pixels and the links between them, called edges, the significance of structures can be measured. The article presents an approach to measuring structure significance that largely coincides with human perception. Some image structures attract our attention without a systematic scan of the entire image; in most cases, this significance is a property of the structure as a whole, i.e., it cannot be attributed to isolated parts. This article presents a measure of significance based on length and curvature. The measure highlights structures characteristic of human perception, which often correspond to objects of interest in the image. A method for calculating significance is presented that uses an iterative scheme over a single local network of processing elements; an optimization approach is used in the network to produce a processed image that highlights significant locations.


1993 ◽  
Author(s):  
Susan M. Astley ◽  
I. Hutt ◽  
S. Adamson ◽  
Peter Miller ◽  
P. Rose ◽  
...  
