Skin Segmentation
Recently Published Documents

TOTAL DOCUMENTS: 149 (five years: 35)
H-INDEX: 15 (five years: 2)

Author(s):  
Harsha B. K.

Abstract: Digital color images can be represented in a variety of color spaces. Red-Green-Blue (RGB) is the most commonly used, and it can be transformed into a luminance, blue-difference, red-difference (YCbCr) representation. Features defined on these color pixels carry strong information about whether or not they belong to human skin. A novel color-based feature extraction method is proposed in this paper, which makes use of red, green, blue, luminance, hue, and saturation information. The proposed method is applied to an image database that contains people of various ages, races, and genders. The obtained features are used to segment human skin with the Support Vector Machine (SVM) algorithm, and the promising performance of 89.86% accuracy is compared to the most commonly used methods in the literature. Keywords: Skin segmentation, SVM, feature extraction, digital images
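The sketch below is one plausible reading of this pipeline, not the authors' exact method: per-pixel red, green, blue, luminance, hue, and saturation values are stacked as a six-dimensional feature vector and fed to an SVM. The kernel, normalization, and data-loading details are assumptions.

import cv2
import numpy as np
from sklearn.svm import SVC

def pixel_features(bgr_image):
    # Build six features per pixel: B, G, R, luminance (Y from YCrCb),
    # and hue/saturation (H, S from HSV).
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    feats = np.concatenate([bgr_image, ycrcb[:, :, :1], hsv[:, :, :2]], axis=2)
    return feats.reshape(-1, feats.shape[2]).astype(np.float32) / 255.0

def train_skin_svm(images, masks):
    # images: list of BGR images; masks: matching binary skin masks (1 = skin).
    # In practice the pixels would be subsampled to keep SVM training tractable.
    X = np.vstack([pixel_features(img) for img in images])
    y = np.concatenate([m.reshape(-1) for m in masks])
    clf = SVC(kernel="rbf", C=1.0)  # kernel and C are assumptions, not the paper's settings
    clf.fit(X, y)
    return clf

def segment_skin(clf, image):
    # Predict a skin/non-skin label for every pixel and reshape back to image size.
    labels = clf.predict(pixel_features(image))
    return (labels.reshape(image.shape[:2]) * 255).astype(np.uint8)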


2021 ◽  
Vol 38 (6) ◽  
pp. 1843-1851
Author(s):  
Ouarda Soltani ◽  
Souad Benabdelkader

The human color skin image database SFA, designed specifically to support research in face recognition, is a very important resource for the challenging task of skin detection and has shown high performance compared with other existing databases. SFA provides multiple skin and non-skin samples which, in various combinations, allow new and potentially more effective samples to be created. This aspect is investigated in the present paper by creating four new representative skin samples according to the rules of minimum, maximum, mean, and median. The obtained samples are then exploited for skin segmentation on the basis of the well-known Euclidean and Manhattan distance metrics, and their performance is compared against that of the skin samples originally provided by SFA. Simulation results on both the SFA and UTD (University of Texas at Dallas) color face databases indicate that detection rates higher than 92% can be achieved with either metric.
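A minimal sketch of this kind of distance-rule classification follows, assuming the four representatives are obtained by collapsing a stack of SFA skin patches channel-wise and that a fixed distance threshold is used; the threshold value is illustrative, not taken from the paper.

import numpy as np

def representative_samples(skin_patches):
    # skin_patches: array of SFA skin samples with shape (N, H, W, 3).
    # Collapse all pixels into four representative RGB colours.
    pixels = skin_patches.reshape(-1, 3).astype(np.float64)
    return {
        "min": pixels.min(axis=0),
        "max": pixels.max(axis=0),
        "mean": pixels.mean(axis=0),
        "median": np.median(pixels, axis=0),
    }

def segment(image, representative, metric="euclidean", threshold=60.0):
    # Label a pixel as skin when its colour distance to the representative is small.
    diff = image.reshape(-1, 3).astype(np.float64) - representative
    if metric == "euclidean":
        dist = np.sqrt((diff ** 2).sum(axis=1))
    else:  # Manhattan (city-block) distance
        dist = np.abs(diff).sum(axis=1)
    return (dist < threshold).reshape(image.shape[:2])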


2021 ◽  
Author(s):  
Matthieu Scherpf ◽  
Hannes Ernst ◽  
Leo Misera ◽  
Hagen Malberg ◽  
Martin Schmidt

2021 ◽  
Vol 1 (1) ◽  
pp. 71-80
Author(s):  
Febri Damatraseta ◽  
Rani Novariany ◽  
Muhammad Adlan Ridhani

BISINDO is one of the Indonesian sign languages, yet few facilities support it, which makes daily life difficult for deaf people. This research therefore offers a recognition and translation system that converts the BISINDO alphabet into text, so that deaf people can communicate in both directions. The main problem encountered in this study is the small dataset. The research tests hand gesture recognition by comparing two CNN architectures, LeNet-5 and AlexNet, to determine which classifier performs better when each class contains fewer than 1000 images. Testing on still images with the trained models shows that the AlexNet architecture is the better choice: it reaches a prediction accuracy of 76%, while the LeNet-5 model reaches only 19%. When the AlexNet model is used in the proposed system, it predicts correctly only 60% of the time. Keywords: Sign language, BISINDO, Computer Vision, Hand Gesture Recognition, Skin Segmentation, CIELab, Deep Learning, CNN.
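As a rough illustration of the comparison described here, the Keras sketch below defines a LeNet-5-style model and a scaled-down AlexNet-style model that could be trained on the same small gesture dataset; the input resolution, class count, and layer widths are assumptions rather than the paper's exact architectures.

from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 26           # assumed: one class per BISINDO alphabet letter
INPUT_SHAPE = (64, 64, 3)  # assumed input resolution after skin segmentation

def lenet5():
    # LeNet-5-style network: two convolution/pooling stages and two dense layers.
    return keras.Sequential([
        keras.Input(shape=INPUT_SHAPE),
        layers.Conv2D(6, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def alexnet_small():
    # AlexNet-style network scaled down for 64x64 inputs.
    return keras.Sequential([
        keras.Input(shape=INPUT_SHAPE),
        layers.Conv2D(96, 7, strides=2, activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

for build in (lenet5, alexnet_small):
    model = build()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_x, train_y, validation_data=(val_x, val_y), epochs=30)
    # (train_x/train_y are placeholders for the gesture images and labels)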


2021 ◽  
Vol 10 (02) ◽  
pp. 13-21
Author(s):  
M. Ananthi ◽  
Bharathram. P ◽  
Rahul Narayanan. L

Driver distraction on roadways contributes to the death of approximately 1.2 million people every year around the globe. Even though several improvements have been made in road and vehicle design, the total number of accidents remains high: in 2015, 3,477 people were killed and 391,000 injured in motor vehicle crashes involving distracted drivers. This paper aims to prevent and reduce the rate of motor vehicle accidents caused by human error and distraction during driving. We study the different postures of the driver by means of hand localization, skin segmentation, and facial data, and we propose a reliable deep-learning CNN model that attains 92% accuracy.
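The abstract does not detail how hand localization and skin segmentation are combined; one generic way to drive the localization step from a skin mask is sketched below, where the YCrCb thresholds and minimum blob area are assumptions rather than the paper's settings.

import cv2
import numpy as np

SKIN_LOW = np.array([0, 135, 85], dtype=np.uint8)     # assumed YCrCb skin range
SKIN_HIGH = np.array([255, 180, 135], dtype=np.uint8)

def localize_skin_regions(frame_bgr, min_area=1500):
    # Threshold the frame in YCrCb space and return bounding boxes of large
    # skin-coloured blobs as candidate hand/face regions for the posture model.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOW, SKIN_HIGH)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]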


2021 ◽  
pp. 016173462110141
Author(s):  
Felix Q. Jin ◽  
Anna E. Knight ◽  
Adela R. Cardones ◽  
Kathryn R. Nightingale ◽  
Mark L. Palmeri

Correctly calculating skin stiffness with ultrasound shear wave elastography techniques requires an accurate measurement of skin thickness. We developed and compared two algorithms, a thresholding method and a deep learning method, to measure skin thickness on ultrasound images. Here, we also present a framework for weakly annotating an unlabeled dataset in a time-effective manner to train the deep neural network. Segmentation labels for training were proposed using the thresholding method and validated with visual inspection by a human expert reader. We reduced decision ambiguity by only inspecting segmentations at the center A-line. This weak annotation approach facilitated validation of over 1000 segmentation labels in 2 hours. A lightweight deep neural network that segments entire 2D images was designed and trained on this weakly-labeled dataset. Averaged over six folds of cross-validation, segmentation accuracy was 57% for the thresholding method and 78% for the neural network. In particular, the network was better at finding the distal skin margin, which is the primary challenge for skin segmentation. Both algorithms have been made publicly available to aid future applications in skin characterization and elastography.
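The paper's thresholding algorithm is not spelled out in the abstract; the sketch below shows one simple per-A-line variant, assuming an envelope-detected B-mode image in which skin appears brighter than the surrounding coupling medium. The relative threshold and axial spacing values are placeholders, not the authors' parameters.

import numpy as np

def threshold_skin_thickness(bmode, rel_threshold=0.5, axial_spacing_mm=0.03):
    # bmode: 2D ultrasound image with shape (axial samples, A-lines).
    # For each A-line, take the first and last samples brighter than a fraction
    # of that line's peak as the proximal and distal skin margins.
    n_samples, n_lines = bmode.shape
    thickness_mm = np.full(n_lines, np.nan)
    for j in range(n_lines):
        line = bmode[:, j]
        above = np.flatnonzero(line >= rel_threshold * line.max())
        if above.size:
            proximal, distal = above[0], above[-1]
            thickness_mm[j] = (distal - proximal) * axial_spacing_mm
    return thickness_mm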


2021 ◽  
Vol 9 (2) ◽  
pp. 1112-1116
Author(s):  
Manisha A. et al.

Communication is the interchange of ideas, information, or messages between people. Sign language is formed with the hands and other movements, including facial expressions and body posture, and is used by people who are unable to speak or hear; among the different sign languages, Tamil sign language is a regional one. The aim of this work is real-time recognition of Tamil Sign Language (TSL) and its conversion into Tamil letters. A convolutional neural network (CNN) is introduced as the classifier, trained on Tamil sign language gestures and used to predict them. The process is divided into two sections: training a Keras model, and skin segmentation of the hand gestures within the region of interest. Training proceeds in two phases over a training set and a testing set of hand gesture images. The model achieves 100% accuracy on a white background under good lighting conditions and 97.5% under low lighting conditions.
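A minimal sketch of the skin-segmentation stage described here is shown below: the region of interest is thresholded in HSV space, cleaned morphologically, and resized for the trained Keras classifier. The HSV skin range, ROI handling, and input size are assumptions for illustration.

import cv2
import numpy as np

SKIN_LOW = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV skin range (OpenCV H in 0..179)
SKIN_HIGH = np.array([20, 255, 255], dtype=np.uint8)

def hand_mask(patch_bgr):
    # Threshold the ROI in HSV space and clean the mask with morphological
    # opening and closing so that mainly the hand region remains.
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def prepare_for_cnn(frame_bgr, roi, size=(64, 64)):
    # Crop the region of interest, keep only skin-coloured pixels, and resize to
    # the (assumed) CNN input size before prediction with the trained Keras model.
    x, y, w, h = roi
    patch = frame_bgr[y:y + h, x:x + w]
    masked = cv2.bitwise_and(patch, patch, mask=hand_mask(patch))
    return cv2.resize(masked, size).astype(np.float32) / 255.0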

