A 3D sunken-relief generation method of human faces from depth images of feature lines

Author(s):  
Yajie Xu ◽  
Lu Wang ◽  
Yanning Xu ◽  
Xiangxu Meng
2018 ◽  
Vol 4 (10) ◽  
pp. 6
Author(s):  
Khemchandra Patel ◽  
Dr. Kamlesh Namdev

Age changes cause major variations in the appearance of human faces. Because of many lifestyle factors, it is difficult to precisely predict how individuals will look in advancing years, or how they looked in "retreating" years. This paper reviews age variation methods and techniques, which are useful for capturing wanted fugitives, finding missing children, updating employee databases, and enhancing visual effects in film, television, and gaming. Many different methods for age variation are currently available, each with its own advantages and purpose. Because of its real-life applications, researchers have shown great interest in automatic facial age estimation. In this paper, different age variation methods and their prospects are reviewed. The paper highlights the latest methodologies and feature extraction methods used by researchers to estimate age, and also discusses the different types of classifiers used in this domain.


2018 ◽  
Author(s):  
Karel Kleisner ◽  
Šimon Pokorný ◽  
Selahattin Adil Saribay

In the present research, we took advantage of geometric morphometrics to propose a data-driven method for estimating the individual degree of facial typicality/distinctiveness for cross-cultural (and other cross-group) comparisons. Looking like a stranger in one's home culture may be somewhat stressful, but the same facial appearance might become advantageous within an outgroup population. To address this fit between facial appearance and cultural setting, we propose a simple measure of distinctiveness/typicality based on the position of an individual along the axis connecting the facial averages of the two populations under comparison. The more distant a face is from its ingroup population mean toward the outgroup mean, the more distinct it is (vis-à-vis the ingroup) and the more it resembles the outgroup standards. We compared this new measure with an alternative measure based on distance from the outgroup mean. The new measure showed a stronger association with rated facial distinctiveness than distance from the outgroup mean. Subsequently, we manipulated facial stimuli to reflect different levels of ingroup-outgroup distinctiveness and tested them in one of the target cultures. Perceivers were able to successfully distinguish outgroup from ingroup faces in a two-alternative forced-choice task. There was also some evidence that this task was harder when the two faces were closer along the axis connecting the facial averages of the two cultures. Future directions and potential applications of our proposed approach are discussed.
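The axis-based measure described above can be sketched as a scalar projection: a face's landmark vector is projected onto the axis running from the ingroup mean to the outgroup mean. This is a minimal illustration of the general idea, not the authors' implementation; the function name and the use of flattened landmark vectors are assumptions.

```python
import numpy as np

def axis_distinctiveness(face, ingroup_mean, outgroup_mean):
    """Position of a face along the axis connecting two group means.

    Inputs are flattened landmark configuration vectors (e.g.
    Procrustes-aligned coordinates). A score of 0.0 places the face at
    the ingroup mean and 1.0 at the outgroup mean; values in between
    indicate intermediate distinctiveness vis-a-vis the ingroup.
    (Illustrative sketch only, not the paper's actual code.)
    """
    axis = outgroup_mean - ingroup_mean
    # Scalar projection of the face's offset from the ingroup mean,
    # normalised so that the outgroup mean maps to exactly 1.0.
    return float(np.dot(face - ingroup_mean, axis) / np.dot(axis, axis))
```

A face lying exactly halfway between the two averages along the axis would score 0.5, matching the intuition that distinctiveness grows as a face moves from its ingroup mean toward the outgroup mean.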


1995 ◽  
Author(s):  
Jie Yang ◽  
Alex Waibel
Keyword(s):  

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most hearing people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of a scene, whereas the Kinect captures 3D images, which makes classification more accurate. Result: The Kinect produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between these pairs. We used hand gestures from Indian sign language; our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ Z2 board.
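The dataset setup described in the abstract (36 classes covering A-Z and 0-9, with an 80/20 train/test split) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the array shapes and the random data are placeholders standing in for the 46,339 depth images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder stand-in for the depth-image dataset described in the paper;
# a small synthetic array so the split logic is runnable here.
n_samples = 100
images = rng.random((n_samples, 64, 64))        # depth maps (assumed size)
labels = rng.integers(0, 36, size=n_samples)    # 36 classes: A-Z (0-25), 0-9 (26-35)

# 80/20 train/test split, as in the paper.
perm = rng.permutation(n_samples)
split = int(0.8 * n_samples)
train_idx, test_idx = perm[:split], perm[split:]
x_train, y_train = images[train_idx], labels[train_idx]
x_test, y_test = images[test_idx], labels[test_idx]
```

A CNN trained on `x_train`/`y_train` (the configuration the abstract reports as most accurate on depth images) would then be evaluated on the held-out 20%.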

