Development and evaluation of face robot to express various face shape

Author(s):  
K. Hayashi ◽  
Y. Onishi ◽  
K. Itoh ◽  
H. Miwa ◽  
A. Takanishi
2009 ◽  
Vol 29 (10) ◽  
pp. 2710-2712 ◽  
Author(s):  
Li-qiang DU ◽  
Peng JIA ◽  
Zong-tan ZHOU ◽  
De-wen HU

Author(s):  
Giuditta Battistoni ◽  
Diana Cassi ◽  
Marisabel Magnifico ◽  
Giuseppe Pedrazzi ◽  
Marco Di Blasio ◽  
...  

This study investigates the reliability and precision of anthropometric measurements collected from 3D images acquired under different conditions of head rotation. Various sources of error were examined, and the equivalence between craniofacial data generated from alternative head positions was assessed. 3D captures of a mannequin head were obtained with a stereophotogrammetric system (Face Shape 3D MaxiLine). Image acquisition was performed with no rotation and with various pitch, roll, and yaw angulations. Fourteen linear distances were measured on the 3D images. Several indices were used to quantify error magnitude, among them the acquisition error, the mean and maximum intra- and inter-operator measurement errors, the repeatability and reproducibility errors, the standard deviation, and the standard error of the errors. Two one-sided tests (TOST) were performed to assess the equivalence between measurements recorded at different head angulations. The maximum intra-operator error was very low (0.336 mm), closely followed by the acquisition error (0.496 mm). The maximum inter-operator error was 0.532 mm, and the highest degree of error was found in reproducibility (0.890 mm). Anthropometric measurements from the alternative acquisition conditions were statistically equivalent under TOST, with the exception of the Zygion (l)–Tragion (l) and Cheek (l)–Tragion (l) distances measured with pitch angulation compared to the no-rotation position. Face Shape 3D MaxiLine has sufficient accuracy for orthodontic and surgical use. Precision was not altered by head orientation, which makes image acquisition simpler and less dependent on precise head positioning than 2D photography.
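The TOST equivalence procedure used to compare head positions can be illustrated with a minimal Python sketch. The tost_paired helper, the sample values, and the 1 mm equivalence margin are assumptions for illustration only, not the study's data or analysis code.

```python
# Minimal sketch of a paired TOST equivalence test, assuming a 1 mm
# equivalence margin and synthetic measurement values (not study data).
import numpy as np
from scipy import stats

def tost_paired(a, b, margin, alpha=0.05):
    """Two one-sided tests (TOST) for paired measurements.

    Equivalence is claimed when both one-sided nulls
    H0a: mean(a - b) <= -margin and H0b: mean(a - b) >= +margin
    are rejected, i.e. when max(p_lower, p_upper) < alpha.
    """
    d = np.asarray(a, float) - np.asarray(b, float)
    n, mean, se = d.size, d.mean(), d.std(ddof=1) / np.sqrt(d.size)
    p_lower = stats.t.sf((mean + margin) / se, df=n - 1)   # H1: mean > -margin
    p_upper = stats.t.cdf((mean - margin) / se, df=n - 1)  # H1: mean < +margin
    p = max(p_lower, p_upper)
    return p, p < alpha

# Hypothetical example: one linear distance (in mm) measured on repeated
# captures with no rotation (a) and with pitch angulation (b).
rng = np.random.default_rng(0)
a = 85 + rng.normal(0, 0.3, 14)
b = a + rng.normal(0, 0.3, 14)
p_value, equivalent = tost_paired(a, b, margin=1.0)
print(f"TOST p = {p_value:.4f}, equivalent: {equivalent}")
```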


2020 ◽  
Vol 13 (3) ◽  
pp. 365-388
Author(s):  
Asha Sukumaran ◽  
Thomas Brindha

Purpose:
Humans are gifted with the ability to recognize others by their unique facial features, along with other demographic characteristics such as ethnicity (or race), gender and age. Over the decades, a large body of research in the psychological, biological and cognitive sciences has explored how the human brain characterizes, perceives and memorizes faces. Computational approaches have also been developed to gain further insight into this issue.
Design/methodology/approach:
This paper proposes a new race detection model using face shape features. The proposed model includes two key phases, namely (a) feature extraction and (b) detection. Feature extraction is the initial stage, in which face color and shape-based features are mined. Specifically, maximally stable extremal regions (MSER) and speeded-up robust features (SURF) are extracted as shape features, and a dense color feature is extracted as the color feature. Since the extracted features are high-dimensional, they are reduced using principal component analysis (PCA), a widely used remedy for the "curse of dimensionality". The dimensionally reduced features are then fed to a deep belief network (DBN), which detects the race. Further, to make the framework more effective at prediction, the DBN weights are fine-tuned with a new hybrid algorithm referred to as the lion mutated and updated dragon algorithm (LMUDA), a conceptual hybridization of the lion algorithm (LA) and the dragonfly algorithm (DA).
Findings:
The performance of the proposed work is compared with other state-of-the-art models in terms of accuracy and error. LMUDA attains its highest accuracy at the 100th iteration with 90% training, which is 11.1%, 8.8%, 5.5% and 3.3% better than the performance at learning percentages (LP) of 50%, 60%, 70% and 80%, respectively. In particular, the proposed DBN + LMUDA performs 22.2%, 12.5% and 33.3% better than the traditional DCNN, DBN and LDA classifiers, respectively.
Originality/value:
This paper achieves the objective of detecting human race from faces. MSER and SURF features are extracted as shape features and a dense color feature as the color feature. As a novelty, to make race detection more accurate, the DBN weights are fine-tuned with the new hybrid LMUDA algorithm, a conceptual hybridization of LA and DA.
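The feature-extraction-plus-classification pipeline described above can be sketched roughly as follows. This is a simplified approximation under stated assumptions: SURF requires an opencv-contrib build with non-free modules enabled, the mean-descriptor and histogram aggregation choices are illustrative, and a scikit-learn MLPClassifier stands in for the DBN; the LMUDA weight tuning is not reproduced.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier  # stand-in for the DBN

def extract_features(bgr_image):
    """Build one fixed-length vector per face image: MSER and SURF
    statistics as shape features plus color histograms as the dense
    color feature (the aggregation scheme is an assumption)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

    # Shape: simple statistics of the detected MSER regions.
    regions, _ = cv2.MSER_create().detectRegions(gray)
    sizes = [len(r) for r in regions] or [0]
    mser_feat = np.array([len(regions), np.mean(sizes), np.std(sizes)])

    # Shape: mean SURF descriptor (needs opencv-contrib with non-free modules).
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    _, desc = surf.detectAndCompute(gray, None)
    surf_feat = desc.mean(axis=0) if desc is not None else np.zeros(64)

    # Dense color feature: per-channel 16-bin intensity histograms.
    color_feat = np.concatenate(
        [cv2.calcHist([bgr_image], [c], None, [16], [0, 256]).ravel()
         for c in range(3)])

    return np.concatenate([mser_feat, surf_feat, color_feat])

# image_paths and labels are placeholders for a labelled face dataset.
# PCA handles the high dimensionality; the MLP replaces the DBN, and the
# LMUDA weight tuning is omitted.
# X = np.vstack([extract_features(cv2.imread(p)) for p in image_paths])
# X_red = PCA(n_components=20).fit_transform(X)
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_red, labels)
```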


2017 ◽  
Vol 173 (11) ◽  
pp. 2886-2892 ◽  
Author(s):  
Jasmien Roosenboom ◽  
Karlijne Indencleef ◽  
Greet Hens ◽  
Hilde Peeters ◽  
Kaare Christensen ◽  
...  

Author(s):  
Cuican Yu ◽  
Zihui Zhang ◽  
Huibin Li ◽  
Jian Sun ◽  
Zongben Xu

2021 ◽  
Author(s):  
Mohamed Hossam ◽  
Ahmed Ashraf Afify ◽  
Mohamed Rady ◽  
Michael Nabil ◽  
Kareem Moussa ◽  
...  

2016 ◽  
Vol 34 (3) ◽  
pp. 904-908 ◽  
Author(s):  
Esin Ozsahin ◽  
Emine Kizilkanat ◽  
Neslihan Boyan ◽  
Roger Soames ◽  
Ozkan Oguz