Automated syndrome diagnosis by three-dimensional facial imaging

2020 ◽  
Vol 22 (10) ◽  
pp. 1682-1693 ◽  
Author(s):  
Benedikt Hallgrímsson ◽  
J. David Aponte ◽  
David C. Katz ◽  
Jordan J. Bannister ◽  
Sheri L. Riccardi ◽  
...  

Abstract Purpose Deep phenotyping is an emerging trend in precision medicine for genetic disease. The shape of the face is affected in 30–40% of known genetic syndromes. Here, we determine whether syndromes can be diagnosed from 3D images of human faces. Methods We analyzed variation in three-dimensional (3D) facial images of 7057 subjects: 3327 with 396 different syndromes, 727 of their relatives, and 3003 unrelated, unaffected subjects. We developed and tested machine learning and parametric approaches to automated syndrome diagnosis using 3D facial images. Results Unrelated, unaffected subjects were correctly classified with 96% accuracy. Considering both syndromic and unrelated, unaffected subjects together, balanced accuracy was 73% and mean sensitivity 49%. Excluding unrelated, unaffected subjects substantially improved both balanced accuracy (78.1%) and sensitivity (56.9%) of syndrome diagnosis. The best predictors of classification accuracy were phenotypic severity and facial distinctiveness of syndromes. Surprisingly, unaffected relatives of syndromic subjects were frequently classified as syndromic, often to the syndrome of their affected relative. Conclusion Deep phenotyping by quantitative 3D facial imaging has considerable potential to facilitate syndrome diagnosis. Furthermore, 3D facial imaging of “unaffected” relatives may identify unrecognized cases or may reveal novel examples of semidominant inheritance.
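The balanced accuracy and per-syndrome sensitivity reported above can be illustrated with a minimal sketch; the labels below are toy data, not the study's classifier or dataset:

```python
import numpy as np

def per_class_sensitivity(y_true, y_pred, classes):
    """Sensitivity (recall) for each class: fraction of that class's
    true cases that were predicted correctly."""
    sens = {}
    for c in classes:
        mask = (y_true == c)
        sens[c] = float(np.mean(y_pred[mask] == c)) if mask.any() else float("nan")
    return sens

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class sensitivities, so rare syndromes count
    equally with common ones."""
    classes = np.unique(y_true)
    sens = per_class_sensitivity(y_true, y_pred, classes)
    return float(np.mean([sens[c] for c in classes]))

# Toy example: three hypothetical syndrome labels, unbalanced class sizes.
y_true = np.array(["A", "A", "A", "A", "B", "B", "C"])
y_pred = np.array(["A", "A", "A", "B", "B", "B", "A"])
print(balanced_accuracy(y_true, y_pred))  # (0.75 + 1.0 + 0.0) / 3 ≈ 0.583
```

Averaging sensitivities rather than raw accuracy prevents the large unaffected group from dominating the score, which is why the paper reports both metrics.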

Author(s):  
Shu-Yen Wan ◽  
◽  
Lun-Jou Lo ◽  
Che-Yao Chang

Superimposition of cranio-maxillofacial images acquired from cone-beam computed tomography (CBCT) and facial images acquired from three-dimensional photography (3D photography) can assist in diagnosis and surgical planning. Conventional approaches identify prominent facial landmarks on each modality individually and assess their correspondence. However, when the face or head is imaged at different times, variation in facial expression or drastic feature distortion can make landmark registration challenging. This paper proposes a disturbance-region removal (DRR) procedure to improve the efficacy of registration. The disturbance regions (DRs) are defined as those exhibiting strong responses in the concavity intensity maps computed from the facial surface mesh. Following this identification of the DRs, an adapted symmetric region growing algorithm forms the connected DRs, which are removed prior to superimposition of the two modalities. The results show a 28% improvement in the overall correspondence of the facial fiducial markers. Instead of serving as registration guides, as in conventional approaches, the fiducial markers in this study are used only to assess registration performance.
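The grouping of high-concavity vertices into connected regions can be sketched with a plain breadth-first region growing over a vertex adjacency graph; this is a simplified stand-in for the paper's adapted symmetric region growing, with synthetic concavity values on a toy mesh:

```python
from collections import deque

def grow_disturbance_regions(concavity, neighbours, thresh):
    """Group vertices whose concavity exceeds `thresh` into connected
    disturbance regions via breadth-first region growing.

    concavity:  dict vertex -> concavity value
    neighbours: dict vertex -> list of adjacent vertices
    Returns a list of vertex sets (one per connected region)."""
    seeds = {v for v, c in concavity.items() if c > thresh}
    regions, seen = [], set()
    for s in seeds:
        if s in seen:
            continue
        region, queue = set(), deque([s])
        seen.add(s)
        while queue:
            v = queue.popleft()
            region.add(v)
            for n in neighbours.get(v, []):
                if n in seeds and n not in seen:
                    seen.add(n)
                    queue.append(n)
        regions.append(region)
    return regions

# Toy mesh: a chain of 6 vertices with made-up concavity responses.
concavity = {0: 0.9, 1: 0.8, 2: 0.1, 3: 0.7, 4: 0.2, 5: 0.95}
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
regions = grow_disturbance_regions(concavity, neighbours, thresh=0.5)
# three disturbance regions: {0, 1}, {3}, {5}
```

In the DRR procedure, the vertices in such regions would then be excluded from the surfaces before computing the cross-modality registration.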


2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Rebecca Ort ◽  
Philipp Metzler ◽  
Astrid L. Kruse ◽  
Felix Matthews ◽  
Wolfgang Zemann ◽  
...  

Ample data exists about the high precision of three-dimensional (3D) scanning devices and their data acquisition of the facial surface. However, a question remains regarding which facial landmarks are reliable when identified in 3D images taken under clinical circumstances. Sources of error to be addressed may be technical, user-dependent, or patient/anatomy-related. Based on clinical 3D photos taken with the 3dMDface system, the intraobserver repeatability of 27 facial landmarks in six cleft lip (CL) infants and one non-CL infant was evaluated based on a total of over 1,100 measurements. Data acquisition was sometimes challenging but successful in all patients. The mean error was 0.86 mm, with a range of 0.39 mm (exocanthion) to 2.21 mm (soft gonion). Typically, landmarks provided a small mean error but still showed quite a high variance in measurements, for example, exocanthion from 0.04 mm to 0.93 mm. Conversely, relatively imprecise landmarks can still provide accurate data regarding specific spatial planes. One must be aware that the degree of precision depends on the landmarks and spatial planes in question. In clinical investigations, the degree of reliability of the landmarks evaluated should be taken into account. Additional reliability can be achieved via repeated measurement.
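Intraobserver repeatability of a single landmark is commonly summarized as the mean Euclidean distance of repeated placements from their centroid; a minimal sketch with hypothetical placements (the coordinates below are illustrative, not the study's data):

```python
import numpy as np

def landmark_repeatability(placements):
    """Mean error of one observer's repeated placements of a landmark.

    placements: (n_repeats, 3) array of 3D coordinates in mm.
    Returns the mean Euclidean distance to the centroid of the placements."""
    placements = np.asarray(placements, dtype=float)
    centroid = placements.mean(axis=0)
    return float(np.linalg.norm(placements - centroid, axis=1).mean())

# Hypothetical repeated placements of one landmark (mm).
pts = [[0.0, 0.0, 0.0],
       [0.2, 0.0, 0.0],
       [0.1, 0.1, 0.0]]
err = landmark_repeatability(pts)  # sub-millimetre scatter
```

Applied per landmark across all repeats and patients, this kind of statistic yields the per-landmark mean errors (e.g. 0.39 mm for exocanthion) reported above.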


Author(s):  
R. L. Palmer ◽  
P. Helmholz ◽  
G. Baynam

Abstract. Facial appearance has long been understood to offer insight into a person’s health. To an experienced clinician, atypical facial features may signify the presence of an underlying rare or genetic disease. Clinicians use their knowledge of how disease affects facial appearance along with the patient’s physiological and behavioural traits, and their medical history, to determine a diagnosis. Specialist expertise and experience are needed to perform a dysmorphological facial analysis. Key to this is accurately assessing how a face differs significantly in shape and/or growth from expected norms. Modern photogrammetric systems can acquire detailed 3D images of the face which can be used to conduct a facial analysis in software with greater precision than can be obtained in person. Measurements from 3D facial images are already used as an alternative to direct measurement using instruments such as tape measures, rulers, or callipers. However, the ability to take accurate measurements – whether virtual or not – presupposes the assessor’s facility to accurately place the endpoints of the measuring tool at the positions of standardised anatomical facial landmarks. In this paper, we formally introduce Cliniface – a free and open source application that uses a recently published highly precise method of detecting facial landmarks from 3D facial images by non-rigidly transforming an anthropometric mask (AM) to the target face. Inter-landmark measurements are then used to automatically identify facial traits that may be of clinical significance. Herein, we show how non-experts with minimal guidance can use Cliniface to extract facial anthropometrics from a 3D facial image at a level of accuracy comparable to an expert. We further show that Cliniface itself is able to extract the same measurements at a similar level of accuracy – completely automatically.
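Once landmarks are placed on the 3D image, an inter-landmark measurement is just the straight-line distance between two labelled points. A minimal sketch, with hypothetical landmark names and coordinates (not Cliniface's actual landmark set or API):

```python
import numpy as np

# Hypothetical 3D landmark coordinates in mm, as might be produced by
# fitting an anthropometric mask to a target face.
landmarks = {
    "exocanthion_L": np.array([-45.0, 30.0, 10.0]),
    "exocanthion_R": np.array([ 45.0, 30.0, 10.0]),
    "nasion":        np.array([  0.0, 35.0, 20.0]),
    "subnasale":     np.array([  0.0, -5.0, 25.0]),
}

def interlandmark_distance(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# e.g. outer intercanthal width
width = interlandmark_distance("exocanthion_L", "exocanthion_R")
print(round(width, 1))  # 90.0
```

Comparing such measurements against age- and sex-matched norms is what allows traits of possible clinical significance to be flagged automatically.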


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jialei Ma ◽  
Xiansheng Li ◽  
Yuanyuan Ren ◽  
Ran Yang ◽  
Qichao Zhao

Human face recognition has been widely used in many fields, including biorobots, driver fatigue monitoring, and polygraph tests. However, the end-to-end models fit by most of the existing algorithms perform poorly in interpretation because complex classifiers are constructed using facial images directly. In addition, in some of the models, dynamic characteristics of subjects as individuals are not fully considered, so dynamic information is not extracted. In order to solve these problems, this paper proposes an action unit intensity prediction model. The three-dimensional coordinates of 68 landmarks of human faces are obtained based on the convolutional experts constrained local model (CE-CLM), which enables the construction of dynamic facial features. Based on the error analysis of the CE-CLM algorithm, dimension reduction of the constructed features is performed by principal component analysis (PCA). A radial basis function (RBF) neural network is also constructed to train the action unit prediction models. The proposed method is verified experimentally, and its overall mean square error (MSE) is 0.01826. Lastly, the network construction process is optimized, so that for the same training samples, the models are fitted using fewer iterations; the number of iterations is decreased by 27 on average. In summary, this paper provides a method to rapidly construct action unit (AU) intensity prediction models and constructs automatic AU intensity estimation models for facial images.
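The PCA-then-RBF pipeline can be sketched compactly: reduce the 68×3 landmark features with PCA, then regress AU intensity with Gaussian basis functions and a linear readout. The data below are synthetic stand-ins, and the RBF output weights are solved by least squares rather than the paper's iterative training:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 frames x 204 dims (68 landmarks x 3 coordinates).
X = rng.normal(size=(200, 204))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # toy AU intensity target

# --- PCA by SVD on the centred data ---
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 10
Z = (X - mu) @ Vt[:k].T          # features reduced to k components

# --- RBF network: Gaussian hidden units, linear output layer ---
centers = Z[rng.choice(len(Z), 20, replace=False)]  # hidden-unit centres
gamma = 0.05                                        # Gaussian width parameter

def rbf_features(Z):
    """Squared distances to each centre, passed through a Gaussian."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

H = rbf_features(Z)
w, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights (closed form)

mse = float(np.mean((H @ w - y) ** 2))
```

Selecting `k` from the CE-CLM error analysis, as the paper does, bounds how much landmark noise leaks into the reduced features.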


2017 ◽  
Vol 5 (3) ◽  
pp. 123-134
Author(s):  
Haripriya K ◽  
Ramya Lakshmi V. ◽  
Rajeswari S ◽  
Rama T ◽  
Vinothini K.R

Image processing has become a prolific domain thanks to techniques such as face detection and face recognition. These play an important role in society through a wide range of applications such as surveillance, security, banking, and multimedia. One of the major challenges in face recognition is handling arbitrary pose variations in three-dimensional representations. In video retrieval systems, many approaches to recognition across pose variations assume the face poses to be known; this constraint makes them only semi-automatic. In this paper we propose a fully automatic method for multi-view face recognition that improves accuracy and efficiency using local binary patterns. It uses a tree-based data structure to create sub-grids. The system uses the KLT algorithm to detect and extract features automatically via eigenvectors and estimation of the Hessian value.
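The local binary pattern descriptor mentioned above encodes each pixel by thresholding its neighbours against the centre value; a minimal 8-neighbour sketch (the basic LBP operator, not the paper's multi-view pipeline):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern for a 2-D grayscale image.
    Each interior pixel gets an 8-bit code: one bit per neighbour whose
    value is >= the centre value."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                      # interior (centre) pixels
    # Neighbour offsets, clockwise from the top-left corner.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code

patch = np.array([[5, 5, 5],
                  [5, 4, 5],
                  [5, 5, 5]])
print(lbp_image(patch))  # all 8 neighbours >= centre -> [[255]]
```

Histograms of these codes over sub-grids of the face form the pose-robust texture descriptor that LBP-based recognizers compare.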


1997 ◽  
Vol 9 (5) ◽  
pp. 611-623 ◽  
Author(s):  
Frederick K. D. Nahm ◽  
Amelie Perret ◽  
David G. Amaral ◽  
Thomas D. Albright

Facial displays are an important form of social communication in nonhuman primates. Clues to the information conveyed by faces are the temporal and spatial characteristics of ocular viewing patterns to facial images. The present study compares viewing patterns of four rhesus monkeys (Macaca mulatta) to a set of 1- and 3-sec video segments of conspecific facial displays, which included open-mouth threat, lip-smack, yawn, fear-grimace, and neutral profile. Both static and dynamic video images were used. Static human faces displaying open-mouth threat, smile, and neutral gestures were also presented. Eye position was recorded with a surgically implanted eye-coil. The relative perceptual salience of the eyes, the midface, and the mouth across different expressive gestures was determined by analyzing the number of eye movements associated with each feature during static and dynamic presentations. The results indicate that motion does not significantly affect the viewing patterns to expressive facial displays, and when given a choice, monkeys spend a relatively large amount of time inspecting the face, especially the eyes, as opposed to areas surrounding the face. The expressive nature of the facial display also affected viewing patterns in that threatening and fear-related displays evoked a pattern of viewing that differed from that recorded during the presentation of submissive-related facial displays. From these results we conclude that (1) the most important determinant of the visual inspection patterns of faces is the constellation of physiognomic features and their configuration, but not facial motion, (2) the eyes are generally the most salient facial feature, and (3) the agonistic or affiliative dimension of an expressive facial display can be delineated on the basis of viewing patterns.


This paper presents a methodology for the computer synthesis of realistic faces capable of expressive articulations. A sophisticated three-dimensional model of the human face is developed that incorporates a physical model of facial tissue with an anatomical model of facial muscles. The tissue and muscle models are generic, in that their structures are independent of specific facial geometries. To synthesize specific faces, these models are automatically mapped onto geometrically accurate polygonal facial representations constructed by photogrammetry of stereo facial images or by non-uniform meshing of detailed facial topographies acquired by using range sensors. The methodology offers superior realism by utilizing physical modelling to emulate complex tissue deformations in response to coordinated facial muscle activity. To provide realistic muscle actions to the face model, a performance driven animation technique is developed which estimates the dynamic contractions of a performer’s facial muscles from video imagery.


Author(s):  
John C. Russ

Three-dimensional (3D) images consisting of arrays of voxels can now be routinely obtained from several different types of microscopes. These include both the transmission and emission modes of the confocal scanning laser microscope (but not its most common reflection mode), the secondary ion mass spectrometer, and computed tomography using electrons, X-rays or other signals. Compared to the traditional use of serial sectioning (which includes sequential polishing of hard materials), these newer techniques eliminate difficulties of alignment of slices, and maintain uniform resolution in the depth direction. However, the resolution in the z-direction may be different from that within each image plane, which makes the voxels non-cubic and creates some difficulties for subsequent analysis.
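A common remedy for the non-cubic voxels described above is to resample the stack to isotropic spacing before analysis. A minimal nearest-neighbour sketch (the spacing values are an illustrative confocal-style example):

```python
import numpy as np

def resample_isotropic(vol, spacing):
    """Nearest-neighbour resample a voxel array to cubic voxels.

    vol:     (nz, ny, nx) array of voxel values
    spacing: (dz, dy, dx) physical voxel size along each axis
    The output voxel edge length is the finest input spacing."""
    spacing = np.asarray(spacing, dtype=float)
    target = spacing.min()
    new_shape = np.maximum(
        1, np.round(np.array(vol.shape) * spacing / target)).astype(int)
    # Map each output index back to the nearest input slice/row/column.
    idx = [np.minimum((np.arange(n) * target / s).astype(int), vol.shape[i] - 1)
           for i, (n, s) in enumerate(zip(new_shape, spacing))]
    return vol[np.ix_(*idx)]

# e.g. a stack whose z-step is 4x coarser than its in-plane pixel size
vol = np.zeros((10, 64, 64))
iso = resample_isotropic(vol, spacing=(2.0, 0.5, 0.5))
print(iso.shape)  # (40, 64, 64)
```

Nearest-neighbour duplication preserves the original intensity values; linear or spline interpolation along z is often preferred when smoother estimates between planes are acceptable.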


2009 ◽  
Vol 8 (3) ◽  
pp. 887-897
Author(s):  
Vishal Paika ◽  
Er. Pankaj Bhambri

The face is the feature which distinguishes a person, and facial appearance is vital for human recognition. It has features such as the forehead, skin, eyes, ears, nose, cheeks, mouth, lips, and teeth, which help us humans recognize a particular face among millions of faces even after a large span of time, and despite large changes in appearance due to ageing, expression, viewing conditions, and distractions such as disfigurement, scars, a beard, or hair style. A face is not merely a set of facial features but rather something meaningful in its form. In this paper, a system is designed to recognize faces based on their various facial features. Different edge detection techniques are used to reveal the outlines of the face, eyes, ears, nose, teeth, etc. These features are extracted as distances between important feature points. The feature set obtained is then normalized and fed to artificial neural networks to train them for recognition of facial images.


2004 ◽  
Vol 126 (5) ◽  
pp. 861-870 ◽  
Author(s):  
A. Thakur ◽  
X. Liu ◽  
J. S. Marshall

An experimental and computational study is performed of the wake flow behind a single yawed cylinder and a pair of parallel yawed cylinders placed in tandem. The experiments are performed for a yawed cylinder and a pair of yawed cylinders towed in a tank. Laser-induced fluorescence is used for flow visualization and particle-image velocimetry is used for quantitative velocity and vorticity measurement. Computations are performed using a second-order accurate block-structured finite-volume method with periodic boundary conditions along the cylinder axis. Results are applied to assess the applicability of a quasi-two-dimensional approximation, which assumes that the flow field is the same for any slice of the flow over the cylinder cross section. For a single cylinder, it is found that the cylinder wake vortices approach a quasi-two-dimensional state away from the cylinder upstream end for all cases examined (in which the cylinder yaw angle covers the range 0⩽ϕ⩽60°). Within the upstream region, the vortex orientation is found to be influenced by the tank side-wall boundary condition relative to the cylinder. For the case of two parallel yawed cylinders, vortices shed from the upstream cylinder are found to remain nearly quasi-two-dimensional as they are advected back and reach within about a cylinder diameter from the face of the downstream cylinder. As the vortices advect closer to the cylinder, the vortex cores become highly deformed and wrap around the downstream cylinder face. Three-dimensional perturbations of the upstream vortices are amplified as the vortices impact upon the downstream cylinder, such that during the final stages of vortex impact the quasi-two-dimensional nature of the flow breaks down and the vorticity field for the impacting vortices acquires significant three-dimensional perturbations.
Quasi-two-dimensional and fully three-dimensional computational results are compared to assess the accuracy of the quasi-two-dimensional approximation in prediction of drag and lift coefficients of the cylinders.

