Using facial images for the diagnosis of genetic syndromes: A survey

Author(s):  
Marwa Chendeb EL Rai ◽  
Naoufel Werghi ◽  
Hassan Al Muhairi ◽  
Habiba Alsafar

Author(s):  
Manuel Günther ◽  
Stefan Böhringer ◽  
Dagmar Wieczorek ◽  
Rolf P. Würtz

Graphs labeled with complex-valued Gabor jets are one of the important data formats for face recognition and for classifying facial images into medically relevant classes such as genetic syndromes. Here we present an interpolation rule and an iterative algorithm for reconstructing images from these graphs. This is especially important if graphs have been manipulated for information processing. One such manipulation is averaging the graphs of a single syndrome; another is building a composite face from the features of various individuals. In reconstructions of averaged graphs of genetic syndromes, the patients' identities are suppressed while the properties of the syndromes are emphasized. These reconstructions from averaged graphs have much better quality than averaged images.
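The averaging step can be illustrated with a minimal sketch (the array shapes and function name are assumptions for illustration, not the authors' implementation): each graph is represented as an array of complex Gabor jets, one jet per node, and averaging is done coefficient-wise across graphs.

```python
import numpy as np

def average_jet_graphs(graphs):
    """Average a list of jet-labeled graphs node-wise.

    Each graph is an (n_nodes, n_coeffs) complex array of Gabor jets;
    the mean is taken per node and per coefficient, so structure shared
    across faces is reinforced while individual variation cancels out.
    """
    stack = np.stack(graphs)           # (n_graphs, n_nodes, n_coeffs)
    return stack.mean(axis=0)          # (n_nodes, n_coeffs), complex

# toy example: two graphs with 3 nodes and 4 complex coefficients each
rng = np.random.default_rng(0)
g1 = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
g2 = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
avg = average_jet_graphs([g1, g2])
```

The reconstruction algorithm itself would then be run on `avg` exactly as on a single-subject graph, which is what makes the averaged-graph reconstructions possible.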


2020 ◽  
Vol 22 (10) ◽  
pp. 1682-1693 ◽  
Author(s):  
Benedikt Hallgrímsson ◽  
J. David Aponte ◽  
David C. Katz ◽  
Jordan J. Bannister ◽  
Sheri L. Riccardi ◽  
...  

Purpose Deep phenotyping is an emerging trend in precision medicine for genetic disease. The shape of the face is affected in 30–40% of known genetic syndromes. Here, we determine whether syndromes can be diagnosed from 3D images of human faces. Methods We analyzed variation in three-dimensional (3D) facial images of 7057 subjects: 3327 with 396 different syndromes, 727 of their relatives, and 3003 unrelated, unaffected subjects. We developed and tested machine learning and parametric approaches to automated syndrome diagnosis using 3D facial images. Results Unrelated, unaffected subjects were correctly classified with 96% accuracy. Considering both syndromic and unrelated, unaffected subjects together, balanced accuracy was 73% and mean sensitivity 49%. Excluding unrelated, unaffected subjects substantially improved both balanced accuracy (78.1%) and sensitivity (56.9%) of syndrome diagnosis. The best predictors of classification accuracy were phenotypic severity and facial distinctiveness of syndromes. Surprisingly, unaffected relatives of syndromic subjects were frequently classified as syndromic, often to the syndrome of their affected relative. Conclusion Deep phenotyping by quantitative 3D facial imaging has considerable potential to facilitate syndrome diagnosis. Furthermore, 3D facial imaging of “unaffected” relatives may identify unrecognized cases or may reveal novel examples of semidominant inheritance.
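The balanced-accuracy and mean-sensitivity metrics reported above can be sketched with a toy example (the labels and data are invented): balanced accuracy is the mean of per-class recall, which keeps rare syndromes from being swamped by the large unaffected class.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, classes):
    """Balanced accuracy = mean of per-class recall (sensitivity).

    Each class contributes equally regardless of its size, so a
    396-syndrome problem is not dominated by the unaffected majority.
    """
    recalls = []
    for c in classes:
        mask = (y_true == c)
        if mask.any():
            recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# toy 3-class example: syndromes 'A' and 'B' plus unaffected 'U'
y_true = np.array(list("AAAABBUUUU"))
y_pred = np.array(list("AAABBBUUUU"))
score = balanced_accuracy(y_true, y_pred, ["A", "B", "U"])
```

Here the recalls are 0.75, 1.0, and 1.0, so the balanced accuracy is their mean even though class sizes differ.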


10.2196/19263 ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. e19263
Author(s):  
Jean Tori Pantel ◽  
Nurulhuda Hajjir ◽  
Magdalena Danyel ◽  
Jonas Elsner ◽  
Angela Teresa Abad-Perez ◽  
...  

Background Collectively, an estimated 5% of the population have a genetic disease. Many of them feature characteristics that can be detected by facial phenotyping. Face2Gene CLINIC is an online app for facial phenotyping of patients with genetic syndromes. DeepGestalt, the neural network driving Face2Gene, automatically prioritizes syndrome suggestions based on ordinary patient photographs, potentially improving the diagnostic process. Hitherto, studies on DeepGestalt’s quality highlighted its sensitivity in syndromic patients. However, determining the accuracy of a diagnostic methodology also requires testing of negative controls. Objective The aim of this study was to evaluate DeepGestalt's accuracy with photos of individuals with and without a genetic syndrome. Moreover, we aimed to propose a machine learning–based framework for the automated differentiation of DeepGestalt’s output on such images. Methods Frontal facial images of individuals with a diagnosis of a genetic syndrome (established clinically or molecularly) from a convenience sample were reanalyzed. Each photo was matched by age, sex, and ethnicity to a picture featuring an individual without a genetic syndrome. Absence of a facial gestalt suggestive of a genetic syndrome was determined by physicians working in medical genetics. Photos were selected from online reports or were taken by us for the purpose of this study. Facial phenotype was analyzed by DeepGestalt version 19.1.7, accessed via Face2Gene CLINIC. Furthermore, we designed linear support vector machines (SVMs) using Python 3.7 to automatically differentiate between the 2 classes of photographs based on DeepGestalt's result lists. Results We included photos of 323 patients diagnosed with 17 different genetic syndromes and matched those with an equal number of facial images without a genetic syndrome, analyzing a total of 646 pictures. We confirm DeepGestalt’s high sensitivity (top 10 sensitivity: 295/323, 91%). 
DeepGestalt’s syndrome suggestions in individuals without a craniofacially dysmorphic syndrome followed a nonrandom distribution. A total of 17 syndromes appeared in the top 30 suggestions of more than 50% of nondysmorphic images. DeepGestalt’s top scores differed between the syndromic and control images (area under the receiver operating characteristic [AUROC] curve 0.72, 95% CI 0.68-0.76; P<.001). A linear SVM running on DeepGestalt’s result vectors showed stronger differences (AUROC 0.89, 95% CI 0.87-0.92; P<.001). Conclusions DeepGestalt fairly separates images of individuals with and without a genetic syndrome. This separation can be significantly improved by SVMs running on top of DeepGestalt, thus supporting the diagnostic process of patients with a genetic syndrome. Our findings facilitate the critical interpretation of DeepGestalt’s results and may help enhance it and similar computer-aided facial phenotyping tools.
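The SVM stage can be sketched as follows, assuming scikit-learn and synthetic score vectors in place of real DeepGestalt result lists (the data, dimensions, and hyperparameters are invented for illustration):

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for DeepGestalt result vectors: each photo yields a
# vector of per-syndrome scores (here 30 scores per image, values invented;
# syndromic images are given slightly higher scores on average).
rng = np.random.default_rng(42)
n = 200
syndromic = rng.normal(loc=0.6, scale=0.2, size=(n, 30))
controls = rng.normal(loc=0.4, scale=0.2, size=(n, 30))
X = np.vstack([syndromic, controls])
y = np.array([1] * n + [0] * n)

# Linear SVM on the score vectors; its signed distance to the decision
# hyperplane serves as the ranking score for the AUROC.
clf = LinearSVC(C=1.0, max_iter=10_000).fit(X, y)
auroc = roc_auc_score(y, clf.decision_function(X))
```

In practice one would evaluate on held-out images (e.g. cross-validation) rather than on the training set as in this sketch.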




2009 ◽  
Vol 40 (1) ◽  
pp. 28-29
Author(s):  
BETSY BATES
Keyword(s):  

2001 ◽  
Vol 60 (3) ◽  
pp. 161-178 ◽  
Author(s):  
Jean A. Rondal

Predominantly non-etiological conceptions have dominated the field of mental retardation (MR) since the discovery of the genetic etiology of Down syndrome (DS) in the sixties. However, contemporary approaches are becoming more etiologically oriented. Important differences across MR syndromes of genetic origin are being documented, particularly in the cognition and language domains, differences not explicable in terms of psychometric level, motivation, or other dimensions. This paper highlights the major difficulties observed in the oral language development of individuals with genetic syndromes of mental retardation. The extent of inter- and within-syndrome variability are evaluated. Possible brain underpinnings of the behavioural differences are envisaged. Cases of atypically favourable language development in MR individuals are also summarized and explanatory variables discussed. It is suggested that differences in brain architectures, originating in neurological development and having genetic origins, may largely explain the syndromic as well as the individual within-syndrome variability documented. Lastly, the major implications of the above points for current debates about modularity and developmental connectionism are spelt out.


2009 ◽  
Vol 8 (3) ◽  
pp. 887-897
Author(s):  
Vishal Paika ◽  
Er. Pankaj Bhambri

The face is the feature that distinguishes one person from another, and facial appearance is vital for human recognition. It comprises features such as the forehead, skin, eyes, ears, nose, cheeks, mouth, lips, and teeth, which help us humans recognize a particular face among millions of others, even after a long span of time and despite large changes in appearance due to ageing, expression, viewing conditions, and distractions such as disfigurement of the face, scars, a beard, or hairstyle. A face is not merely a set of facial features but rather something meaningful in its form. In this paper, a system is designed to recognize faces based on these various facial features. Different edge detection techniques have been used to reveal the outlines of the face, eyes, ears, nose, teeth, etc. These features are extracted as distances between important feature points. The resulting feature set is then normalized and fed to artificial neural networks to train them for the recognition of facial images.
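The distance-based feature extraction described here can be sketched as follows (the landmark coordinates and the normalization choice are assumptions for illustration, not the authors' exact method): pairwise distances between detected feature points are collected and scaled so the vector is invariant to image size before being fed to the network.

```python
import numpy as np

def distance_features(landmarks):
    """Pairwise distances between facial landmark points, normalized
    so the feature vector is invariant to image scale.

    landmarks: (n_points, 2) array of (x, y) positions for points such
    as eye corners, nose tip, and mouth corners (choice hypothetical).
    """
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=-1))       # full distance matrix
    iu = np.triu_indices(len(landmarks), k=1)    # unique point pairs
    feats = d[iu]
    return feats / feats.max()                   # scale-normalize to [0, 1]

# 5 toy landmarks -> C(5, 2) = 10 normalized distances
pts = np.array([[0, 0], [2, 0], [1, 1], [0, 3], [2, 3]], dtype=float)
v = distance_features(pts)
```

Because the vector is divided by its largest entry, rescaling the whole image leaves the features unchanged, which is the point of the normalization step before network training.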


2020 ◽  
Author(s):  
Elizabeth A. Necka ◽  
Carolyn Amir ◽  
Troy C. Dildine ◽  
Lauren Yvette Atlas

There is a robust link between patients’ expectations and clinical outcomes, as evidenced by the placebo effect. These expectations are shaped by the context surrounding treatment, including the patient-provider interaction. Prior work indicates that the provider’s behavior and characteristics, including warmth and competence, can shape patient outcomes. Yet humans rapidly form trait impressions of others prior to any in-person interaction. Here, we tested whether trait-impressions of hypothetical medical providers, based purely on facial images, influence participants’ choice of medical providers and expectations about their health following hypothetical medical procedures performed by those providers in a series of vignettes. Across five studies, participants selected providers who appeared more competent, based on facial visual information alone. Further, providers’ apparent competence predicted participants’ expectations about post-procedural pain and medication use. Participants’ perception of their similarity to providers also shaped expectations about pain and treatment outcomes. Our results suggest that humans develop expectations about their health outcomes prior to even setting foot in the clinic, based exclusively on first impressions. These findings have strong implications for health care, as individuals increasingly rely on digital services to choose healthcare providers, schedule appointments, and even receive treatment and care, a trend which is exacerbated as the world embraces telemedicine.

