morphable model
Recently Published Documents


TOTAL DOCUMENTS: 120 (last five years: 4)
H-INDEX: 18 (last five years: 0)

2021 ◽ Vol 108 (Supplement_6) ◽ Author(s): A Ramjeeawon, L van de Lande, E O'Sullivan, K Bloch, R Khonsari, ...

Abstract Aim To assess a three-dimensional morphable model (3DMM) of the Apert mandible, investigate differences between sexes and across ages, and characterise growth by age; additionally, to compare it with a healthy-mandible 3DMM. Method High-quality CT scans of children with Apert syndrome (without previous mandibular surgery), acquired between November 1987 and January 2020, were sourced from Great Ormond Street Hospital (GOSH) and Necker Hospital. DICOM files were converted to 3D meshes by isolating the mandibles and removing artifacts (MeshMixer, Mimics), then annotated with standardized landmarks (Wrapped). A 3DMM was constructed using an existing pipeline, and experiments were performed to compare it with the healthy-mandible 3DMM, to investigate differences between sexes and across ages, and to characterise growth by age. The healthy-mandible 3DMM had previously been created by our team from healthy mandible CT scans sourced from a GOSH database. Results A 3DMM of the unoperated Apert mandible was successfully constructed from 276 Apert CT scans (male = 137, aged 0-20; female = 139, aged 0-23), and the first components of the morphable model were identified. Conclusions Apert syndrome is a rare genetic condition with characteristic extremity (syndactyly) and craniofacial (craniosynostosis) features; breathing problems, sleep apnoea, relative prognathism, and Angle class III malocclusion have also been reported. Few studies have analysed the potential role of the Apert mandible. 3DMMs are statistical tools that represent 3D shapes and have been used to create shape and texture parameters for anatomical regions. The 3DMM of the unoperated Apert mandible has potential applications in furthering the understanding of Apert syndrome and in diagnosis, and may support further management of these patients, such as surgical planning.
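At its core, a 3DMM such as the one described above is a statistical shape model, typically built by principal component analysis (PCA) over meshes in dense correspondence. Below is a minimal Python sketch of that step, assuming the landmarking and registration stages above have already produced meshes with identical vertex counts and ordering; the function names are illustrative, not the authors' pipeline.

    # Minimal PCA-based 3DMM sketch; assumes meshes are in dense correspondence.
    import numpy as np

    def build_3dmm(meshes, n_components=10):
        # meshes: (n_subjects, n_vertices, 3) array
        X = meshes.reshape(len(meshes), -1)       # flatten each mesh to a vector
        mean = X.mean(axis=0)                     # mean shape
        U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        components = Vt[:n_components]            # principal deformation modes
        stddev = S[:n_components] / np.sqrt(len(meshes) - 1)
        return mean, components, stddev

    def synthesize(mean, components, stddev, coeffs):
        # coeffs: per-mode weights in units of standard deviation
        shape = mean + (np.asarray(coeffs) * stddev) @ components
        return shape.reshape(-1, 3)

New shapes are generated by varying the coefficients; the first components capture the largest modes of shape variation in the training set, which is what "the first components of the morphable model" refers to above.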



Sensors ◽ 2021 ◽ Vol 21 (2) ◽ pp. 589 ◽ Author(s): Luigi Ariano, Claudio Ferrari, Stefano Berretti, Alberto Del Bimbo

Facial Action Units (AUs) correspond to the deformation or contraction of individual facial muscles or their combinations. As such, each AU affects only a small portion of the face, often with asymmetric deformations. Generating and analyzing AUs in 3D is particularly relevant for the applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis built on a newly defined 3D Morphable Model (3DMM) of the face. Unlike most 3DMMs in the literature, which mainly model global variations of the face and show limitations in adapting to local and asymmetric deformations, the proposed model is specifically devised to cope with such difficult morphings. During a training phase, deformation coefficients are learned that enable the 3DMM to deform to 3D target scans showing the neutral and expressive face of the same individual, thus decoupling expression from identity deformations. These deformation coefficients are then used, on the one hand, to train an AU classifier; on the other hand, they can be applied to a 3D neutral scan to generate AU deformations in a subject-independent manner. The proposed approach for AU detection is validated on the Bosphorus dataset, reporting results competitive with the state of the art, even in a challenging cross-dataset setting. We further show that the learned coefficients are general enough to synthesize realistic 3D face instances with AU activations.
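The two uses of the fitted deformation coefficients, as classification features and as a synthesis mechanism, can be sketched as follows. The choice of an SVM classifier and all names here are assumptions for illustration, not the authors' exact setup.

    import numpy as np
    from sklearn.svm import SVC  # assumed classifier choice, not specified above

    def train_au_classifier(coeffs, labels):
        # coeffs: (n_samples, n_modes) deformation coefficients fitted per scan
        # labels: (n_samples,) binary AU activation labels
        clf = SVC(kernel="rbf")
        clf.fit(coeffs, labels)
        return clf

    def synthesize_au(neutral_mesh, components, au_coeffs):
        # Apply an AU's deformation coefficients to any neutral (V, 3) scan,
        # making the synthesis subject-independent.
        # components: (n_modes, 3 * n_vertices) local deformation modes of the 3DMM
        displacement = (np.asarray(au_coeffs) @ components).reshape(neutral_mesh.shape)
        return neutral_mesh + displacement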



2021 ◽ pp. 94-105 ◽ Author(s): Minghui Wang, Zhilei Liu


2020 ◽ pp. 1-1 ◽ Author(s): Kun-Chan Lan, Min-Chun Hu, Yi-Zhang Chen, Jun-Xiang Zhang


Author(s): Stylianos Ploumpis, Evangelos Ververas, Eimear O'Sullivan, Stylianos Moschoglou, Haoyang Wang, ...


2019 ◽ Vol 128 ◽ pp. 378-384 ◽ Author(s): Hang Dai, Nick Pears, William Smith


2019 ◽ Vol 128 (2) ◽ pp. 547-571 ◽ Author(s): Hang Dai, Nick Pears, William Smith, Christian Duncan

Abstract We present a fully automatic statistical 3D shape modeling approach and apply it to a large dataset of 3D images, the Headspace dataset, thus generating the first public shape-and-texture 3D morphable model (3DMM) of the full human head. Our approach is the first to employ a template that adapts to the dataset subject before dense morphing; this is fully automatic and is achieved using 2D facial landmarking, projection to 3D shape, and mesh editing. In dense template morphing, we improve on the well-known Coherent Point Drift algorithm by incorporating iterative data-sampling and alignment. Our evaluations demonstrate that our method outperforms competing algorithms in correspondence accuracy and modeling ability. We also propose a texture-map refinement scheme to build high-quality texture maps and a texture model. We present several applications, including the first clinical use of craniofacial 3DMMs in the assessment of different types of surgical intervention applied to a craniosynostosis patient group.
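The dense morphing step above builds on Coherent Point Drift (CPD); for orientation, a plain non-rigid CPD pass using the third-party pycpd package might look like the sketch below. This shows only the baseline algorithm: the paper's variant additionally interleaves iterative data-sampling and alignment, which is not reproduced here.

    import numpy as np
    from pycpd import DeformableRegistration  # third-party CPD implementation

    def morph_template(template_pts, target_pts, alpha=2.0, beta=2.0):
        # Non-rigidly deform template points (N, 3) toward target points (M, 3).
        # alpha trades smoothness against data fit; beta sets the kernel width.
        reg = DeformableRegistration(X=target_pts, Y=template_pts,
                                     alpha=alpha, beta=beta)
        deformed_template, _ = reg.register()
        return deformed_template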


