Facial Animation
Recently Published Documents


TOTAL DOCUMENTS: 489 (FIVE YEARS: 58)

H-INDEX: 34 (FIVE YEARS: 3)

2021
Author(s): Sewhan Chun, Daegeun Choe, Shindong Kang, Shounan An, Youngbak Jo, ...
Keyword(s):

2021, Vol 40 (6), pp. 1-18
Author(s): Lucio Moser, Chinyu Chien, Mark Williams, Jose Serra, Darren Hendler, ...
Keyword(s):

2021, Vol 18 (1), pp. 28-37
Author(s): Samia Shakir, Ali Al-Azza

Animating the human face presents particular challenges because of its familiarity: the face is the feature we rely on most to recognize individuals. This paper reviews the approaches used in facial modeling and animation and describes their strengths and weaknesses. Realistic animation of computer graphics models of human faces is hard to achieve because of the many details that must be approximated to produce convincing facial expressions, and many methods have been developed to represent human faces ever more accurately and efficiently. In this survey, we describe the techniques used to produce realistic facial animation and roughly categorize facial modeling and animation approaches into the following classes: blendshape or shape interpolation, parameterization, Facial Action Coding System (FACS)-based approaches, MPEG-4 facial animation, physics-based muscle modeling, performance-driven facial animation, and visual speech animation.
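The first category the survey lists, blendshape or shape interpolation, is simple enough to illustrate directly: an animated face is the neutral mesh plus a weighted sum of per-target vertex offsets. The snippet below is a minimal sketch of that idea in Python/NumPy; the vertex counts, target names, and weights are illustrative and not taken from any of the papers listed here.

```python
import numpy as np

def blend(neutral, targets, weights):
    """Blendshape interpolation: neutral vertices plus a weighted
    sum of per-target offsets (target - neutral)."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (targets[name] - neutral)
    return result

# Illustrative data: 4 vertices in 3D and two sculpted target shapes.
neutral = np.zeros((4, 3))
targets = {
    "smile":    neutral + np.array([0.0, 0.1, 0.0]),
    "jaw_open": neutral + np.array([0.0, -0.3, 0.05]),
}

# A half smile with a slightly open jaw.
frame = blend(neutral, targets, {"smile": 0.5, "jaw_open": 0.2})
print(frame)
```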


2021
Author(s): Snehal Poojary

Numerous studies over the past decade have investigated making human animation as realistic as possible, especially facial animation. Consider facial animation for human speech: animating a face to match a recorded utterance requires a great deal of effort, and much of the process has been automated so that an artist can generate facial animation and lip sync from speech supplied by the user. While these systems concentrate on the mouth and tongue, where speech is articulated, very little effort has gone into understanding and recreating the exact motion of the neck during speech. The neck plays an important role in voice production, so it is essential to study the motion it produces. The purpose of this research is to study neck motion during speech, and it makes two contributions. First, it predicts the motion of the neck around the strap muscles for a given utterance, achieved by training a model on the positions of markers placed on the neck together with analysis data of the accompanying speech. Second, it characterizes basic neck motion during speech, which helps an artist understand how the neck should be animated.
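The abstract does not spell out the learning setup, but the pipeline it describes (speech analysis features in, neck-marker positions out) can be sketched as a per-frame supervised regression. The example below is purely illustrative: the feature dimension, marker count, and choice of a ridge regressor are assumptions, not details taken from the thesis, and random arrays stand in for the real motion-capture and audio data.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical stand-ins: per-frame speech features (e.g. MFCC-like
# coefficients) and the 3D positions of N neck markers captured at the
# same frame rate. Random data is used only so the sketch runs.
rng = np.random.default_rng(0)
n_frames, n_features, n_markers = 500, 13, 8
speech_features = rng.normal(size=(n_frames, n_features))
marker_positions = rng.normal(size=(n_frames, n_markers * 3))

# Fit a simple per-frame regressor from speech features to the
# flattened (x, y, z) coordinates of every marker.
model = Ridge(alpha=1.0).fit(speech_features, marker_positions)

# Predict neck-marker trajectories for unseen speech frames and reshape
# back to (frames, markers, 3) for animation.
new_features = rng.normal(size=(100, n_features))
predicted = model.predict(new_features).reshape(-1, n_markers, 3)
print(predicted.shape)  # (100, 8, 3)
```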


2021, Vol 2021, pp. 1-8
Author(s): Xiaowen Shan, Hao Chen

Traditional methods integrate popular-science micro-animation works poorly. In this paper, we propose an automatic integration algorithm for popular-science micro-animation works in the context of new media. The system first analyzes the characteristics of the new-media context and defines what micro-animation means in that context. It then simplifies the animation meshes by edge collapse (edge folding) and calculates Facial Animation Parameter (FAP) values to realize automatic integration of the works. We conducted experiments on datasets of various sizes to test the proposed system, achieving an average integration accuracy of 96.3% on datasets of 500 to 3000 animation works, with the highest accuracy of 99% on a dataset of 500 works. The integration time was just 1.25 seconds for a dataset of 3000 animation works, which is much lower than that of existing work.
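The paper does not give its FAP computation, but in MPEG-4 facial animation a FAP value is conventionally a feature-point displacement expressed in Facial Animation Parameter Units (FAPUs), which are key distances of the neutral face divided by 1024. The sketch below illustrates only that normalization step, under that assumption; the coordinates and the FAP name used are illustrative.

```python
import numpy as np

def fapu_mw0(left_corner, right_corner):
    """Mouth-width FAPU: the neutral mouth width divided by 1024,
    following the usual MPEG-4 FAPU convention."""
    return np.linalg.norm(np.asarray(right_corner) - np.asarray(left_corner)) / 1024.0

def fap_value(displacement, fapu):
    """Express a feature-point displacement (along the FAP's axis)
    in FAPU units, which is how FAP values are encoded."""
    return displacement / fapu

# Made-up neutral-face measurements (arbitrary model units).
mw0 = fapu_mw0(left_corner=(-30.0, 0.0, 0.0), right_corner=(30.0, 0.0, 0.0))

# Example: the left mouth corner stretches 6 units outward.
stretch_l_cornerlip = fap_value(displacement=6.0, fapu=mw0)
print(round(stretch_l_cornerlip))  # 102 FAPU, roughly a tenth of the mouth width
```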


2021
Author(s): Artur Tavares de Carvalho Cruz, Joao Marcelo Xavier Natario Teixeira
Keyword(s):

Author(s): Li Quan, Haiyi Zhang
Keyword(s):

2021
Author(s): Chun-Ming Huang, Hong-Yi Pai
Keyword(s):

2021
Author(s): Jiajun Huang, Xueyu Wang, Bo Du, Pei Du, Chang Xu
Keyword(s):
