Development of a visual speech synthesizer via second-order isomorphism

Author(s):  
Jintao Jiang ◽  
Justin M. Aronoff ◽  
Lynne E. Bernstein

2001 ◽
Author(s):  
Jay J. Williams ◽  
Aggelos K. Katsaggelos ◽  
Dean C. Garstecki

1998 ◽  
Vol 21 (4) ◽  
pp. 484-493
Author(s):  
Shimon Edelman

Proximal mirroring of distal similarities is, at present, the only solution to the problem of representation that is both theoretically sound (for reasons discussed in the target article) and practically feasible (as attested by the performance of the Chorus model). Augmenting the latter with a capability to refer selectively to retinotopically defined object fragments should lead to a comprehensive theory of shape processing.


2002 ◽  
Vol 25 (2) ◽  
pp. 182-183 ◽  
Author(s):  
Hedy Amiri ◽  
Chad J. Marsolek

According to Pylyshyn, depictive representations can be explanatory only if a certain kind of first-order isomorphism exists between the mental representations and real-world displays. What about a system with second-order isomorphism (similarities between different mental representations corresponding to similarities between different real-world displays)? Such a system may help to address whether “depictive” representations contribute to the visual nature of imagery.
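Second-order isomorphism as described here can be pictured as a simple computational check (the toy stimuli and distance measures below are invented for illustration, not drawn from the article): rather than asking whether each representation resembles its display, one asks whether the pattern of pairwise similarities among representations mirrors the pattern of pairwise similarities among displays.

```python
import numpy as np

def pairwise_dist(items):
    """All pairwise Euclidean distances, returned as a flat vector
    (upper triangle only, so each pair is counted once)."""
    items = np.asarray(items, dtype=float)
    d = np.linalg.norm(items[:, None, :] - items[None, :, :], axis=-1)
    iu = np.triu_indices(len(items), k=1)
    return d[iu]

def second_order_match(displays, representations):
    """Correlate display similarities with representation similarities.
    A value near 1 indicates a strong second-order isomorphism."""
    return np.corrcoef(pairwise_dist(displays),
                       pairwise_dist(representations))[0, 1]

# Toy example: the representations are a rotated, scaled copy of the
# displays, so no representation resembles its display point-for-point
# (first-order isomorphism fails), yet the similarity structure is
# preserved up to a scale factor.
displays = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
representations = 2.5 * displays @ rot.T

print(second_order_match(displays, representations))  # ≈ 1.0
```

Because rotation and uniform scaling preserve distance ratios exactly, the two distance vectors are perfectly linearly related and the correlation is 1, even though the individual representations look nothing like their displays.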


2001 ◽  
Vol 01 (01) ◽  
pp. 19-26 ◽  
Author(s):  
Pengyu Hong ◽  
Zhen Wen ◽  
Thomas S. Huang

We present the iFACE system, a visual speech synthesizer that provides a form of virtual face-to-face communication. The system offers an interactive tool with which the user can customize a graphic head model for a person's virtual agent from his or her range data. Texture is then mapped onto the customized model to achieve a realistic appearance. Face animations are produced by driving the model with either a text stream or a speech stream. A set of basic facial shapes and head actions is built manually and used to synthesize expressive visual speech according to rules.
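The rule-based synthesis step in the abstract can be sketched in miniature (the viseme shapes, parameter names, and blending rule below are hypothetical, not the iFACE implementation): a stream of visemes derived from text is mapped to basic facial key shapes, and intermediate frames are produced by interpolating between successive key shapes.

```python
import numpy as np

# Hypothetical basic facial shapes: each viseme is a small set of
# lip-control parameters (mouth width, mouth height, jaw drop).
VISEMES = {
    "sil": np.array([0.0, 0.0, 0.0]),   # neutral / silence
    "AA":  np.array([0.4, 0.9, 0.8]),   # open vowel
    "B":   np.array([0.2, 0.0, 0.1]),   # closed lips
    "IY":  np.array([0.9, 0.3, 0.2]),   # spread lips
}

def synthesize(viseme_stream, frames_per_transition=4):
    """Drive the face model from a viseme stream: linearly
    interpolate control parameters between successive key shapes."""
    keys = [VISEMES[v] for v in viseme_stream]
    frames = []
    for a, b in zip(keys, keys[1:]):
        for t in np.linspace(0.0, 1.0, frames_per_transition,
                             endpoint=False):
            frames.append((1 - t) * a + t * b)
    frames.append(keys[-1])  # hold the final key shape
    return np.array(frames)

frames = synthesize(["sil", "B", "AA", "IY", "sil"])
print(frames.shape)  # (17, 3): 4 transitions x 4 frames + final key
```

A real system would time-align the transitions to phoneme durations and apply coarticulation rules rather than plain linear blending, but the key-shape-plus-interpolation structure is the same.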

