M. Vajaš and G. Rozinaj (Slovakia)
Talking head, speech synthesis, multimodal user interface, face animation
This paper describes how to analyze, design and implement a simulator of human facial mimics for a text-to-speech synthesis system. It demonstrates how to solve such problems as a mathematical representation of the human face, image-voice synchronization, emotion animation and creation of the simulation with a minimal volume of redundant data. As a final implementation, the functionality and capabilities of this proposal are demonstrated in the MorphyGL application. The implemented solution is based on a method known as weighted morphing. Several characteristic positions of the face, called "key-frames", are defined. The animation is then processed as a morphing between key-frames according to a defined interpolating function. Finally, a weight value is used to adjust the final output. Based on the results of this work, a conclusion and a vision for future projects are given at the end of this paper.
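The weighted-morphing idea summarized above can be illustrated with a minimal sketch: each key-frame is treated as a set of vertex positions, and the animated face is a blend of two key-frames controlled by an interpolation parameter and a weight. The type and function names below (Vertex, KeyFrame, morph) are illustrative assumptions and are not taken from MorphyGL.

```cpp
// Minimal sketch of weighted morphing between two facial key-frames.
// Assumed, illustrative names; not the MorphyGL implementation.
#include <vector>
#include <cstddef>

struct Vertex { float x, y, z; };

using KeyFrame = std::vector<Vertex>;   // one characteristic face position

// Blend two key-frames: t in [0,1] is the value of the interpolating
// function over time, w scales how strongly the target key-frame
// (e.g. an emotion) is applied to the source face.
KeyFrame morph(const KeyFrame& from, const KeyFrame& to, float t, float w)
{
    KeyFrame out(from.size());
    const float a = t * w;              // combined blend factor
    for (std::size_t i = 0; i < from.size(); ++i) {
        out[i].x = (1.0f - a) * from[i].x + a * to[i].x;
        out[i].y = (1.0f - a) * from[i].y + a * to[i].y;
        out[i].z = (1.0f - a) * from[i].z + a * to[i].z;
    }
    return out;
}
```

In such a scheme the interpolating function supplies t for each rendered frame (e.g. driven by the speech timeline), while the weight w lets the same pair of key-frames produce stronger or weaker expressions without storing additional face data.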