S.-W. Kim, K.-H. Choi, J.-W. Moon, and S.-Y. Park (Korea)
Facial animation, Talking head, Mobile TTS, MPEG4
We present the design and architecture of a system for automatically creating and rendering a talking head across server and client PDAs. In particular, a talking head system (THS) for a PDA mobile platform is developed that animates the face of a speaking 3D avatar as if a real person were speaking. To build the 3D model automatically, the proposed system employs an automatic feature extraction scheme: a boundary extraction method based on a pseudo moving difference, and a facial shape extraction algorithm that fits an ellipse model controlled by three anchor points. The extracted features, including the facial outline and the locations and sizes of the eyes, nose, and mouth, are then sent to the 3D model server to build a talking head in full conformity with the MPEG-4 specification. After receiving text with emotion codes from the client PDA, the server PDA activates a SAPI-compliant text-to-speech (TTS) engine and an MPEG-4-compliant face animation generator. Experimental results show that the system has great potential for implementing a talking head for Korean text on a mobile platform.
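The abstract does not specify which three anchor points control the ellipse model. A minimal sketch of one plausible parameterization, assuming the anchors are the two temple points (defining the horizontal axis) and the chin point (defining the vertical extent), could look like this; the function names and the axis-aligned simplification are illustrative assumptions, not the paper's method:

```python
import math

def face_ellipse(left_temple, right_temple, chin):
    """Fit an axis-aligned ellipse to a face from three anchor points.

    Assumed anchors (illustrative, not from the paper):
      left_temple, right_temple -- endpoints of the horizontal axis
      chin                      -- lowest point of the face outline
    Returns (cx, cy, a, b): center and horizontal/vertical semi-axes.
    """
    cx = (left_temple[0] + right_temple[0]) / 2.0  # center x: midpoint of temples
    cy = (left_temple[1] + right_temple[1]) / 2.0  # center y: same midpoint
    a = abs(right_temple[0] - left_temple[0]) / 2.0  # horizontal semi-axis
    b = abs(chin[1] - cy)                            # vertical semi-axis from chin
    return (cx, cy, a, b)

def boundary_points(cx, cy, a, b, n=32):
    """Sample n points on the ellipse outline (useful as a face-shape prior)."""
    return [(cx + a * math.cos(2 * math.pi * k / n),
             cy + b * math.sin(2 * math.pi * k / n)) for k in range(n)]
```

In such a scheme, moving any one anchor reshapes the ellipse directly, which is presumably what makes a three-point control convenient for refining an automatically extracted face boundary.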