AVATAR-BASED HUMAN COMMUNICATION: A REVIEW

Jin Hou, Fang Xu, Ling Wu, and Huihui Mi

References

[1] W. Steptoe and A. Steed, High-fidelity avatar eye representation, Proc. IEEE Int. Conf. Virtual Reality, Nevada, USA, 2008, 111–114.
[2] Z.X. Yang, L. Li, and D. Zhang, Embodiment of text based on virtual robotic avatar, Proc. IEEE Int. Conf. Robotics and Biomimetics, Sanya, China, 2007, 1285–1289.
[3] B. Mu, Y.H. Yang, and J.P. Zhang, Implementation of the interactive gestures of virtual avatar based on a multi-user virtual learning environment, Proc. IEEE Int. Conf. Information Technology and Computer Science, Kiev, Ukraine, 2009, 613–617.
[4] Sony’s 3-D Chat website, http://www.so-net.ne.jp/paw/, 2007.
[5] X.D. Duan and H. Liu, Detection of hand-raising gestures based on body silhouette analysis, Proc. IEEE Int. Conf. Robotics and Biomimetics, Bangkok, Thailand, 2009, 1756–1761.
[6] C. Tzafestas, N. Mitsou, N. Georgakarakos, O. Diamanti, P. Maragos, S.E. Fotinea, and E. Efthimiou, Gestural teleoperation of a mobile robot based on visual recognition of sign language static handshapes, Proc. 18th IEEE Int. Symp. Robot and Human Interactive Communication, Toyama, Japan, 2009, 1073–1079.
[7] X.T. Wen and Y.Y. Niu, A method for hand gesture recognition based on morphology and fingertip-angle, Proc. 2nd Int. Conf. Computer and Automation Engineering, Singapore, 2010, 688–691.
[8] J.J. Zhang and M.G. Zhao, A vision-based gesture recognition system for human-robot interaction, Proc. IEEE Int. Conf. Robotics and Biomimetics, Guilin, China, 2009, 2096–2101.
[9] B. Mandal and H.L. Eng, Regularized discriminant analysis for holistic human activity recognition, IEEE Intelligent Systems, 27(9), 2012, 21–31.
[10] L.W. Howe, F. Wong, and A. Chekima, Comparison of hand segmentation methodologies for hand gesture recognition, Proc. IEEE Int. Symp. Information Technology, Kuala Lumpur, Malaysia, 2008, 1–7.
[11] M.E. Jabon, S.J. Ahn, and J.N. Bailenson, Automatically analyzing facial-feature movements to identify human errors, IEEE Intelligent Systems, 26(2), 2011, 54–63.
[12] H. Zhang, D. Fricker, T.G. Smith, and C. Yu, Real-time adaptive behaviours in multimodal human-avatar interactions, Proc. Int. Conf. Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, Beijing, China, 2010, 1–8.
[13] M.H. Mahoor, S. Cadavid, and M. Abdel-Mottaleb, Multi-modal ear and face modeling and recognition, Proc. 16th IEEE Int. Conf. Image Processing, Cairo, Egypt, 2009, 4137–4140.
[14] J. Gratch, J. Rickel, E. André, J. Cassell, E. Petajan, and N. Badler, Creating interactive virtual humans: Some assembly required, IEEE Intelligent Systems, 17(4), 2002, 54–63.
[15] C.F. Chang, B.Q. Lin, Y.C. Chen, and Y.F. Chiu, Real-time soft shadow for displacement mapped surfaces, Proc. IEEE Int. Conf. Multimedia and Expo, New York, USA, 2009, 1254–1257.
[16] S.Y. Kim, J.H. Cho, A. Koschan, and M.A. Abidi, 3D video generation and service based on a TOF depth sensor in MPEG-4 multimedia framework, IEEE Trans. Consumer Electronics, 56(3), 2010, 201–211.
[17] C. Plagemann, V. Ganapathi, D. Koller, and S. Thrun, Real-time identification and localization of body parts from depth images, Proc. IEEE Int. Conf. Robotics and Automation, Anchorage, AK, USA, 2010, 3108–3113.
[18] C. Tay and R. Green, Human motion capture and representation, Proc. IEEE Int. Conf. Image and Vision Computing, Wellington, New Zealand, 2009, 209–214.
[19] S. Das, L. Trutoiu, A. Murai, D. Alcindor, M. Oh, F. De la Torre, and J. Hodgins, Quantitative measurement of motor symptoms in Parkinson’s disease: A study with full-body motion capture data, Proc. IEEE Int. Conf. Engineering in Medicine and Biology Society, Boston, MA, USA, 2011, 6789–6792.
[20] C.K. Wan, B.Z. Yuan, and Z.J. Miao, Model-based markerless human body motion capture using multiple cameras, Proc. IEEE Int. Conf. Multimedia and Expo, Beijing, China, 2007, 1099–1102.
[21] I.D. Horswill, Lightweight procedural animation with believable physical interactions, IEEE Trans. Computational Intelligence and AI in Games, 1(1), 2009, 39–49.
[22] I.J. Kim and H.S. Ko, 3D lip-synch generation with data-faithful machine learning, Computer Graphics Forum, 26(3), 2007, 295–301.
[23] Q. Gu, J.L. Zhou, and K. Ouyang, An approach of scalable MPEG-4 video bitstreams with network coding for P2P swarming system, Proc. IEEE Int. Conf. Networking, Architecture, and Storage, Hunan, China, 2009, 239–242.
[24] J.N. Bailenson and J. Blascovich, This is your mind online, IEEE Spectrum, 48(6), 2011, 78–81.
[25] R.D. Gawali and B.B. Meshram, Agent-based autonomous examination systems, Proc. Int. Conf. Intelligent Agent and Multi-Agent Systems, Chennai, India, 2009, 1–7.
[26] W.M. Wang, X.Q. Yan, Y.M. Xie, J. Qin, W.M. Pang, and P.A. Heng, A physically-based modeling and simulation framework for facial animation, Proc. Int. Conf. Image and Graphics, Xi’an, China, 2009, 521–526.
[27] W.H. Yu, Online shopping assistant based on multi BDI agent, Proc. IEEE Int. Conf. E-Business and Information System Security, Wuhan, China, 2009, 1–4.
[28] S. Marcos, J.G. Bermejo, and E. Zalama, A realistic facial animation suitable for human–robot interfacing, Proc. IEEE Int. Conf. Intelligent Robots and Systems, Nice, France, 2008, 3810–3815.
[29] X.H. Ma and Z.G. Deng, Natural eye motion synthesis by modeling gaze-head coupling, Proc. IEEE Int. Conf. Virtual Reality, Lafayette, LA, USA, 2009, 143–150.
[30] N. Nadtoka, J.R. Tena, A. Hilton, and J. Edge, High-resolution animation of facial dynamics, Proc. IET Conf. Visual Media Production, London, England, 2007, 1–10.
[31] K.G. Oh, C.Y. Jung, Y.G. Lee, and S.J. Kim, Real-time lip synchronization between text-to-speech (TTS) system and robot mouth, Proc. IEEE Int. Symp. Robot and Human Interactive Communication, Viareggio, Italy, 2010, 620–625.
[32] M. Fabri, S.Y. Awad Elzouki, and D.J. Moore, Emotionally expressive avatars for chatting, learning and therapeutic intervention, Human-Computer Interaction, Part III, LNCS 4552, 2007, 461–475.
[33] J. Hou, X. Wang, F. Xu, V.D. Nguyen, and L. Wu, Humanoid personalized avatar through multiple natural language processing, World Academy of Science, Engineering and Technology, 59(2), 2009, 230–235.
[34] J.H. Janssen, J.N. Bailenson, W.A. IJsselsteijn, and J.H.D.M. Westerink, Intimate heartbeats: Opportunities for affective communication technology, IEEE Trans. Affective Computing, 1(2), 2010, 72–80.
[35] Y. Jung, Moderating effects of social presence on behavioural conformation in virtual reality environments: A comparison between social presence and identification, Proc. 12th Annual Int. Workshop on Presence, Los Angeles, CA, USA, 2009, 1–6.
[36] T. Carrigy, K. Naliuka, N. Paterson, and M. Haahr, Design and evaluation of player experience of a location-based mobile game, Proc. 6th Nordic Conf. Human–Computer Interaction, Reykjavik, Iceland, 2010, 92–101.
[37] E. Bevacqua, A listening agent exhibiting variable behaviour, Proc. Int. Conf. Intelligent Virtual Agents, Tokyo, Japan, 2008, 262–269.
[38] R. Pirrone, V. Cannella, and G. Russo, GAIML: A new language for verbal and graphical interaction in chatbots, Proc. Int. Conf. Complex, Intelligent and Software Intensive Systems, Barcelona, Spain, 2008, 715–720.
[39] The Virtual Human Markup Language, http://www.vhml.org/, 2007.
