AVATAR-BASED HUMAN COMMUNICATION: A REVIEW

Jin Hou, Fang Xu, Ling Wu, and Huihui Mi

References

[1] W. Steptoe and A. Steed, High-fidelity avatar eye-representation, Proc. IEEE Int. Conf. Virtual Reality, Nevada, USA, 2008, 111–114.
[2] Z.X. Yang, L. Li, and D. Zhang, Embodiment of text based on virtual robotic avatar, Proc. IEEE Int. Conf. Robotics and Biomimetics, Sanya, China, 2007, 1285–1289.
[3] B. Mu, Y.H. Yang, and J.P. Zhang, Implementation of the interactive gestures of virtual avatar based on a multi-user virtual learning environment, Proc. IEEE Int. Conf. Information Technology and Computer Science, Kiev, Ukraine, 2009, 613–617.
[4] Sony’s 3-D Chat web site, http://www.so-net.ne.jp/paw/, 2007.
[5] X.D. Duan and H. Liu, Detection of hand-raising gestures based on body silhouette analysis, Proc. IEEE Int. Conf. Robotics and Biomimetics, Bangkok, Thailand, 2009, 1756–1761.
[6] C. Tzafestas, N. Mitsou, N. Georgakarakos, O. Diamanti, P. Maragos, S.E. Fotinea, and E. Efthimiou, Gestural teleoperation of a mobile robot based on visual recognition of sign language static handshapes, Proc. 18th IEEE Int. Symposium Robot and Human Interactive Communication, Toyama, Japan, 2009, 1073–1079.
[7] X.T. Wen and Y.Y. Niu, A method for hand gesture recognition based on morphology and Fingertip-Angle, Proc. 2nd Int. Conf. Computer and Automation Engineering, Singapore, 2010, 688–691.
[8] J.J. Zhang and M.G. Zhao, A vision-based gesture recognition system for human-robot interaction, Proc. IEEE Int. Conf. Robotics and Biomimetics, Guilin, China, 2009, 2096–2101.
[9] B. Mandal and H.L. Eng, Regularized discriminant analysis for holistic human activity recognition, IEEE Intelligent Systems, 27(9), 2012, 21–31.
[10] L.W. Howe, F. Wong, and A. Chekima, Comparison of hand segmentation methodologies for hand gesture recognition, Proc. IEEE Int. Symposium Information Technology, Kuala Lumpur, Malaysia, 2008, 1–7.
[11] M.E. Jabon, S.J. Ahn, and J.N. Bailenson, Automatically analyzing facial-feature movements to identify human errors, IEEE Intelligent Systems, 26(2), 2011, 54–63.
[12] H. Zhang, D. Fricker, T.G. Smith, and C. Yu, Real-time adaptive behaviours in multimodal human-avatar interactions, Proc. Int. Conf. Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, Beijing, China, 2010, 1–8.
[13] M.H. Mahoor, S. Cadavid, and M. Abdel-Mottaleb, Multi-modal ear and face modeling and recognition, Proc. 16th IEEE Int. Conf. Image Processing, Cairo, Egypt, 2009, 4137–4140.
[14] J. Gratch, J. Rickel, E. André, J. Cassell, E. Petajan, and N. Badler, Creating interactive virtual humans: some assembly required, IEEE Intelligent Systems, 17(4), 2002, 54–63.
[15] C.F. Chang, B.Q. Lin, Y.C. Chen, and Y.F. Chiu, Real-time soft shadow for displacement mapped surfaces, Proc. IEEE Int. Conf. Multimedia and Expo, New York, USA, 2009, 1254–1257.
[16] S.Y. Kim, J.H. Cho, A. Koschan, and M.A. Abidi, 3D video generation and service based on a TOF depth sensor in MPEG-4 multimedia framework, IEEE Trans. Consumer Electronics, 56(3), 2010, 201–211.
[17] C. Plagemann, V. Ganapathi, D. Koller, and S. Thrun, Real-time identification and localization of body parts from depth images, Proc. IEEE Int. Conf. Robotics and Automation, Anchorage, AK, USA, 2010, 3108–3113.
[18] C. Tay and R. Green, Human motion capture and representation, Proc. IEEE Int. Conf. Image and Vision Computing, Wellington, New Zealand, 2009, 209–214.
[19] S. Das, L. Trutoiu, A. Murai, D. Alcindor, M. Oh, F. De la Torre, and J. Hodgins, Quantitative measurement of motor symptoms in Parkinson’s disease: A study with full-body motion capture data, Proc. IEEE Int. Conf. Engineering in Medicine and Biology Society, Pittsburgh, PA, USA, 2011, 6789–6792.
[20] C.K. Wan, B.Z. Yuan, and Z.J. Miao, Model-based markerless human body motion capture using multiple cameras, Proc. IEEE Int. Conf. Multimedia and Expo, Beijing, China, 2007, 1099–1102.
[21] I.D. Horswill, Lightweight procedural animation with believable physical interactions, IEEE Trans. Computational Intelligence and AI in Games, 1(1), 2009, 39–49.
[22] I.J. Kim and H.S. Ko, 3D lip-synch generation with data-faithful machine learning, Computer Graphics Forum, 26(3), 2007, 295–301.
[23] Q. Gu, J.L. Zhou, and K. Ouyang, An approach of scalable MPEG-4 video bitstreams with network coding for P2P swarming system, Proc. IEEE Int. Conf. Networking, Architecture, and Storage, Hunan, China, 2009, 239–242.
[24] J.N. Bailenson and J. Blascovich, This is your mind online, IEEE Spectrum, 48(6), 2011, 78–81.
[25] R.D. Gawali and B.B. Meshram, Agent-based autonomous examination systems, Proc. Int. Conf. Intelligent Agent and Multi-Agent Systems, Chennai, India, 2009, 1–7.
[26] W.M. Wang, X.Q. Yan, Y.M. Xie, J. Qin, W.M. Pang, and P.-A. Heng, A physically-based modeling and simulation framework for facial animation, Proc. Int. Conf. Image and Graphics (ICIG), Xi’an, China, 2009, 521–526.
[27] W.H. Yu, Online shopping assistant based on multi BDI agent, Proc. IEEE Int. Conf. E-Business and Information System Security, Wuhan, China, 2009, 1–4.
[28] S. Marcos, J.G. Bermejo, and E. Zalama, A realistic facial animation suitable for human–robot interfacing, Proc. IEEE Int. Conf. Intelligent Robots and Systems, Nice, France, 2008, 3810–3815.
[29] X.H. Ma and Z.G. Deng, Natural eye motion synthesis by modeling gaze-head coupling, Proc. IEEE Int. Conf. Virtual Reality, Lafayette, LA, USA, 2009, 143–150.
[30] N. Nadtoka, J.R. Tena, A. Hilton, and J. Edge, High-resolution animation of facial dynamics, Proc. IET Conf. Visual Media Production (CVMP), London, England, 2007, 1–10.
[31] K.G. Oh, C.Y. Jung, Y.G. Lee, and S.J. Kim, Real-time lip synchronization between text-to-speech (TTS) system and robot mouth, Proc. IEEE Int. Symposium Robot and Human Interactive Communication, Viareggio, Italy, 2010, 620–625.
[32] M. Fabri, S.Y.A. Elzouki, and D.J. Moore, Emotionally expressive avatars for chatting, learning and therapeutic intervention, Human-Computer Interaction, Part III, LNCS 4552, 2007, 461–475.
[33] J. Hou, X. Wang, F. Xu, V.D. Nguyen, and L. Wu, Humanoid personalized avatar through multiple natural language processing, World Academy of Science, Engineering and Technology, 59(2), 2009, 230–235.
[34] J.H. Janssen, J.N. Bailenson, W.A. IJsselsteijn, and J.H.D.M. Westerink, Intimate heartbeats: Opportunities for affective communication technology, IEEE Trans. Affective Computing, 1(2), 2010, 72–80.
[35] Y. Jung, Moderating effects of social presence on behavioural conformation in virtual reality environments: A comparison between social presence and identification, Proc. 12th Annual Int. Workshop on Presence, Los Angeles, CA, USA, 2009, 1–6.
[36] T. Carrigy, K. Naliuka, N. Paterson, and M. Haahr, Design and evaluation of player experience of a location-based mobile game, Proc. 6th Nordic Conf. Human–Computer Interaction, Reykjavik, Iceland, 2010, 92–101.
[37] E. Bevacqua, A listening agent exhibiting variable behaviour, Proc. Intelligent Virtual Agents, Tokyo, Japan, 2008, 262–269.
[38] R. Pirrone, V. Cannella, and G. Russo, GAIML: A new language for verbal and graphical interaction in chatbots, Proc. Int. Conf. Complex, Intelligent and Software Intensive Systems (CISIS), Barcelona, Spain, 2008, 715–720.
[39] The Virtual Human Markup Language, http://www.vhml.org/, 2007.
