Luis Almeida, Paulo J. Menezes, Lakmal D. Seneviratne, and Jorge M.M. Dias
3D reconstruction, augmented reality, human-robot interaction, tele-presence
This research proposes an on-line incremental 3D reconstruction framework for human-machine interaction (HMI) and augmented reality (AR) applications. The area offers a wide variety of research opportunities, including high-performance imaging, multi-view video, and virtual view synthesis. One fundamental challenge in geometry reconstruction from traditional camera arrays is the lack of accuracy in low-texture or repeated-pattern regions. Our approach explores virtual view synthesis through body motion estimation and hybrid sensors composed of video cameras and a depth camera based on structured light or time-of-flight. We present a full 3D body reconstruction system that combines visual features and shape-based alignment. The proposed mesh generation algorithm is based on Crust and efficiently adds new vertices to an already existing surface. Modeling is based on meshes computed from dense depth maps in order to reduce the amount of data to be processed and to create a 3D mesh representation that is independent of the viewpoint.
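As a rough illustration of the depth-map-to-mesh step described above, the sketch below back-projects a dense depth map into 3D points with a pinhole camera model and connects neighbouring pixels into triangles. This is only a minimal, assumption-laden example, not the authors' Crust-based incremental algorithm: the intrinsics (fx, fy, cx, cy), the depth-discontinuity threshold max_edge, and the synthetic depth map are all hypothetical.

```python
# Minimal sketch (not the authors' Crust-based method): back-project a dense
# depth map into a 3D point cloud using assumed pinhole intrinsics, then
# triangulate neighbouring pixels into a view-dependent grid mesh.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an HxW depth map (metres) to an HxWx3 array of 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))

def grid_mesh(depth, max_edge=0.05):
    """Connect valid neighbouring pixels into triangles, skipping depth jumps."""
    h, w = depth.shape
    idx = lambda r, c: r * w + c          # flatten pixel coords to a vertex index
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            d = depth[r:r + 2, c:c + 2]
            # Only mesh 2x2 blocks with valid depth and no large discontinuity.
            if np.all(d > 0) and d.max() - d.min() < max_edge:
                tris.append((idx(r, c), idx(r + 1, c), idx(r, c + 1)))
                tris.append((idx(r + 1, c), idx(r + 1, c + 1), idx(r, c + 1)))
    return np.asarray(tris)

# Usage with a synthetic 4x4 depth map and made-up intrinsics.
depth = np.full((4, 4), 1.0)
points = depth_to_points(depth, fx=525.0, fy=525.0, cx=2.0, cy=2.0)
faces = grid_mesh(depth)
print(points.reshape(-1, 3).shape, faces.shape)
```

Working on such a mesh rather than the raw depth pixels is what keeps the amount of data manageable; the paper's incremental step of adding new vertices to an existing surface is not reproduced here.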