T. Nakajima (Japan)
Computer Vision, Memory-based State Prediction, Particle Filter
Conventional model-based visual tracking assumes a mathematical state prediction model in advance. Thanks to the prediction model, a tracker can locate a target in visual clutter. However, if the target moves contrary to the pre-defined prediction model, the tracker can easily miss the target. To overcome this problem, we introduce memory-based state prediction, with which a tracker can learn the object's motion on the fly during tracking. In addition, we propose a new framework for visual object tracking that integrates memory-based state prediction with conventional mathematical state prediction. Our experiments suggest that the new framework permits a visual tracker to learn and track unexpected motion in the real world.
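To make the integration concrete, the sketch below shows one possible way to mix a conventional mathematical motion model with a memory of recently observed displacements inside a particle filter's prediction step. It is a minimal illustration under assumed details, not the paper's implementation; names such as MemoryPredictor, mix_ratio, and the constant-velocity model are illustrative assumptions.

```python
# Minimal sketch (assumed details, not the paper's code) of a particle-filter
# prediction step that blends a mathematical motion model with a memory of
# observed target displacements.
import numpy as np


class MemoryPredictor:
    """Stores recent state displacements and replays them as predictions."""

    def __init__(self, capacity=50):
        self.displacements = []          # memory of observed motion vectors
        self.capacity = capacity

    def update(self, prev_state, new_state):
        # Remember how the target actually moved between frames.
        self.displacements.append(new_state - prev_state)
        if len(self.displacements) > self.capacity:
            self.displacements.pop(0)

    def sample(self, n, rng):
        # Draw past displacements at random as candidate motions.
        idx = rng.integers(0, len(self.displacements), size=n)
        return np.array(self.displacements)[idx]


def predict_particles(particles, velocity, memory, rng, mix_ratio=0.5, noise=2.0):
    """Propagate particles with a mixture of two prediction models.

    particles: (N, 2) array of x, y positions.
    velocity:  constant-velocity estimate used by the mathematical model.
    memory:    MemoryPredictor holding learned motion; may be empty at start.
    mix_ratio: fraction of particles propagated by the memory-based model
               (an assumed blending scheme, chosen here for illustration).
    """
    n = len(particles)
    moved = particles + velocity            # conventional mathematical model
    if memory.displacements:
        use_memory = rng.random(n) < mix_ratio
        moved[use_memory] = particles[use_memory] + memory.sample(
            int(use_memory.sum()), rng)
    return moved + rng.normal(0.0, noise, size=particles.shape)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.normal([100.0, 100.0], 5.0, size=(200, 2))
    memory = MemoryPredictor()
    memory.update(np.array([100.0, 100.0]), np.array([103.0, 98.0]))
    print(predict_particles(particles, np.array([2.0, 0.0]), memory, rng)[:3])
```

In this reading, particles that draw from the memory can follow motion patterns the target has actually exhibited, even when those patterns violate the pre-defined model, while the remaining particles keep the robustness of the conventional prediction.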