ROBUST TARGET TRACKING AND FOLLOWING FOR A MOBILE ROBOT

Kang Li, Xiaoguang Zhao, Shiying Sun, and Min Tan

References

[1] C. H. Lin and K. T. Song, An interactive control architecture for mobile robots, International Journal of Robotics and Automation, 28(1), 2013, 1–12.
[2] T. Huang, P. Yang, K. Yang, et al., Navigation of mobile robot in unknown environment based on T–S neuro-fuzzy system, International Journal of Robotics and Automation, 30(4), 2015, 384–396.
[3] M. D. Berkemeier and L. Ma, Visual servoing an omnidirectional mobile robot to parking lot lines, International Journal of Robotics and Automation, 29(1), 2014, 67–80.
[4] J. Li, J. Wang, S. X. Yang, et al., SLAM based on information fusion of stereo vision and electronic compass, International Journal of Robotics and Automation, 31(3), 2016, 243–250.
[5] X. Li, W. Hu, C. Shen, et al., A survey of appearance models in visual object tracking, ACM Transactions on Intelligent Systems and Technology, 4(4), 2013, 58.
[6] Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, 521(7553), 2015, 436–444.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, Lake Tahoe, 2012, 1097–1105.
[8] J. Donahue, Y. Jia, O. Vinyals, et al., DeCAF: A deep convolutional activation feature for generic visual recognition, International Conference on Machine Learning (ICML), Beijing, 2014, 647–655.
[9] R. Girshick, J. Donahue, T. Darrell, et al., Rich feature hierarchies for accurate object detection and semantic segmentation, Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Columbus, 2014, 580–587.
[10] Y. Wu, J. Lim, and M. H. Yang, Online object tracking: A benchmark, 2013 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'13), Portland, 2013, 2411–2418.
[11] D. Ross, J. Lim, R. S. Lin, and M. H. Yang, Incremental learning for robust visual tracking, International Journal of Computer Vision, 77(1–3), 2008, 125–141.
[12] J. Wright, A. Y. Yang, A. Ganesh, et al., Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2), 2009, 210–227.
[13] X. Mei and H. Ling, Robust visual tracking using L1 minimization, 2009 IEEE 12th Int. Conf. on Computer Vision, Kyoto, 2009, 1436–1443.
[14] C. Bao, Y. Wu, H. Ling, et al., Real-time robust L1 tracker using accelerated proximal gradient approach, 2012 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'12), Rhode Island, 2012, 1830–1837.
[15] D. Wang, H. Lu, and M. H. Yang, Online object tracking with sparse prototypes, IEEE Transactions on Image Processing, 22(1), 2013, 314–325.
[16] D. Wang, H. Lu, and M. H. Yang, Robust visual tracking via least soft-threshold squares, IEEE Transactions on Circuits and Systems for Video Technology, 26(9), 2016, 1709–1721.
[17] D. Wang, H. Lu, Z. Xiao, et al., Inverse sparse tracker with a locally weighted distance metric, IEEE Transactions on Image Processing, 24(9), 2015, 2646–2657.
[18] H. Li, Y. Li, and F. Porikli, Robust online visual tracking with a single convolutional neural network, Asian Conference on Computer Vision (Cham: Springer, 2014), 194–209.
[19] N. Wang and D. Yeung, Learning a deep compact image representation for visual tracking, Advances in Neural Information Processing Systems, Lake Tahoe, 2013, 809–817.
[20] S. Hong, T. You, S. Kwak, et al., Online tracking by learning discriminative saliency map with convolutional neural network, International Conference on Machine Learning (ICML), Lille, 2015, 597–606.
[21] C. Ma, J. B. Huang, X. Yang, et al., Hierarchical convolutional features for visual tracking, Proc. of the IEEE Int. Conf. on Computer Vision, Santiago, 2015, 3074–3082.
[22] L. Wang, H. Lu, X. Ruan, et al., Deep networks for saliency detection via local estimation and global search, Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Boston, 2015, 3183–3192.
[23] H. Grabner, C. Leistner, and H. Bischof, Semi-supervised online boosting for robust tracking, European Conf. on Computer Vision (Berlin: Springer, 2008), 234–247.
[24] S. Avidan, Ensemble tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(2), 2007, 261–271.
[25] S. Hare, A. Saffari, and P. Torr, Struck: Structured output tracking with kernels, 2011 IEEE Int. Conf. on Computer Vision, Barcelona, 2011, 263–270.
[26] B. Babenko, M. H. Yang, and S. Belongie, Robust object tracking with online multiple instance learning, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8), 2011, 1619–1632.
[27] W. Huang, J. Gu, and X. Ma, Compressive sensing with weighted local classifiers for robot visual tracking, International Journal of Robotics and Automation, 31(5), 2016, 416–427.
[28] K. Zhang, L. Zhang, Q. Liu, et al., Fast visual tracking via dense spatio-temporal context learning, European Conf. on Computer Vision (Cham: Springer, 2014), 127–141.
[29] J. F. Henriques, R. Caseiro, P. Martins, et al., High-speed tracking with kernelized correlation filters, IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), 2015, 583–596.
[30] J. Satake and J. Miura, Robust stereo-based person detection and tracking for a person following robot, ICRA Workshop on People Detection and Tracking, Kobe, 2009, 1–10.
[31] S. Jia, X. Xuan, T. Xu, et al., Target tracking for mobile robot based on spatio-temporal context model, 2015 IEEE Int. Conf. on Robotics and Biomimetics (ROBIO), 2015, 976–981.
[32] Convolution theorem, Wikipedia, https://en.wikipedia.org/wiki/Convolution_theorem
[33] J. W. Cooley and J. W. Tukey, An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation, 19(90), 1965, 297–301.
[34] N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, 2005 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR'05), Vol. 1, San Diego, 2005, 886–893.
[35] A. Visioli, Practical PID control (London: Springer Science & Business Media, 2006).
[36] A. Whitbrook, Programming mobile robots with Aria and Player: A guide to C++ object-oriented control (London: Springer Science & Business Media, 2009).
[37] X. Jia, H. Lu, and M. H. Yang, Visual tracking via adaptive structural local sparse appearance model, 2012 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'12), Rhode Island, 2012, 1822–1829.
[38] Z. Kalal, J. Matas, and K. Mikolajczyk, P–N learning: Bootstrapping binary classifiers by structural constraints, 2010 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'10), San Francisco, 2010, 49–56.
[39] W. Zhong, H. Lu, and M. H. Yang, Robust object tracking via sparsity-based collaborative model, 2012 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'12), Rhode Island, 2012, 1838–1845.
[40] T. B. Dinh, N. Vo, and G. Medioni, Context tracker: Exploring supporters and distracters in unconstrained environments, 2011 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'11), Colorado Springs, 2011, 1177–1184.
[41] J. Kwon and K. M. Lee, Tracking by sampling trackers, 2011 IEEE Int. Conf. on Computer Vision, Barcelona, 2011, 1195–1202.
[42] J. Kwon and K. M. Lee, Visual tracking decomposition, 2010 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'10), San Francisco, 2010, 1269–1276.
[43] B. Liu, J. Huang, L. Yang, and C. A. Kulikowski, Robust tracking using local sparse appearance model and K-selection, 2011 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'11), Colorado Springs, 2011, 1313–1320.
[44] K. Li, X. Zhao, S. Sun, and M. Tan, Robust target detection, tracking and following for an indoor mobile robot, 2017 IEEE Conf. on Robotics and Biomimetics (ROBIO), Macao, 2017, 593–598.
