Kang Li, Xiaoguang Zhao, Shiying Sun, and Min Tan
[1] C. H. Lin and K. T. Song, An interactive control architecture for mobile robots, International Journal of Robotics and Automation, 28(1), 2013, 1–12.
[2] T. Huang, P. Yang, K. Yang, et al., Navigation of mobile robot in unknown environment based on T–S neuro-fuzzy system, International Journal of Robotics and Automation, 30(4), 2015, 384–396.
[3] M. D. Berkemeier and L. Ma, Visual servoing an omnidirectional mobile robot to parking lot lines, International Journal of Robotics and Automation, 29(1), 2014, 67–80.
[4] J. Li, J. Wang, S. X. Yang, et al., SLAM based on information fusion of stereo vision and electronic compass, International Journal of Robotics and Automation, 31(3), 2016, 243–250.
[5] X. Li, W. Hu, C. Shen, et al., A survey of appearance models in visual object tracking, ACM Transactions on Intelligent Systems and Technology (TIST), 4(4), 2013, 58.
[6] Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, 521(7553), 2015, 436–444.
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems, Lake Tahoe, 2012, 1097–1105.
[8] J. Donahue, Y. Jia, O. Vinyals, et al., DeCAF: A deep convolutional activation feature for generic visual recognition, International Conference on Machine Learning (ICML), Beijing, 2014, 647–655.
[9] R. Girshick, J. Donahue, T. Darrell, et al., Rich feature hierarchies for accurate object detection and semantic segmentation, Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Columbus, 2014, 580–587.
[10] Y. Wu, J. Lim, and M. H. Yang, Online object tracking: A benchmark, 2013 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'13), Portland, 2013, 2411–2418.
[11] D. Ross, J. Lim, R. S. Lin, and M. H. Yang, Incremental learning for robust visual tracking, International Journal of Computer Vision, 77(1–3), 2008, 125–141.
[12] J. Wright, A. Y. Yang, A. Ganesh, et al., Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2), 2009, 210–227.
[13] X. Mei and H. Ling, Robust visual tracking using L1 minimization, 2009 IEEE 12th Int. Conf. on Computer Vision, Kyoto, 2009, 1436–1443.
[14] C. Bao, Y. Wu, H. Ling, et al., Real time robust L1 tracker using accelerated proximal gradient approach, 2012 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'12), Rhode Island, 2012, 1830–1837.
[15] D. Wang, H. Lu, and M. H. Yang, Online object tracking with sparse prototypes, IEEE Transactions on Image Processing, 22(1), 2013, 314–325.
[16] D. Wang, H. Lu, and M. H. Yang, Robust visual tracking via least soft-threshold squares, IEEE Transactions on Circuits and Systems for Video Technology, 26(9), 2016, 1709–1721.
[17] D. Wang, H. Lu, Z. Xiao, et al., Inverse sparse tracker with a locally weighted distance metric, IEEE Transactions on Image Processing, 24(9), 2015, 2646–2657.
[18] H. Li, Y. Li, and F. Porikli, Robust online visual tracking with a single convolutional neural network, Asian Conference on Computer Vision (Cham: Springer, 2014), 194–209.
[19] N. Wang and D. Yeung, Learning a deep compact image representation for visual tracking, Advances in Neural Information Processing Systems, Lake Tahoe, 2013, 809–817.
[20] S. Hong, T. You, S. Kwak, et al., Online tracking by learning discriminative saliency map with convolutional neural network, International Conference on Machine Learning (ICML), Lille, 2015, 597–606.
[21] C. Ma, J. B. Huang, X. Yang, et al., Hierarchical convolutional features for visual tracking, Proceedings of the IEEE Int. Conf. on Computer Vision, Santiago, 2015, 3074–3082.
[22] L. Wang, H. Lu, X. Ruan, et al., Deep networks for saliency detection via local estimation and global search, Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Boston, 2015, 3183–3192.
[23] H. Grabner, C. Leistner, and H. Bischof, Semi-supervised online boosting for robust tracking, European Conf. on Computer Vision (Springer Berlin Heidelberg, 2008), 234–247.
[24] S. Avidan, Ensemble tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(2), 2007, 261–271.
[25] S. Hare, A. Saffari, and P. Torr, Struck: Structured output tracking with kernels, 2011 IEEE International Conf. on Computer Vision, Barcelona, 2011, 263–270.
[26] B. Babenko, M. H. Yang, and S. Belongie, Robust object tracking with online multiple instance learning, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8), 2011, 1619–1632.
[27] W. Huang, J. Gu, and X. Ma, Compressive sensing with weighted local classifiers for robot visual tracking, International Journal of Robotics and Automation, 31(5), 2016, 416–427.
[28] K. Zhang, L. Zhang, Q. Liu, et al., Fast visual tracking via dense spatio-temporal context learning, European Conf. on Computer Vision (Cham: Springer, 2014), 127–141.
[29] J. F. Henriques, R. Caseiro, P. Martins, et al., High-speed tracking with kernelized correlation filters, IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), 2015, 583–596.
[30] J. Satake and J. Miura, Robust stereo-based person detection and tracking for a person following robot, ICRA Workshop on People Detection and Tracking, Kobe, 2009, 1–10.
[31] S. Jia, X. Xuan, T. Xu, et al., Target tracking for mobile robot based on spatio-temporal context model, 2015 IEEE Int. Conf. on Robotics and Biomimetics (ROBIO), 2015, 976–981.
[32] https://en.wikipedia.org/wiki/Convolution_theorem
[33] J. W. Cooley and J. W. Tukey, An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation, 19(90), 1965, 297–301.
[34] N. Dalal and B. Triggs, Histograms of oriented gradients for human detection, 2005 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR'05), Vol. 1, San Diego, 2005, 886–893.
[35] A. Visioli, Practical PID control (London: Springer Science & Business Media, 2006).
[36] A. Whitbrook, Programming mobile robots with ARIA and Player: A guide to C++ object-oriented control (London: Springer Science & Business Media, 2009).
[37] X. Jia, H. Lu, and M. H. Yang, Visual tracking via adaptive structural local sparse appearance model, 2012 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'12), Rhode Island, 2012, 1822–1829.
[38] Z. Kalal, J. Matas, and K. Mikolajczyk, P-N learning: Bootstrapping binary classifiers by structural constraints, 2010 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'10), San Francisco, 2010, 49–56.
[39] W. Zhong, H. Lu, and M. H. Yang, Robust object tracking via sparsity-based collaborative model, 2012 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'12), Rhode Island, 2012, 1838–1845.
[40] T. B. Dinh, N. Vo, and G. Medioni, Context tracker: Exploring supporters and distracters in unconstrained environments, 2011 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'11), Colorado Springs, 2011, 1177–1184.
[41] J. Kwon and K. M. Lee, Tracking by sampling trackers, 2011 IEEE International Conf. on Computer Vision, Barcelona, 2011, 1195–1202.
[42] J. Kwon and K. M. Lee, Visual tracking decomposition, 2010 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'10), San Francisco, 2010, 1269–1276.
[43] B. Liu, J. Huang, L. Yang, and C. A. Kulikowski, Robust tracking using local sparse appearance model and K-selection, 2011 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'11), Colorado Springs, 2011, 1313–1320.
[44] K. Li, X. Zhao, Z. Sun, and M. Tan, Robust target detection, tracking and following for an indoor mobile robot, 2017 IEEE Conf. on Robotics and Biomimetics (ROBIO), Macao, 2017, 593–598.