Jianxi Yang, Chaoshun Yu, Shixin Jiang, Di Wang, and Hao Li