Zhenyu Liu, Jing Wang, and Fuli Zhang