TWO-STAGE FRAME MATCHING IN VSLAM BASED ON FEATURE EXTRACTION WITH ADAPTIVE THRESHOLD FOR INDOOR TEXTURE-LESS AND STRUCTURE-LESS

Ding Wang, Jing Wang, and Huan Wang

References

  [1] B. Han and L. Xu, MLC-SLAM: Mask loop closing for monocular SLAM, International Journal of Robotics and Automation, 37(1), 2022, 107–114.
  [2] S. Badalkhani, R. Havangi, and M. Farshad, An improved simultaneous location and mapping for environments, International Journal of Robotics and Automation, 36(6), 2021, 374–382.
  [3] S. Wen, Z. Wang, J. Chen, L. Manfredi, and Y. Tong, CSLAM system consensus estimation in dynamic communication networks, International Journal of Robotics and Automation, 37(2), 2022, 227–235.
  [4] Z. Qiguang, P. Yingchun, Y. Mei, and C. Weidong, CEH∞F-SLAM: A robust and Jacobian-free solution to SLAM problem, International Journal of Robotics and Automation, 34(1), 2019, 808–820.
  [5] J. Ni, Y. Chen, K. Wang, and S.X. Yang, An improved vision-based SLAM approach inspired from animal spatial cognition, International Journal of Robotics and Automation, 34(5), 2019, 491–502.
  [6] R.G. von Gioi, J. Jakubowicz, J. Morel, and G. Randall, LSD: A fast line segment detector with a false detection control, IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(4), 2010, 722–732.
  [7] D. Van Opdenbosch and E. Steinbach, Collaborative visual SLAM using compressed feature exchange, IEEE Robotics and Automation Letters, 4(1), 2019, 57–64.
  [8] R.A. Newcombe, S. Izadi, and O. Hilliges, KinectFusion: Real-time dense surface mapping and tracking, Proc. 2011 10th IEEE International Symposium on Mixed and Augmented Reality, Basel, 2011, 127–136.
  [9] X. Zhu, Q. Cao, and Y. Yang, An improved KinectFusion 3D reconstruction algorithm, Robot, 36(2), 2014, 129–136.
  [10] H. Zhou, D. Zou, and L. Pei, StructSLAM: Visual SLAM with building structure lines, IEEE Transactions on Vehicular Technology, 64(4), 2015, 1364–1375.
  [11] A. Pumarola, A. Vakhitov, and A. Agudo, PL-SLAM: Real-time monocular visual SLAM with points and lines, Proc. 2017 IEEE International Conf. on Robotics and Automation, Singapore, 2017, 4503–4508.
  [12] R. Gomez-Ojeda, F. Moreno, and D. Zuñiga-Noël, PL-SLAM: A stereo SLAM system through the combination of points and line segments, IEEE Transactions on Robotics, 35(3), 2019, 734–746.
  [13] T. Lee, C. Kim, and D.D. Cho, A monocular vision sensor-based efficient SLAM method for indoor service robots, IEEE Transactions on Industrial Electronics, 66(1), 2019, 318–328.
  [14] Q. Fu, H. Yu, and L. Lai, A robust RGB-D SLAM system with points and lines for low texture indoor environments, IEEE Sensors Journal, 19(21), 2019, 9908–9920.
  [15] B. Triggs, P.F. McLauchlan, and R.I. Hartley, Bundle adjustment - A modern synthesis, Proc. International Workshop on Vision Algorithms: Theory and Practice, Berlin, Heidelberg, 1999, 298–372.
  [16] K. Liu, H. Sun, and P. Ye, Research on bundle adjustment for visual SLAM under large-scale scene, Proc. 2017 4th International Conf. on Systems and Informatics, Hangzhou, 2017, 220–224.
  [17] A. Eudes, M. Lhuillier, and S. Naudet-Collette, Fast odometry integration in local bundle adjustment-based visual SLAM, Proc. 20th International Conf. on Pattern Recognition, Istanbul, 2010, 290–293.
  [18] P. Dollár, R. Appel, and S. Belongie, Fast feature pyramids for object detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(8), 2014, 1532–1545.
  [19] N.K. Ratha, J.H. Connell, and R.M. Bolle, An analysis of minutiae matching strength, Proc. International Conf. on Audio- and Video-Based Biometric Person Authentication, Berlin, Heidelberg, 2001, 223–228.
  [20] M. Muja and D.G. Lowe, Fast matching of binary features, Proc. 2012 Ninth Conf. on Computer and Robot Vision, Toronto, ON, 2012, 404–410.
  [21] S. Choi, T. Kim, and W. Yu, Performance evaluation of RANSAC family, Proc. British Machine Vision Conf., London, 2009.
  [22] W. Lin, M.M. Cheng, and J. Lu, Bilateral functions for global motion modeling, Proc. European Conf. on Computer Vision, Cham, 2014, 341–356.
