MULTI-OBJECT GRASPING DETECTION BASED ON THE IMPROVED SHUFFLENET NETWORK

Yang Jiang, Xuejiao Zhang, and Bin Zhao
