Shengnan Gao, Na Zhang, Yingying Liu, and Juan Pan