EXTENDED DEEP DOWNSAMPLING NETWORK WITH MULTI-INTERACTIVE REFINEMENT FOR UNDERWATER SALIENT OBJECT DETECTION

Weiliang Huang and Daqi Zhu

References

  [1] H. Imani, M. B. Islam, and L.-K. Wong, “Saliency-aware stereoscopic video retargeting,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
  [2] A. Siris et al., “Scene context-aware salient object detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
  [3] H. Abidi et al., “Saliency based robust features for global visual servoing,” International Journal of Robotics and Automation, vol. 31, no. 5, pp. 206–214, 2016.
  [4] S. Wang, J. Chang, Z. Wang, et al., “Content-aware rectified activation for zero-shot fine-grained image retrieval,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 46, no. 6, pp. 4366–4380, 2024.
  [5] H. Guo et al., “In-context matting,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
  [6] H. Chen, F. Shen, D. Ding, Y. Deng, and C. Li, “Disentangled cross-modal transformer for RGB-D salient object detection and beyond,” IEEE Transactions on Image Processing, vol. 33, pp. 1699–1709, 2024.
  [7] D. Semani, M. Chambah, and P. Courtellemont, “Processing of underwater colour images applied to live aquarium videos,” International Journal of Robotics and Automation, vol. 20, no. 2, pp. 123–130, 2005.
  [8] N. Kumar et al., “Saliency subtraction inspired automated event detection in underwater environments,” Cognitive Computation, vol. 12, no. 1, pp. 115–127, 2020.
  [9] Q. Wang, Y. Zhang, and B. He, “Intelligent marine survey: Lightweight multi-scale attention adaptive segmentation framework for underwater target detection of AUV,” IEEE Transactions on Automation Science and Engineering, vol. 22, pp. 1913–1927, 2025.
  [10] Z. Chen et al., “Underwater salient object detection by combining 2D and 3D visual features,” Neurocomputing, vol. 391, pp. 249–259, 2020.
  [11] M. Yan et al., “A novel segmentation method based on grayscale wave for underwater images,” International Journal of Robotics and Automation, vol. 33, no. 4, 2018.
  [12] L. Hong et al., “USOD10K: A new benchmark dataset for underwater salient object detection,” IEEE Transactions on Image Processing, 2023.
  [13] B. Guo, H. Dai, and Z. Li, “Visual saliency-based motion detection technique for mobile robots,” International Journal of Robotics and Automation, vol. 32, no. 2, 2017.
  [14] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  [15] J.-X. Zhao et al., “EGNet: Edge guidance network for salient object detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
  [16] C. Du and P. X. Liu, “A real-time MRI tumour segmentation method based on lightweight network for imaging robotic systems,” International Journal of Robotics and Automation, vol. 39, no. 3, 2024.
  [17] S. Ren et al., “Cultivated land segmentation of remote sensing image based on PSPNet of attention mechanism,” International Journal of Robotics and Automation, vol. 37, no. 1, pp. 11–19, 2022.
  [18] Y.-H. Wu et al., “EDN: Salient object detection via extremely-downsampled network,” IEEE Transactions on Image Processing, vol. 31, pp. 3125–3136, 2022.
  [19] J.-J. Liu et al., “PoolNet+: Exploring the potential of pooling for salient object detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 1, pp. 887–904, 2022.
  [20] M. Ma, C. Xia, and J. Li, “Pyramidal feature shrinking for salient object detection,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2021.
  [21] T. Zhao and X. Wu, “Pyramid feature attention network for saliency detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
  [22] Z. Luo et al., “Non-local deep features for salient object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  [23] Z. Deng et al., “R3Net: Recurrent residual refinement network for saliency detection,” in Proceedings of the 27th International Joint Conference on Artificial Intelligence, (Menlo Park, CA, USA), AAAI Press, 2018.
  [24] X. Hu et al., “Recurrently aggregating deep features for salient object detection,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2018.
  [25] T. Wang et al., “A stagewise refinement model for detecting salient objects in images,” in Proceedings of the IEEE International Conference on Computer Vision, 2017.
  [26] M. A. Islam et al., “Salient object detection using a context-aware refinement network,” in Proceedings of the British Machine Vision Conference (BMVC), 2017.
  [27] M. J. Islam, R. Wang, and J. Sattar, “SVAM: Saliency-guided visual attention modeling by autonomous underwater robots,” arXiv preprint arXiv:2011.06252, 2020.
  [28] R. Chen et al., “A robust object segmentation network for underwater scenes,” in ICASSP 2022 – IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2022.
  [29] Z. Zheng et al., “CoralSCOP: Segment any coral image on this planet,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
  [30] T. Yan et al., “MAS-SAM: Segment any marine animal with aggregated features,” arXiv preprint arXiv:2404.15700, 2024.
  [31] J. Jin et al., “Underwater salient object detection via dual-stage self-paced learning and depth emphasis,” IEEE Transactions on Circuits and Systems for Video Technology, 2024.
  [32] G. Yuan, J. Song, and J. Li, “IF-USOD: Multimodal information fusion interactive feature enhancement architecture for underwater salient object detection,” Information Fusion, vol. 117, p. 102806, 2025.
  [33] Y. Pang et al., “Multi-scale interactive network for salient object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  [34] P. Zhang et al., “Amulet: Aggregating multi-level convolutional features for salient object detection,” in Proceedings of the IEEE International Conference on Computer Vision, 2017.
  [35] P.-T. de Boer et al., “A tutorial on the cross-entropy method,” Annals of Operations Research, vol. 134, pp. 19–67, 2005.
  [36] Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, (Pacific Grove, CA, USA), pp. 1398–1402, vol. 2, 2003.
  [37] G. Máttyus, W. Luo, and R. Urtasun, “DeepRoadMapper: Extracting road topology from aerial images,” in Proceedings of the IEEE International Conference on Computer Vision, 2017.
  [38] L. Wang et al., “Learning to detect salient objects with image-level supervision,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  [39] M. J. Islam, P. Luo, and J. Sattar, “Simultaneous enhancement and super-resolution of underwater imagery for improved visual perception,” arXiv preprint arXiv:2002.01155, 2020.
  [40] M. J. Islam et al., “Semantic segmentation of underwater imagery: Dataset and benchmark,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020.
  [41] X. Qin et al., “BASNet: Boundary-aware salient object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
  [42] Z. Wu, L. Su, and Q. Huang, “Cascaded partial decoder for fast and accurate salient object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
  [43] Z. Zhao et al., “Complementary trilateral decoder for fast and accurate salient object detection,” in Proceedings of the 29th ACM International Conference on Multimedia, 2021.
  [44] N. Liu and J. Han, “DHSNet: Deep hierarchical saliency network for salient object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  [45] X. Zhao et al., “Suppress and balance: A simple gated network for salient object detection,” in Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II, Springer, 2020.
  [46] Z. Chen et al., “Global context-aware progressive aggregation network for salient object detection,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
  [47] J. Wei et al., “Label decoupling framework for salient object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.
  [48] N. Liu, J. Han, and M.-H. Yang, “PiCANet: Learning pixel-wise contextual attention for saliency detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  [49] J.-J. Liu et al., “A simple pooling-based design for real-time salient object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019.
  [50] Z. Wu, L. Su, and Q. Huang, “Stacked cross refinement network for edge-aware salient object detection,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.
  [51] Y. Wang et al., “Pixels, regions, and objects: Multiple enhancement for salient object detection,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
  [52] Z. Yao and W. Gao, “Iterative saliency aggregation and assignment network for efficient salient object detection in optical remote sensing images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1–13, 2024.
  [53] G. Zhu, J. Li, and Y. Guo, “Separate first, then segment: An integrity segmentation network for salient object detection,” Pattern Recognition, vol. 150, p. 110328, 2024.
  [54] B. Liang and H. Luo, “MEANet: An effective and lightweight solution for salient object detection in optical remote sensing images,” Expert Systems with Applications, vol. 238, p. 121778, 2024.