DETECTION OF TRANSMISSION LINE AGAINST EXTERNAL FORCE DAMAGE BASED ON IMPROVED YOLOv3

Peng Liu, Changlin Song, Junmin Li, Simon X. Yang, Xingyu Chen, Chuanfu Liu, and Qiang Fu

References

[1] P. Zhi, F. Hong, and H. Wei, Discussion on the application of radar video surveillance in the prevention of external force destruction of transmission lines, Pioneering with Science & Technology Monthly, 29(19), 2016, 133–134.
[2] Y. Lingping, Z. Hongyu, Y. Qiang, Z. Xuyong, D. Jianhua, and Y. Hui, Study on laser monitoring and early warning system for high voltage line protection against external damage, Electric Power & Energy, 35(2), 2014, 176–178.
[3] H. Shuangde and X. Baoyu, Research and application of overhead ground line capacitance and microwave detection in out-of-break monitoring of transmission lines, Electric Technology, 19(10), 2018, 23–26.
[4] T. Yue and Z. Shuyuan, Application of image processing in the prevention of external force damage to transmission lines, Process Automation Instrumentation, 37(10), 2016, 43–45, 48.
[5] T. Huijuan, L. Junmin, and S. Changlin, Detection method of external force damage prevention of transmission lines based on image processing, Modern Computer, (3), 2016, 41–43.
[6] C. Li, X. Qu, Y. Yang, et al., High-resolution remote sensing image segmentation method based on SRelu, International Journal of Robotics and Automation, 34, 2019, 225–234.
[7] C. Li, H. Gao, Y. Yang, et al., Segmentation method of high-resolution remote sensing image for fast target recognition, International Journal of Robotics and Automation, 34, 2019, 216–224.
[8] G. Ascioglu and Y. Senol, Prediction of lower extremity joint angles using neural networks for exoskeleton robotic leg, International Journal of Robotics and Automation, 33, 2018, 141–149.
[9] A. Malek, L. Jafarian-Khaled Abad, and S. Khodayari-Samghabad, Semi-infinite programming to solve armed robot trajectory problem using recurrent neural network, International Journal of Robotics and Automation, 3, 2015, 113–118.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, IEEE Conf. on Computer Vision and Pattern Recognition, Columbus, USA, 2014, 580–587.
[11] S. Ren, K. He, R. Girshick, and J. Sun, Faster R-CNN: Towards real-time object detection with region proposal networks, IEEE Transactions on Pattern Analysis & Machine Intelligence, 39(6), 2017, 1137–1149.
[12] J. Redmon, S.K. Divvala, R.B. Girshick, and A. Farhadi, You only look once: Unified, real-time object detection, IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, 779–788.
[13] J. Redmon and A. Farhadi, YOLO9000: Better, faster, stronger, IEEE Conf. on Computer Vision and Pattern Recognition, Honolulu, USA, 2017, 6517–6525.
[14] J. Redmon and A. Farhadi, YOLOv3: An incremental improvement, arXiv:1804.02767, 2018.
[15] Y. Chunjiang, W. Wei, F. Hualin, et al., Engineering vehicle intrusion detection in transmission lines based on deep learning, Information Technology, 42(7), 2018, 36–41.
[16] D. Arthur and S. Vassilvitskii, k-means++: The advantages of careful seeding, Proceedings of the Eighteenth Annual ACM-SIAM Symp. on Discrete Algorithms, New Orleans, USA, 2007, 1027–1035.
[17] F.J. Pulgar, A.J. Rivera, F. Charte, and M.J. del Jesus, On the impact of imbalanced data in convolutional neural networks performance, Int. Conf. on Hybrid Artificial Intelligence Systems, Springer, Cham, 2017, 220–232.
[18] K. He, X. Zhang, S. Ren, et al., Deep residual learning for image recognition, IEEE Conf. on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, 770–778.
[19] T.Y. Lin, P. Dollar, R. Girshick, et al., Feature pyramid networks for object detection, IEEE Conf. on Computer Vision and Pattern Recognition, Honolulu, USA, 2017, 936–944.
[20] S. Liu, L. Qi, H. Qin, et al., Path aggregation network for instance segmentation, IEEE Conf. on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018, 8759–8768.
[21] S. Woo, J. Park, J.-Y. Lee, and I.S. Kweon, CBAM: Convolutional block attention module, arXiv:1807.06521, 2018.
[22] J. Hu, L. Shen, and G. Sun, Squeeze-and-excitation networks, IEEE Conf. on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018, 7132–7141.
[23] P. Purkait, C. Zhao, and C. Zach, SPP-Net: Deep absolute pose regression with synthetic views, arXiv:1712.03452, 2017.
