RESEARCH ON ELEVATOR GROUP SCHEDULING STRATEGY AND SIMULATION BASED ON REINFORCEMENT LEARNING ALGORITHM, 251–259.

Rui Tian and Weimin Gao

References

[1] J.F. Du, Analysis on the types of network group events (Vienna, Austria: International Press, 2009).
[2] Z. Yang and C. Shao, The current situation and development direction of elevator group control technology, Control and Decision, 20(12), 2005, 1321–1331.
[3] L. Li, R. Zhu, L. Sui, Y. Li, M. Xu, and H. Fan, Overview of reinforcement learning methods for intelligent cluster systems, Journal of Computer Science, 2023, 1–24. http://kns.cnki.net/kcms/detail/11.1826.TP.20230713.1500.004.html
[4] J.R. Zhang, H.Y. Li, J.R. Miao, Y. Wang, and H.L. Zhang, Elevator group control system scheduling model and its improved ADMM decomposition algorithm, Control and Decision, 38(01), 2023, 39–48.
[5] J. Tavoosi, A novel recurrent type-2 fuzzy neural network for stepper motor control, Mechatronic Systems and Control, 49(1), 2021, 30–35.
[6] J.Q. Dong, Research on dispatching optimization of elevator group control system for large public buildings (Suzhou: Suzhou University of Science and Technology, 2022). DOI: 10.27748/d.cnki.gszkj.2022.000150.
[7] A. Kumar, Reinforcement learning: Application and advances towards stable control strategies, Mechatronic Systems and Control, 51(1), 2023, 53–57.
[8] Q.C. Bian, Application of elevator group control system based on improved ABC algorithm in intelligent buildings, Journal of Jiamusi University (Natural Science Edition), 41(01), 2023, 63–67 (in Chinese).
[9] C. Yuan and L.B. Sun, Optimization design and simulation of intelligent elevator group control system, Industrial Control Computer, 36(06), 2023, 65–67+70 (in Chinese).
[10] G.C. Li, Application of intelligent technology in elevator control systems, Today’s Manufacturing and Upgrading, (09), 2022, 52–55.
[11] K. Kurosawa and K. Hirasawa, Intelligent and supervisory control for elevator group, Transactions of Information Processing Society of Japan, 26(2), 1985, 278–287.
[12] S. Hikita, Service limitation of elevator: analysis of optimal operation by SA method, Proc. IEEJ Annual Conf., 1986, 1931–1932.
[13] A. Fujino, T. Tobita, and K. Yoneda, An on-line tuning method for multi-objective control of elevator group, Proc. of IECON’92, San Diego, CA, 1992, 795–800. DOI: 10.1109/IECON.1992.254529.
[14] T. Zhang and L. Shi, Fault analysis of transmission line based on big data algorithm, Mechatronic Systems and Control, 50(4), 2022, 216–223.
[15] H. Kitano, Genetic algorithm (Tokyo: Sangyo Tosho K.K., 1993), 328–330.
[16] A. Fujino, T. Tobita, K. Segawa, K. Yoneda, and A. Togawa, An elevator group control system with floor attribute control method and system optimization using genetic algorithms, IEEE Transactions on Industrial Electronics, 1997, 546–552. DOI: 10.1109/IECON.1995.484173.
[17] J.-H. Kim and B.-R. Moon, Adaptive elevator group control with cameras, IEEE Transactions on Industrial Electronics, 48(2), 2001, 377–382.
[18] S. Takahashi, H. Kita, H. Suzuki, T. Sudo, and S. Markon, Simulation-based optimization of a controller for multi-car elevators using a genetic algorithm for noisy fitness function, Proc. of the 2003 Congress on Evolutionary Computation (CEC 2003), Canberra, Australia, 2003. DOI: 10.1109/cec.2003.1299861.
[19] F.X. Guo, Application of neural network in elevator group control technology (Harbin: Harbin Engineering University, 2018) (in Chinese).
[20] Y. Gao, J.K. Hu, B.N. Wang, and D.L. Wang, Elevator group control dispatching based on CMAC network reinforcement learning, Journal of Electronics, (02), 2007, 362–368 (in Chinese).
[21] G.S. Xing, Research on elevator dynamic scheduling strategy based on reinforcement learning algorithm (Tianjin: Tianjin University, 2006).
[22] J. Sawma, F. Khatounian, E. Monmasson, R. Ghosn, and L. Idkhajine, The effect of prediction horizons in MPC for first order linear systems, Proc. 2018 IEEE International Conf. on Industrial Technology (ICIT), Lyon, 2018, 316–321. DOI: 10.1109/ICIT.2018.8352196.
[23] R.S. Sutton and A.G. Barto, Reinforcement learning: An introduction, 2nd ed. (Cambridge, MA: MIT Press, 2018).
[24] A. Panin and P. Shvechikov, Practical reinforcement learning (Moscow: Coursera and National Research University Higher School of Economics, 2017).
[25] H. Yu, Research on fresh product logistics transportation scheduling based on deep reinforcement learning, Scientific Programming, 2022, 2022, 8750580.
[26] K. Chen, J. Lei, and B. Li, The research on evasion strategy of unpowered aircraft based on deep reinforcement learning, Journal of Physics: Conference Series, 2252, 2022.
[27] L. Zheng, M. Liu, S. Zhang, and J. Lan, A novel sensor scheduling algorithm based on deep reinforcement learning for bearing-only target tracking in UWSNs, Journal of Automation: English Edition, 10(4), 2023, 1077–1079. DOI: 10.1109/JAS.2023.123159.
[28] S.J. Jerry and F. Farbod, A neural network model-based control method for a class of discrete-time nonlinear systems, Mechatronic Systems and Control, 49(2), 2021, 93–100.
[29] Z. Yaghoubi and H.A. Talebi, Consensus tracking for nonlinear fractional-order multi-agent systems using adaptive sliding mode controller, Mechatronic Systems and Control, 47(4), 2019, 194–200.
