Tom Vodopivec and Branko Šter

