FINDING PASSABLE REGIONS FOR ROBOTS FROM A SINGLE STILL IMAGE

J. Tian, W. Dong, and Y. Tang

References

[1] R. Swain, M. Devy, & S. Hutchinson, Sensor-based navigation in cluttered environments, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2001, 1662–1669.
[2] R. Gutierrez-Osuna, J. Janet, & R.C. Luo, Modeling of ultrasonic range sensors for localization of autonomous mobile robots, IEEE Transactions on Industrial Electronics, 45(4), 1998, 654–662.
[3] D.B. Rosen & A. Rosen, A robotic neural net based visual-sensory motor control system that reverse engineers the motor control functions of the human brain, Proc. Int. Joint Conf. on Neural Networks, Orlando, FL, 2007, 12–17.
[4] D. Scharstein & R. Szeliski, A taxonomy and evaluation of dense two-frame stereo correspondence algorithms, International Journal of Computer Vision, 47(1), 2002, 7–42.
[5] K. Kidono, J. Miura, & Y. Shirai, Autonomous visual navigation of a mobile robot using a human-guided experience, Robotics and Autonomous Systems, 40(2–3), 2002, 121–130.
[6] D. Marr, Vision (San Francisco, CA: W.H. Freeman, 1982).
[7] J.M. Loomis, Looking down is looking up, Nature, 414, 2001, 155–156.
[8] A. Saxena, S.H. Chung, & A.Y. Ng, 3-D depth reconstruction from a single still image, International Journal of Computer Vision, 76(1), 2007, 53–69.
[9] I. Ulrich & I. Nourbakhsh, Appearance-based obstacle detection with monocular color vision, Proc. AAAI Conf., Austin, TX, 2000, 866–871.
[10] E. Eade & T. Drummond, Edge landmarks in monocular SLAM, Proc. British Machine Vision Conf., 2006.
[11] L.M. Lorigo, R.A. Brooks, & W.E.L. Grimson, Visually-guided obstacle avoidance in unstructured environments, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Grenoble, France, 1997, 373–379.
[12] D. Coombs, M. Herman, T. Hong, & M. Nashman, Real-time obstacle avoidance using central flow divergence and peripheral flow, IEEE Transactions on Robotics and Automation, 14(1), 1998, 49–59.
[13] E. Royer, J. Bom, M. Dhome, B. Thuilot, M. Lhuillier, & F. Marmoiton, Outdoor autonomous navigation using monocular vision, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2005, 3395–3400.
[14] W.N. Klarquist & W.S. Geisler, Maximum likelihood depth from defocus for active vision, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 1995, 374–379.
[15] A.N. Rajagopalan, S. Chaudhuri, & M. Uma, Depth estimation and image restoration using defocused stereo pairs, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(11), 2004, 1521–1525.
[16] J. Marshall, C. Burbeck, D. Ariely, J. Rolland, & K. Martin, Occlusion edge blur: A cue to relative visual depth, Journal of the Optical Society of America A, 13, 1996, 681–688.
[17] I. Bülthoff, H. Bülthoff, & P. Sinha, Top-down influences on stereoscopic depth-perception, Nature Neuroscience, 1(3), 1998, 254–257.
[18] J.J. Gibson, The ecological approach to visual perception (Boston, MA: Houghton Mifflin Company, 1979).
[19] J. Canny, A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 1986, 679–698.
[20] C.R. Wren, A. Azarbayejani, T. Darrell, & A. Pentland, Pfinder: Real-time tracking of the human body, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 1997, 780–785.
[21] J. Weng, T.Y. Luwang, H. Lu, & X.Y. Xue, A multilayer in-place learning network for development of general invariances, International Journal of Humanoid Robotics, 4(2), 2007, 281–320.
[22] J. Yao & Z.F. Zhang, Hierarchical shadow detection for color aerial images, Computer Vision and Image Understanding, 102(1), 2006, 60–69.
[23] J.A. Shufelt, Performance evaluation and analysis of monocular building extraction from aerial imagery, IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(4), 1999, 311–326.
