FINDING PASSABLE REGIONS FOR ROBOTS FROM A SINGLE STILL IMAGE

J. Tian,∗,∗∗ W. Dong,∗ and Y. Tang∗

References

[1] R. Swain, M. Devy, & S. Hutchinson, Sensor-based navigation in cluttered environments, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2001, 1662–1669.
[2] R. Gutierrez-Osuna, J. Janet, & R.C. Luo, Modeling of ultrasonic range sensors for localization of autonomous mobile robots, IEEE Transactions on Industrial Electronics, 45(4), 1998, 654–662.
[3] D.B. Rosen & A. Rosen, A robotic neural net based visual-sensory motor control system that reverse engineers the motor control functions of the human brain, Proc. Int. Joint Conf. on Neural Networks, Orlando, Florida, USA, 2007, 12–17.
[4] D. Scharstein & R. Szeliski, A taxonomy and evaluation of dense two-frame stereo correspondence algorithms, International Journal of Computer Vision, 47(1), 2002, 7–42.
[5] K. Kidono, J. Miura, & Y. Shirai, Autonomous visual navigation of a mobile robot using a human-guided experience, Robotics and Autonomous Systems, 40(2–3), 2002, 121–130.
[6] D. Marr, Vision (San Francisco, CA: W.H. Freeman, 1982).
[7] J.M. Loomis, Looking down is looking up, Nature, 414, 2001, 155–156.
[8] A. Saxena, S.H. Chung, & A.Y. Ng, 3-D depth reconstruction from a single still image, International Journal of Computer Vision, 76(1), 2007, 53–69.
[9] I. Ulrich & I. Nourbakhsh, Appearance-based obstacle detection with monocular color vision, Proc. AAAI Conf., Austin, TX, 2000, 866–871.
[10] E. Eade & T. Drummond, Edge landmarks in monocular SLAM, Proc. British Machine Vision Conf., 2006.
[11] L.M. Lorigo, R.A. Brooks, & W.E.L. Grimson, Visually-guided obstacle avoidance in unstructured environments, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Grenoble, France, 1997, 373–379.
[12] D. Coombs, M. Herman, T. Hong, & M. Nashman, Real-time obstacle avoidance using central flow divergence and peripheral flow, IEEE Transactions on Robotics and Automation, 14(1), 1998, 49–59.
[13] E. Royer, J. Bom, M. Dhome, B. Thuilot, M. Lhuillier, & F. Marmoiton, Outdoor autonomous navigation using monocular vision, Proc. Int. Conf. on Intelligent Robots and Systems, 2005, 3395–3400.
[14] W.N. Klarquist & W.S. Geisler, Maximum likelihood depth from defocus for active vision, Proc. IEEE Conf. on Intelligent Robots and Systems, 1995, 374–379.
[15] A.N. Rajagopalan, S. Chaudhuri, & M. Uma, Depth estimation and image restoration using defocused stereo pairs, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(11), 2004, 1521–1525.
[16] J. Marshall, C. Burbeck, D. Ariely, J. Rolland, & K. Martin, Occlusion edge blur: A cue to relative visual depth, Journal of the Optical Society of America A, 13, 1996, 681–688.
[17] I. Bülthoff, H. Bülthoff, & P. Sinha, Top-down influences on stereoscopic depth-perception, Nature Neuroscience, 1(3), 1998, 254–257.
[18] J.J. Gibson, The ecological approach to visual perception (Boston, MA: Houghton Mifflin Company, 1979).
[19] J.F. Canny, A computational approach to edge detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), 1986, 679–698.
[20] C.R. Wren, A. Azarbayejani, T. Darrell, & A. Pentland, Pfinder: Real-time tracking of the human body, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 1997, 780–785.
[21] J. Weng, T.Y. Luwang, H. Lu, & X.Y. Xue, A multilayer in-place learning network for development of general invariances, International Journal of Humanoid Robotics, 4(2), 2007, 281–320.
[22] J. Yao & Z.F. Zhang, Hierarchical shadow detection for color aerial images, Computer Vision and Image Understanding, 102(1), 2006, 60–69.
[23] J.A. Shufelt, Performance evaluation and analysis of monocular building extraction from aerial imagery, IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(4), 1999, 311–326.
