T. Iwase, A. Nakamura, and Y. Kuno (Japan)
Keywords: Computational intelligence, interdisciplinary applications of computer science and technology, wheelchair, intention understanding, speech interface, mobile robot, computer vision
With the increase in the number of senior citizens, there is a growing demand for human-friendly wheelchairs as mobility aids. Speech is a desirable interface modality for such wheelchairs. However, if simple commands such as "Right" are allowed, the user may expect different actions from the wheelchair for the same voice command depending on the situation. This paper presents a robotic wheelchair with a speech interface that can understand the user's intention in speech using environmental information obtained from range sensors. Even when the user issues only a simple voice command without details, the wheelchair takes the appropriate action that the user expects. We have developed a working system, and experimental results using the system confirm the usefulness of our approach.
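As a rough illustration of the idea described in the abstract, the sketch below shows one hypothetical way a simple command such as "Right" could be disambiguated using range-sensor context. The `openings` representation, the bearing thresholds, and the action names are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: resolving the ambiguous command "Right" using
# environment features derived from range-sensor data. An "opening"
# (e.g. a doorway) is given a bearing in degrees (0 = straight ahead,
# positive = to the right) and a distance in meters.

def interpret_command(command, openings):
    """Map a simple voice command plus detected openings to an action."""
    if command == "right":
        # If a doorway is detected on the user's right, assume the user
        # intends to enter it; otherwise treat "Right" as a plain turn.
        right_openings = [o for o in openings
                          if 30 <= o["bearing"] <= 120]
        if right_openings:
            # Head for the nearest opening on the right.
            target = min(right_openings, key=lambda o: o["distance"])
            return ("enter_opening", target)
        # No opening nearby: interpret the command as a 90-degree turn.
        return ("turn_in_place", 90)
    raise ValueError(f"unhandled command: {command}")
```

The same spoken word thus yields different wheelchair behavior depending on what the sensors report about the surroundings, which is the core of the intention-understanding approach the paper describes.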