[1]. On the basis of overall visual ability, the grades of vision function were divided into five levels by the WHO in 1973: low vision level 1, low vision level 2, low vision level 3, blind level 1 and blind level 2.

Classification based on the type of vision impairment can provide better guidance for the design and development of assistive devices. By consulting clinical ophthalmologists and examining the literature, we summarize the categorization by type of vision impairment as follows: (1) decreased light sensitivity, (2) blurred vision (caused by retinal anomaly or refractive error), (3) vision loss and (4) total blindness. One eye disease may lead to multiple kinds of vision impairment at the same time, so we generate simulated images in the computer and summarize them in Fig. 2. As shown in the first row of Fig. 2, people with early-stage glaucoma lose their peripheral visual field, and tubular vision slowly appears as the disease deteriorates. The second row of Fig. 2 demonstrates that the visual impairment of AMD is mainly manifested as central vision loss. RP is an incurable eye disease, and the eyesight of persons with RP worsens as the disease progresses (the third row of Fig. 2). Uncorrected refractive errors can be corrected with a diopter lens (the last row of Fig. 2). Cataract is excluded from Fig. 2 because it can be treated. These simulated images (Fig. 2) can guide the design and development of assistive devices.
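Such a simulation is straightforward to reproduce. Below is a minimal sketch, assuming the frame is held as an H x W x 3 uint8 NumPy array; the function and its parameters are illustrative rather than the procedure actually used to produce Fig. 2:

    import numpy as np

    def simulate_tunnel_vision(image, radius_frac=0.25, softness=40.0):
        # Darken the periphery of an RGB frame to mimic the tubular vision
        # associated with late-stage glaucoma (illustrative parameters).
        h, w = image.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        dist = np.hypot(yy - h / 2.0, xx - w / 2.0)  # distance from centre
        # Smooth mask: ~1 inside the preserved tunnel, fading to 0 outside.
        mask = 1.0 / (1.0 + np.exp((dist - radius_frac * min(h, w)) / softness))
        return (image * mask[..., None]).astype(np.uint8)

Shrinking radius_frac then mimics the progressive loss of the peripheral field.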
4. Substitutive Sense for Visual Perception

Vision impairments alter the perception mechanism of VIPs and blind people. Owing to the total or partial absence of visual perception, VIPs and blind people become more dependent on other senses such as somatosense and audition.

Based on the literature survey, we summarize a tree diagram of the existing substitutive senses for visual perception (Fig. 3). VIPs can see the outside world by means of vision-enhancement techniques. One research group at Harvard University focused on vision-enhancement techniques and used them to expand the visual field of VIPs [24], [25]. Hu et al. [26] attempted to develop a see-through glass with a threshold-based enhancement algorithm to assist people with night blindness. The visual prosthesis, one of the substitutive senses, directly displays feedback information on the visual cortex of the human brain by exploiting the phosphene phenomenon [27], [28]. We do not discuss this feedback form because it involves issues of medical research, which is beyond the scope of this survey. Readers can refer to the literature [29] for more information on visual prostheses. Thermal feedback, one of the somatosenses, can exploit temperature fluctuations on the human body surface to remind users of changes in the external environment. Lécuyer [30] designed a virtual reality system for VIPs to explore virtual environments, using thermal feedback generated by 12 infrared lamps to simulate the virtual sun. Thermal feedback is highly influenced by ambient temperature, and it is therefore difficult for users to perceive in some circumstances. Olfaction and gustation are two rare chemical feedback approaches, and they are seldom applied in assistive devices.

VIPs and blind people mainly adopt audition and tactus to take in information from the outside world and then process it to shape an accurate worldview that helps them understand life and make wise decisions. The majority of assistive devices use audition and tactus among all feedback methods. In the following sections, we review these feedback modes in detail.

Spatial reference frames are important because of their relevance to navigation and mental mapping for VIPs and blind people [31]. We can refer to Hall's extra-personal space definition [32] to select a suitable substitutive sense for visual perception. Figure 4 illustrates the sensing ranges of these substitutive senses at different spatial scales [33].

Figure 3. Tree diagram of substitutive senses for visual perception.

Figure 4. Hall's extra-personal space definition with minor revision for personal space.

Furthermore, Tversky [34] proposed more complex and efficient spatial thinking models. In one of her reports, she stated that there is a mental space in addition to the external space. Mental space is constructed from what we perceive, aided by what we think and infer, in the service of action in the world or imagined in the mind [35]. Spatial thinking in mental space can help VIPs and blind people create representations of a real-world space. More detailed work conducted by Pasqualotto et al. [36] showed differences between congenitally blind and late-blind people in their spatial reference frame preferences. Moreover, the same research group gave visual-like exposure to those who cannot see a room and thus provided allocentric reference frame information using auditory devices [37]. Hence, an understanding of mental space may benefit the design of assistive devices.

4.1 Audition

The term audition is used here to cover all auditory perception means in assistive devices. This summative term may not be comprehensive, but it can serve as a reference for relevant researchers and interested readers. The sound processing speed of VIPs and blind people is faster than that of sighted people [38]. Moreover, the auditory memory and retrieval abilities of congenitally blind people are superior to those of sighted people [39]. Results of a questionnaire survey showed that blind people in Iran are more inclined to use audio media than other media to access information [40]. Similar research was conducted by Kolarik et al. [41], who found that VIPs outperformed sighted people in three cases: (1) following a conversation that switched from one person to another, (2) locating multiple speakers and (3) separating speech from music.

These findings accord with the perceptual enhancement hypothesis, namely that VIPs and blind people attempt to develop the abilities of other senses to compensate for visual impairment [42], [43]. A recent survey concluded that people blind from an early age show superior spatial hearing in the horizontal plane, but their performance in the vertical plane is unsatisfactory [44]. The spatial sound resolution of blind people is relatively low when they use an allocentric frame of reference. Besides, compared with early-onset blind individuals, late-onset blind people perform better in spatial hearing. This indicates that early visual experience is of great significance for the development of spatial hearing.
Although VIPs and blind individuals exhibit better auditory processing ability, their brain regions related to language processing are degraded [45]. This may be attributed to the fact that they seldom participate in social activities. The improvement of the auditory abilities of VIPs and blind people is targeted, and it requires a lengthy time to learn how to perceive the outside world using audition instead of vision.

In Fig. 3, we classify auditory feedback into two categories, viz. speech and nonspeech. The principle of speech feedback is to convert ambient information into linguistic information [46]; VIPs and blind people then receive speech instructions via an earphone or speaker. Speech feedback is simple and intuitive, and the user can understand it without any learning process. Nevertheless, in some situations, speech feedback takes a long time to describe the surrounding circumstances, and the user will feel annoyed and irritated [47]. Furthermore, delays in receiving information can even cause irreversible accidents. Nonspeech feedback alerts the user using music, environmental sounds or artificial sounds [48]. In recent years, investigators have designed a variety of nonspeech cues such as spindexes [49], spearcons [50] and audemes [51] to meet different application requirements. Although a nonspeech interface requires a learning process, it can convey information to users quickly, which addresses the deficiencies of speech feedback. Research carried out by Hussain et al. has validated this statement [52]-[54].
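As a concrete illustration of nonspeech feedback, the sketch below maps obstacle distance to a short sine tone whose pitch and duration rise as the obstacle approaches. This is only one simple encoding under assumed parameter ranges, not the design of any of the cue sets cited above:

    import numpy as np

    def distance_to_tone(distance_m, max_range_m=3.0, sample_rate=16000):
        # Closer obstacles yield a higher pitch and a longer beep.
        clamped = min(max(distance_m, 0.0), max_range_m)
        proximity = 1.0 - clamped / max_range_m
        freq = 300.0 + 900.0 * proximity     # 300 Hz (far) .. 1200 Hz (near)
        duration = 0.05 + 0.15 * proximity   # 50 ms (far) .. 200 ms (near)
        t = np.arange(int(sample_rate * duration)) / sample_rate
        return (0.5 * np.sin(2.0 * np.pi * freq * t)).astype(np.float32)

Because such a cue lasts a fraction of a second, it avoids the delay problem of sentence-length speech descriptions noted above.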
4.2 Tactus

Tactus or haptic perception [55], one of the somatosenses, can be further separated into three parts, namely touch feeling, vibration and electric stimulus feeling (Fig. 3).

It is difficult to distinguish the concepts of touch feeling and vibration. In our opinion, touch feeling is the sensation given by the texture of an object when we stroke or touch it, whereas vibration is the sensation caused by external forces. Because the stimulation amount of touch feeling is less than that of vibration, few investigators have applied touch feeling as feedback in assistive devices. Our research group used an electrical compass and a servo-driven pointer to develop an indoor localization system [56]. This system can give direction information to the user through touch stimulation. Electric stimulus feeling arises from electrical stimulation and can be used for a visual prosthesis. Like audition, tactus is commonly used in feedback interfaces for assistive devices.

Heller et al. [57] systematically investigated haptic patterns perceived by blind individuals. They stated that tactus is a crucial sense that can substitute for vision. Experiments conducted by Occelli et al. [58] show that people with early-onset blindness display greater haptic sensitivity than the sighted. They also validated the hypothesis that people who lose vision early can recognize objects by haptic perception regardless of spatial transformations. Picard et al. [59] invited children, adolescents and young adults to compare their haptic memory capacities; the results demonstrated that haptic memory is an age-related skill. Carpio et al. [60] found no significant difference between blind and sighted school students in content acquisition or aesthetic appreciation of images. This indicates that blind people can experience the world through haptic perception and eventually reach the same cognitive level as sighted people. The research findings of Puspitawati et al. [61] showed that, compared with VIPs with slight visual impairment, people with total blindness process haptic information faster. This may further illustrate that, for VIPs and blind people, dependence on tactile perception increases with the severity of visual impairment. Therefore, the feedback module of an assistive device can be designed to meet the needs of people with varying degrees of visual impairment.

5. Assistive Devices for Blind and Visually Impaired Persons

Assistive technology, one of the information accessibility technologies, has attracted considerable attention worldwide owing to its remarkable social significance [4], [62]. Over the past decade, a variety of assistive devices have been developed for the functional assistance of VIPs and blind people. We summarize these devices in the following sections. In Tables 1 and 2, although several assistive devices offer the same functionality, there are differences in the types of sensors used, feedback modes, hardware frameworks and data processing algorithms. Validation experiments are important for assistive devices, and investigators therefore design different experiments aiming to verify their feasibility and reliability in completing specific tasks.

Table 1. Summary of Assistive Canes for VIPs and Blind People: Sensors Used, Feedback Produced and Validation Methods

Study | Sensor | Feedback | Functionality | Validation
Gupta et al. [76] | Ultrasonic sensor; GPS receiver | Audition | Navigation | Tested in computer
Fan et al. [86] | Ultrasonic sensor; GPS receiver; RGB-D camera | Audition; vibration | Navigation | Tested in outdoor open area
Silva and Dias [90] | Ultrasonic sensor; inertial measurement unit | Audition | Obstacle detection | Tested by obstacles in the path
Kumar et al. [75] | Ultrasonic sensor | Audition | Obstacle and pothole detection | Tested by 10 volunteers
Majeed and Baadel [73] | RGB camera with 270° lens | Audition | Facial recognition | Tested in database
Satpute et al. [91] | Ultrasonic sensor; GPS receiver | Audition; vibration | Navigation; obstacle detection | None
Rizzo et al. [92] | Adaptive mobility devices | Vibration | Drop-off and obstacle detection | Tested by 6 adults
Shah et al. [77] | Ultrasonic sensor | Audition | Navigation; obstacle and pothole detection | None
Sharma et al. [78] | Ultrasonic sensor | Audition; vibration | Static and dynamic obstacle detection | Tested in real-time environment
Krishnan et al. [81] | Ultrasonic sensor; GPS receiver; RGB camera | Audition | Navigation; obstacle detection | Tested in database
Bolgiano and Meeks [70] | Laser | Audition; vibration | Obstacle detection | None
Sugimoto et al. [93] | Ultrasonic sensor; GPS receiver | Vibration | Navigation; obstacle detection | Tested in preset scenarios
Wankhade et al. [94] | Infrared sensor | Audition; vibration | Obstacle detection | None
Kassim et al. [88] | RFID network; digital compass | Audition | Indoor navigation | Tested by the mobile robot and human subject
Vera et al. [71] | RGB camera; laser pointer | Vibration | Obstacle detection | Tested by 16 sighted persons
Alwis and Samarawickrama [95] | Ultrasonic sensor | Audition; vibration | Obstacle detection | None
Pisa et al. [89] | FMCW radar | None | Obstacle detection | Tested by obstacles at different distances
Buchs et al. [80] | Infrared sensors | Audition; vibration | Waist-up obstacle detection | Tested by trained blind participants
Pinto et al. [96] | Ultrasonic sensor; GPS receiver | Audition; vibration | Obstacle detection | Tested by obstacles at different distances
Ye et al. [74] | 3D camera | Audition | Obstacle detection; pose estimation | Validated by data from various indoor scenes
Dang et al. [72] | Linear laser; RGB camera; inertial measurement unit | Audition | Obstacle detection and recognition | Validated by obstacles of various heights, types and distances
Niitsu et al. [82] | Ultrasonic sensor; infrared sensor; compass; tri-axial accelerometer | Audition (bone conduction) | Obstacle detection | Examined 20 times by 1 user
Takizawa et al. [87] | Kinect sensor | Vibration | Object recognition | Tested by 2 blindfolded persons
Jeong and Yu [97] | Ultrasonic sensor | Vibration | Obstacle detection | Tested by 4 blindfolded and 10 blind persons
Bay Advanced Technologies Ltd. [79] | Ultrasonic sensor | Audition; vibration | Obstacle detection | None
Scherlen et al. [83] | Infrared sensor; brilliance and water sensors | None | Object recognition | None
Kim et al. [84] | Ultrasonic sensor; colour sensor; CdS photoresistor | Audition; vibration | Obstacle detection | Usability validated by 7 types of criteria
Shim and Yoon [85] | Ultrasonic sensor; infrared sensor; contact sensor (two antennas) | Audition | Obstacle detection | None
Nearly all assistive devices listed below belong to SSDs. SSDs have been around for 40 years. Vibrotactile sensors were usually placed on the back to develop assistive devices [63]. Subsequently, some investigators put an artificial sensor on the tongue [64]; the latter is the antecedent of the commercial BrainPort cited in Table 3. More recent, and highly promising, is the auditory device The vOICe [65]. It has been studied extensively for localization [66] and object identification [67]. Numerous neuroscience studies have shown that The vOICe activates the visual cortex in the blind as they perform tasks with images, suggesting that one can truly 'see' with the sound output of the device [68]. These early devices have been widely validated across tasks, settings and user groups; their success and use are thus more easily ascertained than for many devices cited in Tables 1 and 2.

5.1 Vision Substitution by Assistive Canes

The use of an assistive cane is critical in reducing the risk of collision, which helps VIPs and blind people walk more confidently. Table 1 summarizes some assistive canes designed for VIPs and blind people.

In general, an assistive cane is developed by mounting sensing and feedback modules on a classic white cane. The assistive cane then acquires information about the surroundings and transmits raw or (pre-)processed data to users via a predefined feedback approach [69].

Bolgiano and Meeks [70] first put a laser into a cane to detect obstacles in the travelling path; audio and vibratory signals were produced when VIPs and blind people approached an obstacle.

Vera et al. [71] used an RGB camera and a laser pointer in combination to develop a virtual white cane for VIPs and blind people. In their device, the RGB camera of a smartphone captures the laser beam reflection, and the distance from the cane to the obstacle is calculated using active triangulation. Through personalized vibration generated by the smartphone, the user is warned if possible obstacles lie in the travelling path; furthermore, the magnitude of vibration encodes the distance. Validation experiments demonstrated that the travel time with the virtual white cane is less than that with the traditional white cane. An assistive cane equipped with a point laser, however, may fail to detect potholes and small obstacles.
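The geometry behind such active triangulation can be sketched for the simplest configuration, a laser mounted parallel to the camera's optical axis at a known baseline; the variable names and numbers below are illustrative, not taken from Vera et al.:

    def laser_triangulation_depth(offset_px, focal_px, baseline_m):
        # The imaged laser spot moves towards the principal point as the
        # target recedes: depth = focal length (px) * baseline / offset (px).
        if offset_px <= 0:
            raise ValueError("laser spot must be offset from the principal point")
        return focal_px * baseline_m / offset_px

    # Example: a spot 40 px from the principal point with an 800 px focal
    # length and a 10 cm baseline gives 800 * 0.10 / 40 = 2.0 m.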
Dang et al. [72] proposed an assistive cane using a linear laser, an RGB camera and an inertial measurement unit as sensors to classify the type of obstacle and estimate its distance from the user. The inertial measurement unit is an electronic device that measures the user's angular rate to determine spatial coordinate frames. The inertial sensor tracks the position of the laser stripe in the navigation coordinate frame, and subsequent analysis of the laser point coordinates relative to the original laser stripe classifies obstacles into walls, stairs and blocks. The gathered information is transmitted to the user via simple nonspeech feedback. The performance of this assistive cane is easily degraded by strong illumination, which limits its application scope.

Owing to the limited detection range when using a laser as the sensor, only objects in the region illuminated by the laser can be detected. To overcome this shortcoming, the spatial information recorded by an RGB camera can be leveraged. Majeed and Baadel [73] integrated an RGB camera with a 270° lens into an assistive cane, allowing much of the environmental information to be captured. The proposed smart cane can help VIPs and blind people dodge obstacles up to a maximum distance of 10 m and, moreover, can recognize different persons' faces.
Ye et al. [74] used a three-dimensional (3D) camera as a sensor to develop an assistive cane aimed at estimating pose and recognizing obstacles. The 3D camera used in their study is the SwissRanger SR4000, a small (65 × 65 × 68 mm³) 3D time-of-flight camera. A speech feedback module serves as the communication medium between human and cane. This assistive cane was validated with data collected from a variety of indoor scenes; the results demonstrated that the proposed cane could estimate pose and recognize objects with satisfactory performance. The developers stated that they were working with orientation and mobility specialists as well as blind trainees of the World Service for the Blind in Arkansas to refine the functions of their assistive cane.

Apart from the laser and RGB camera, the ultrasonic sensor is one of the most widely used sensors in assistive devices owing to its high price/performance ratio. The ultrasonic sensor emits ultrasonic waves in the air, and the reflected sound is received by the sensor. This sensor is commonly applied for detecting objects and measuring distance.
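The underlying distance computation is a simple time-of-flight calculation; the sketch below assumes sound travelling at roughly 343 m/s in air at room temperature:

    SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius

    def ultrasonic_distance(echo_delay_s):
        # The pulse travels to the obstacle and back, so halve the path.
        return SPEED_OF_SOUND * echo_delay_s / 2.0

    # Example: an echo arriving 8.75 ms after the pulse corresponds to
    # 343 * 0.00875 / 2, which is about 1.5 m.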
The maximum working range of this ultrasoniccane is 1.5 m, which is much less than that of the canedeveloped by Majeed and Baadel.Gupta et al. [76] used an ultrasonic sensor and aGPS receiver together in classic canes. The addition ofGPS module allows VIPs and blind people to travel out-doors using satellite network. Audio signals generated byPygame module, a programming module to create gamesand animations, were used as the feedback to remind users.The range of distance measured by the attached ultrasonicsensor in cane is from 0.05 to 2 m, which is slightly largerthan that of the device developed by Kumar et al.Several investigators reported that they used an ultra-sonic sensor to establish assistive canes. Shah et al. [77]arranged four ultrasonic sensors in a stick. Among theseultrasonic sensors, three ultrasonic sensors are applied forobstacle detection and the remaining one for pothole de-tection. Their experimental results showed that maximumdetection distances of the ultrasonic stick were 1.45, 0.6and 0.82 m when the obstacles located on the front, left-front and right-front, respectively. A similar smart stickwas reported by Sharma et al. [78]. They stated that thissmart stick was able to perceive obstacles of any height infront of or slightly sideways to users. Bay Advanced Tech-nologies Ltd. [79] developed an ultrasonic sensor-basedassistive cane named ‘K’ Sonar, and this cane was availableon the market.Infrared sensor is also a very popular sensor selectedby investigators for the development of the smart cane. Itis an electronic sensor, which works by using a specific lightsensor to detect a selected light wavelength in the infraredspectrum. This sensor can detect infrared light radiatingfrom objects in its view field to detect object and measuredistance. Buchs et al. [80] mounted two infrared sensorson a white cane. One infrared sensor was parallel to thehorizontal plane while the other was approximately 42◦with respect to the horizontal plane. Such arrangement ofinfrared sensors allows this smart cane to detect waist-upobstacles. The detection range of this cane is only 1.5 m.The addition of RGB camera can increase the detectionrange of developed smart cane. Krishnan et al. [81] appliedan ultrasonic sensor and an RGB camera in the sensingmode of smart cane, and the testing result demonstratedthat the maximum detection range was 3 m.Infrared sensor is usually used in conjunction withother types of sensors to form the multi-mode sensingarray. Niitsu et al. [82] put four sensors viz. ultra-sonic sensor, infrared sensor, compass and tri-axial ac-celerometer together on a classic cane. In this smart cane,a bone-conduction headphone was used for human–caneinteraction in such a way that the feedback informationcould be passed to users unobtrusively. This assistivecane based on multi-mode sensing array can achieve thedetection accuracy of 100% for wide obstacles, crossingand approaching persons, while 95% for thin obstacles.It should be noted that the bone conduction may haveinterference with several brain functions. Scherlen et al.[83] leveraged an infrared sensor, a brilliance sensor and awater sensor in combination to develop a smart cane named‘RecognizeCane’, which was capable of recognizing objectsand their constituent materials. At present, four mate-rials, namely metal (steel), glass, cardboard and plastic,can be successfully recognized. Also, the ‘RecognizeCane’can distinguish the zebra crossing and water puddle usingbrilliance and water sensors, respectively. 
The brilliance sensor was also adopted by Kim et al. [84] in their smart cane to measure environmental brightness. To detect obstacles in front accurately, Shim and Yoon [85] attached two antennas used as contact sensors, together with an ultrasonic sensor and an infrared sensor, to the sensing unit of a smart cane. With the aid of the contact sensors, this smart cane effectively complements the ultrasonic and infrared sensors in detecting short-range obstacles.

Fan et al. [86] applied an RGB-D camera and an ultrasonic sensor to acquire dynamic visual environmental information and to detect surrounding obstacles, respectively. The RGB-D camera obtains synchronized colour and depth videos. To implement outdoor navigation, they added a GPS module to the sensing unit. Validation experiments conducted in an open area demonstrated that a cane fitted with this sensing unit can help VIPs and blind people travel outdoors safely. However, this cane cannot process the image data captured by the RGB-D camera in real time. Takizawa et al. [87] also used an RGB-D camera in their sensing unit, calling the result the Kinect cane. Using the RGB-D camera, the Kinect cane can recognize different types of indoor obstacles, including chairs, staircases and floors. Two blindfolded persons tested the proposed cane, and the results showed that the average search time with the Kinect cane was significantly shorter than with a classic white cane.

Some other sensors are also used in the sensing units of assistive canes. Kassim et al. [88] mounted radio frequency identification (RFID) transponders on the floor and installed an RFID reader at the end of a cane. RFID is a technology that records the presence of an object using radio signals. While the user walks, the RFID reader reads the RFID tags arranged on the floor in advance, and the addresses of these tags are sent for map processing. The auditory interface then emits voice commands such as '90° turn left' after digital compass calibration. A small-sample experiment with two human subjects showed that the RFID-based smart cane has the potential to help VIPs and blind people walk independently in indoor environments. Frequency-modulated continuous wave (FMCW) radars and antennas were housed in a classic white cane by Pisa et al. [89] for obstacle detection; this cane could receive reflections from a metallic panel up to 5 m away. FMCW radar is a short-range measuring radar capable of determining the distance of an object in its field of view.
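For reference, the standard linear-chirp ranging relation behind such radars links the beat frequency f_b, obtained by mixing the transmitted chirp with its echo, to the target range R. This is a textbook identity, not the specific design of Pisa et al.:

    f_b = \frac{2R}{c} \cdot \frac{B}{T}
    \qquad \Longrightarrow \qquad
    R = \frac{c \, T \, f_b}{2B},

where B is the sweep bandwidth, T the sweep duration and c the speed of light.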
The assistive cane is a portable assistive device. It is compact and lightweight, and thus easily carried by users. Despite these advantages, the assistive cane needs to interact with users constantly.

5.2 Vision Substitution by Assistive Glasses

The assistive glass is a wearable assistive device. Table 2 presents some assistive glasses designed for VIPs and blind people. An assistive glass generally fixes sensing and feedback modules on a classic pair of glasses. Unlike assistive canes, assistive glasses often use visual signals as feedback for users.

Table 2. Summary of Assistive Glasses for VIPs and Blind People: Sensors Used, Feedback Produced and Validation Methods

Study | Sensor | Feedback | Functionality | Validation
Sadi et al. [98] | Ultrasonic sensor | Audition | Obstacle detection | Tested in lab conditions
Kassim et al. [99] | Ultrasonic sensor | Audition; vibration | Obstacle detection | Validated by blind spot evaluation experiment
Yi and Tian [100] | RGB camera | Audition | Text reading from natural scenes | Tested by 10 blind persons
Everding et al. [102] | RGB camera | Audition | Obstacle detection | Tested by 2 experiments (11 and 5 persons, respectively)
Wang et al. [103] | RGB camera | Audition | Navigation; way finding | Evaluated in databases
Hassan and Tang [101] | RGB camera | Audition | Text recognition | Tested by several sample texts
Pundlik et al. [104] | Google Glass | Vision | Smartphone screen magnification | Evaluated by 8 sighted and 4 visually impaired persons
Neto et al. [105] | RGB-D camera | 3D audition | Face recognition | Validated in databases and by both blindfolded and visually impaired users
Stoll et al. [106] | RGB-D camera | Audition | Indoor navigation | Validated by 2 performance metrics, i.e., travel time and error
Hicks et al. [107] | RGB-D camera | Vision | Scene recognition and analysis | Tested by 4 sighted and 12 visually impaired participants
Wu et al. [108] | Pico projector; optical lenses | Vision | Vision enhancement | In simulated stage
Lan et al. [112] | RGB camera | Audition | Public sign recognition | Tested by some common public signs
Hu et al. [26] | RGB camera | Vision | Night vision enhancement | Evaluated on custom-built databases

Sadi et al. [98] embedded an ultrasonic sensor in traditional glasses to develop smart glasses for walking assistance. The sensing region of the ultrasonic sensor covers a 3 m distance and a 60° angle. Processed information corresponding to the obstacle distance is sent to users via audio signals. Validation experiments carried out in the lab showed detection accuracies beyond 93%. Kassim et al. [99] compared the performance of three sensors, an ultrasonic sensor, an infrared sensor and a laser ranger, considering metrics such as accuracy, size and weight, and finally selected ultrasonic sensors for their assistive glasses. As feedback, two warning modes, viz. audition and vibration, were designed, and users could switch between them based on preference or the surrounding environment. Kassim et al. gave an example: in a noisy environment such as a bus terminal or market, the user can switch from the auditory to the vibration mode, freeing the sense of hearing for ambient sounds. A blind spot evaluation experiment demonstrated the effectiveness of the proposed smart glasses.

Besides the ultrasonic sensor, the RGB camera is commonly used in the sensing units of assistive glasses; four publications in Table 2 used RGB cameras to obtain outside information. Yi and Tian [100] applied an RGB camera mounted on glasses to help VIPs access text information in their daily lives. They reported that further study should focus on improving the detection accuracy of scene text hidden in cluttered backgrounds. One possible solution is to explore more effective feature representations to establish more robust models, which can then be written into the processing unit of the smart glasses. Similar research was conducted by Hassan and Tang [101], whose smart glasses are only suitable for recognizing text on hardcopy materials. Inspired by the principles of human visual perception, Everding et al. [102] deployed two RGB cameras on classic glasses to imitate the two human retinas. The performance of their smart glasses is satisfactory when subjects are static; for moving tests, the performance is still unknown. Wang et al. [103] embedded a saliency map algorithm into RGB camera-based smart glasses for the detection of indoor signs. Experimental results on their databases containing indoor signs and doors showed the usability of their glasses. The output information of the four abovementioned publications is all delivered to users in audio form.

Pundlik et al. [104] performed secondary development on Google Glass to magnify smartphone screen content, thereby helping VIPs access information displayed on the screen. They invited eight sighted persons and four VIPs to use calculator and music player apps on a smartphone with the aid of the proposed glasses and the phone's built-in screen zoom app. Comparison results showed that the assistive glasses based on Google Glass outperformed the built-in screen zoom software in improving the ability of VIPs to read screen content.

As the RGB-D camera acquires both colour and distance information, it has been widely used in assistive glasses. Neto et al. [105] tied a Microsoft Kinect sensor directly to the user's head; this assistive device conveys outside information to the user via 3D audio signals. This hardware architecture is somewhat abrupt. A similar hardware framework was adopted by Stoll et al. [106]. After validation experiments on 21 blindfolded young adults with a one-week interval, they deemed the system promising for indoor use but still inefficient for outdoor scenarios. Hicks et al. [107] improved the hardware architecture and made it more like a pair of glasses. They converted scene data obtained by an RGB-D camera into a depth map in which nearby objects are rendered brighter, and the processed depth images were displayed on two OLED screens. In validation experiments with VIPs, the average detection distance was approximately 3 m; further work is needed to increase the detection distance of objects. A possible solution is to change the mechanical architecture of the glasses to a see-through display.
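The rendering step admits a very small sketch: an illustration, assuming a metric depth map with zeros marking invalid readings, of the near-is-bright mapping described above (not Hicks et al.'s exact code):

    import numpy as np

    def depth_to_brightness(depth_m, max_depth_m=4.0):
        # Nearer surfaces appear brighter; invalid (zero) readings stay dark.
        clipped = np.clip(depth_m, 0.0, max_depth_m)
        brightness = np.where(depth_m > 0, 1.0 - clipped / max_depth_m, 0.0)
        return (brightness * 255).astype(np.uint8)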
Wu et al. [108] designed a compact see-through near-eye display system for persons who are hyperopic. Unlike most assistive devices, this system does not use any digital processing technology. Its main principle is that light emitted by distant objects passes through preset aspherical surfaces, so the user sees a relatively clear image of the object. According to their simulation results, the final image provided to users is nearly identical to the original image, although reduced brightness and distortion in the image corners are also observed. These glasses, which can enhance the vision of people with presbyopia, are still in the design phase.

Hu et al. [26] attempted to develop see-through glasses to assist persons who suffer from nyctalopia. They first analysed the vision model of night blindness and then derived the relationship between luminance levels and the RGB grey scale of the image to develop an enhancement algorithm. Experimental results showed that the brightness of raw dark images could be significantly improved by the proposed algorithm. After spatial distance and camera lens calibrations, the processed image aligns well with the view seen by users.

Apart from the above assistive glasses, which are still at an engineering or concept stage, several assistive glasses are available on the market. These commercialized glasses for visual assistance are summarized in Table 3.

Table 3. Summary of Some Assistive Glasses Which Are Available on the Market

Name | Company | Launch date | Functionality | Brief description
Google Glass [113] | Google Inc. | 2012 | Direction recognition | Equipped with an RGB camera and a gyroscope, and has all the functions of a mobile phone. As feedback, it transmits information to the user via a bone-conduction earphone and a display screen. Google Glass is not designed for the visual assistance of VIPs and blind people, but secondary development can be based on it.
eSight 3 [114] | eSight Co. | 2017 | No specific function description | Mainly designed for individuals who are not completely blind. A high-speed, high-quality camera captures what the user is browsing; the videos are image-enhanced and then shown on two OLED screens. In its display mode, eSight 3 resembles a virtual reality display device.
OrCam [115] | OrCam Technologies Ltd. | 2015 | Text reading; face recognition; product and money identification | Mainly consists of an RGB camera and a portable computer. It can be fixed on any eyeglass frame and informs the user of outside information via audio signals.
Enchroma [116] | Enchroma, Inc. | 2013 | Colour contrast enhancement | Designed for colour blindness. It does not leverage any digital processing technology; specially designed lenses alter the original light waves to help persons with colour vision deficiency see the real colours.
Intoer [117] | Hangzhou KR-VISION Technology Co., Ltd. | 2017 | Obstacle detection; scene, money, puddle, staircase, traffic signal and zebra crossing recognition; navigation | Uses an infrared binocular camera to record environmental information illuminated by natural and structured light. It produces specially encoded stereo sound to inform the user via a bone-conduction earphone.
BrainPort V100 [118] | Wicab, Inc. | 2015 | Obstacle detection; scene recognition | Mainly composed of an RGB camera mounted on a pair of glasses, a hand-held controller and a tongue array containing 400 electrodes. Outside information is converted into electrical signals sent to the array on the user's tongue. A training phase is required before using this device.

Google Glass is often used for secondary development, and many assistive glasses not listed in this survey are built on it [109], [110]. The target users of eSight 3 are VIPs, and developers therefore place two OLED display screens in front of the user's eyes to play processed videos. The sensors of OrCam and Intoer are an RGB camera and an infrared binocular camera, respectively; both products use audio signals as feedback. Enchroma is designed to assist with colour blindness. Like the system of Wu et al. [108], this product achieves its functionality (here, colour contrast enhancement) with specially designed lenses instead of digital processing. The sensing unit of the BrainPort V100 is similar to those of the above products; the difference is that it uses electric stimulus feeling as feedback. The developers of the BrainPort V100 consider the tongue extremely sensitive to electric stimulus, and hence place a tongue array containing 400 electrodes on the user's tongue. This means the resolution of the BrainPort V100 is 20 × 20 pixels; the intensity of stimulation represents the pixel intensity of the image obtained by the RGB camera. In addition, because of the low resolution of the tongue array, the background of the raw image must be eliminated [111].
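The reduction from a camera frame to the 20 × 20 electrode grid can be pictured with the following sketch, which average-pools a grayscale frame and quantizes each cell to a few stimulation levels. It is an assumption-laden illustration, not Wicab's processing pipeline:

    import numpy as np

    def image_to_tongue_array(gray, grid=20, levels=8):
        # Average-pool the frame into grid x grid cells, one per electrode.
        h, w = gray.shape
        ch, cw = h // grid, w // grid
        cells = gray[:ch * grid, :cw * grid].astype(float)
        pooled = cells.reshape(grid, ch, grid, cw).mean(axis=(1, 3))
        # Quantize each cell's mean intensity to a stimulation level.
        return np.clip((pooled / 256.0 * levels).astype(int), 0, levels - 1)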
5.3 Vision Substitution by Other Forms of Assistive Devices

Table 4 summarizes some assistive devices in forms other than canes and glasses.

Table 4. Summary of Some Assistive Devices with Various Forms

Study | Modality | Sensor | Feedback | Functionality | Validation
Wang et al. [119] | None | RGB-D camera | Audition | Detection of stairs, pedestrian crosswalks and traffic signs | Evaluated on databases
Satue and Miah [120] | None | Ultrasonic sensor | Nerve stimulation; audition; vibration | Obstacle detection | Tested in predefined environments
Sekhar et al. [121] | None | Stereo cameras | Audition | Obstacle detection | Compared with other systems
Rao et al. [122] | None | Laser device; RGB camera | None | Pothole and uneven surface detection | Validated by the performance metric
Gharani and Karimi [123] | None | RGB camera | None | Context-aware obstacle detection | Compared with two other algorithms using different performance metrics
Pattanshetti et al. [128] | Hat | Ultrasonic sensor; GPS receiver; RGB camera | Audition; vibration | Currency recognition; obstacle detection; navigation | None
Reshma [125] | Belt | Ultrasonic sensor | Audition | Obstacle detection | Tested by 4 blindfolded persons
Wattal et al. [126] | Belt | Ultrasonic sensor | Audition | Obstacle detection | Compared the measured and actual distance and position of obstacles
Mocanu et al. [127] | Belt | Ultrasonic sensor; RGB camera | Audition | Obstacle detection and recognition | Tested by 21 visually impaired subjects
Froneman et al. [138] | Belt | Ultrasonic sensor | Vibration | Obstacle detection | Evaluated by various common static household obstacles
Bhatlawande et al. [129] | Bracelet | Ultrasonic sensor | Audition; vibration | Way-finding; obstacle detection | Tested by 2 blindfolded persons
Rangarajan and Benslija [130] | Robotic dog | Force sensor; RGB camera | Audition | Obstacle detection; word recognition | Tested on flat ground and slope
Lin et al. [131] | Smartphone | RGB camera | Audition | Obstacle detection and recognition | Tested by 4 visually impaired persons
Lee et al. [132] | Jacket | Ultrasonic sensor; GPS receiver; RGB camera; magnetic compass sensor | Audition; vibration | Navigation; obstacle detection | Tested with various device configurations in different environments
Kim and Song [133] | Wheelchair | Ultrasonic sensor | None | Obstacle detection | Tested at different moving speeds
Altaha and Rhee [137] | Cane; jacket; glove | Ultrasonic sensor | 3D audition | Obstacle detection | Tested by the blind person
Mekhalfi et al. [139] | Jacket | Laser sensor; RGB camera | Audition | Indoor scene description | Tested in databases
Bhatlawande et al. [135] | Bracelet; belt | Ultrasonic sensor; RGB camera | Audition; vibration | Obstacle detection | Tested by 15 trained blind persons
Sivagami et al. [136] | Glasses; belt | Ultrasonic sensor | Audition | Obstacle detection | Tested by blindfolded persons
Wu et al. [140] | Wheeled robots | Ultrasonic sensor; RGB camera; RFID reader | None | Indoor navigation | Tested on the predefined path
Spiers and Dollar [141] | Hand-held cube | UWB transmitter | Shape-changing tactus | Indoor navigation | Tested by sighted persons
Fang et al. [134] | Flashlight | RGB camera; structured light | Audition | Obstacle detection | Evaluated on custom-built databases

Several investigators provide only a core component of an assistive device. Using RGB-D images, Wang et al. [119] developed a Hough transform-based image processing algorithm for the detection and recognition of stairs, pedestrian crosswalks and traffic signals. Results on their RGB-D databases showed the effectiveness of this system.
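A rough idea of the line-extraction step can be given with OpenCV; the sketch below keeps the long, near-horizontal edges that stair treads and crosswalk stripes tend to produce. The thresholds are illustrative, and Wang et al. additionally combine such lines with depth data:

    import cv2
    import numpy as np

    def candidate_stair_lines(bgr):
        # Edge detection followed by a probabilistic Hough transform.
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                                minLineLength=bgr.shape[1] // 3, maxLineGap=10)
        if lines is None:
            return []
        kept = []
        for seg in lines:
            x1, y1, x2, y2 = seg[0]
            # Keep segments within ~15 degrees of horizontal (tan 15 ~ 0.27).
            if abs(int(y2) - int(y1)) <= 0.27 * abs(int(x2) - int(x1)):
                kept.append((x1, y1, x2, y2))
        return kept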
Satue and Miah [120] applied an ultrasonic sensor to detect obstacles and then combined electric stimulation, audition and vibration to warn blind people of dangerous situations. As feedback, they placed a nerve stimulator unit on the wrist; this unit gives an electric shock below the safe limit of human nerve stimulation according to the obstacle distance. Sekhar et al. [121] used a real-time stereo vision algorithm implemented on an FPGA to detect obstacles. A matching algorithm called zero-mean sum of absolute differences maximizes hardware utilization, making their system applicable to real-time applications. Rao et al. [122] combined a laser and an RGB camera in their assistive system to realize pothole and uneven surface detection; their study shows that a laser can serve as structured light for detecting various obstacles. Gharani and Karimi [123] calculated the optical flow between two consecutive RGB images and extracted feature points based on object texture and user movement. Experimental results showed that the combined use of optical flow and point tracking algorithms was capable of detecting both moving and stationary obstacles close to the RGB camera.
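The optical flow computation itself is standard; a minimal dense-flow sketch using OpenCV's Farneback method follows. It illustrates only the first step of such a pipeline, not Gharani and Karimi's full point-tracking algorithm:

    import cv2
    import numpy as np

    def flow_magnitude(prev_bgr, curr_bgr):
        # Dense optical flow between consecutive frames; large, growing
        # magnitudes flag regions that are approaching the camera.
        prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
        curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, curr_gray, None, pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        return np.hypot(flow[..., 0], flow[..., 1])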
Assistive devices also exist in other modalities.

The belt is a widely used modality for assistive devices [124]. Reshma [125] arranged five ultrasonic sensors around a belt; this spatial arrangement allows obstacles to be detected within a circle of 5 m in diameter. A similar assistive belt was reported by Wattal et al. [126], with a maximum detection distance also of 5 m. Mocanu et al. [127] used one RGB camera and four ultrasonic sensors in their visual assistive belt. A total of 21 VIPs were involved in the evaluation experiment, and the results demonstrated that the developed assistive belt could recognize both static and moving objects in highly dynamic urban scenes; each subject also reported a good experience.

Pattanshetti et al. [128] developed an assistive hat consisting of an ultrasonic sensor and an RGB camera for obstacle detection and currency identification, respectively. To achieve outdoor navigation, they leveraged the GPS module of a mobile phone.

Bhatlawande et al. [129] developed an ultrasonic bracelet for the independent mobility of VIPs and blind people. With on-demand hand movements, this bracelet can warn the user of obstacles in the range from 0.2 to 6 m; alerting signals are sent to users via audition and vibration.

Rangarajan and Benslija [130] reported a voice recognition robotic dog that could guide VIPs and blind people to a destination while avoiding obstacles and traffic. This robotic dog was successfully tested on flat ground and on a slope. Lin et al. [131] directly used the built-in RGB camera of a smartphone to detect and recognize obstacles. However, the obstacle recognition accuracy in their study was only 60%, which is insufficient for VIPs and blind people to avoid surrounding obstacles in the real world.

Lee et al. [132] put an ultrasonic sensor array, a GPS receiver, an RGB camera and a magnetic compass sensor on a jacket to help VIPs and blind people travel outdoors. This assistive jacket was tested with various device configurations in different environments, and the results demonstrated that the sensor and receiver network has the potential to guarantee safe outdoor navigation.

Kim and Song [133] extended the functionality of a classic wheelchair by adding multiple ultrasonic sensors, enabling the wheelchair to perform efficient obstacle searching. Excellent performance was observed when the updated wheelchair was tested at different moving speeds.

An assistive flashlight was designed by Fang et al., who used an RGB camera and structured light generated by a laser array to detect obstacles [134]. A laser with a high refresh rate was used to achieve a visual bifurcation effect so that people around could not perceive the laser light while the camera could still capture it. The flashlight can therefore operate unobtrusively.

To further improve performance, some investigators used several modalities of assistive devices simultaneously for specific assistive purposes. Bhatlawande et al. [135] installed an RGB camera and an ultrasonic sensor on a belt and a bracelet, respectively, to assist blind people in walking. In an evaluation experiment with 15 blind people, the dual-mode assistive device performed well: 93.33% of participants expressed satisfaction, 86.66% found it convenient to operate and 80% appreciated the comfort of the system. Sivagami et al. [136] also developed a dual-mode assistive device comprising glasses and a belt for VIPs and blind people travelling in unknown circumstances. Altaha and Rhee [137] proposed three different modalities, viz. a jacket, a glove and a cane, for obstacle detection. They arranged three ultrasonic sensors on the front, left and right sides, allowing not only detection of nearby objects but also measurement of their distance from the user. We suggest that these three assistive devices could in future be used in combination to increase the detection range and distance.

6. Conclusion and Prospective

Although numerous assistive devices are available, they have not yet been effectively adopted by VIPs and blind people. One reason is that these assistive devices can only act in a restricted spatial range because of their limited sensors and feedback modes. The other reason is that the performance of these assistive devices has not been effectively validated. As the tables above show, in many cases only blindfolded sighted subjects were invited to the validation experiments; in fact, the cognitive strategies of VIPs and blind people differ significantly from those of blindfolded sighted subjects.

To conclude this survey, we discuss three prospectives for assistive devices: (1) increase the diversity of input and output information to guarantee the reliability of the assistive device, (2) develop assistive devices based on the perception mechanisms and behaviour patterns of VIPs and blind people and (3) design more reliable experiments to validate device feasibility.

Feedback diversity can increase the reliability of the final assistive device. Multimodal feedback, including audition, thermal feedback and vibration, was embedded into a virtual reality system that allows VIPs and blind people to explore and navigate virtual environments [30]. Likewise, a sensor fusion framework allows an assistive device to obtain more important information about the surrounding environment. Rizzo et al. [142] found that depth information extracted from a stereoscopic camera system could miss specific potential collision hazards, and that adding infrared sensors offered a reliable distance measurement to remove this inconsistency in the depth inferred from stereo images. Hence, for a specific task, if the sensors in use give inconsistent measurements, an alternative sensing modality can be chosen as a remedy.

The study of changes in the connectivity of functional areas of the human brain can help us understand the changed perception mechanisms of VIPs and blind people [143].
Because congenitally blind people rely more on auditory and tactile information, the connectivity of their multisensory brain areas is more complicated [144]. Therefore, the introduction of brain imaging is essential for the design of assistive devices. Fortunately, several reviews in a recent special issue of Neuroscience and Biobehavioral Reviews cover the spectrum of SSDs and their relevance for understanding the human brain (http://www.sciencedirect.com/science/journal/01497634/41). In addition, better assistive devices can be developed following the ideas of bionics [145].

Currently, the performance of assistive devices is rarely or inadequately validated by VIPs and blind individuals. As the cognitive strategies of VIPs and sighted people differ significantly, there is no guarantee that performance validated by blindfolded sighted people represents that experienced by VIPs and blind people [69]. It is therefore necessary to invite numerous VIPs and blind people from different blind associations to test the performance of a developed assistive device. Furthermore, real-world scenarios are far more complicated, and testing environments should cover every possible application scenario as fully as possible.

Acknowledgement

This work was sponsored by the Shanghai Sailing Program (No. 19YF1414100), the National Natural Science Foundation of China (No. 61831015, No. 61901172), the STCSM (No. 18DZ2270700), and the China Postdoctoral Science Foundation funded project (No. 2016M600315). The authors would also like to acknowledge Ms. Huijing Huang, Ms. Shuping Li, and Mr. Joel Disu for providing assistance with the English language revision.

References

[1] World Health Organization, Visual impairment and blindness (2017). Available from: http://www.who.int/mediacentre/factsheets/fs282/en/.
[2] M. Gori, G. Cappagli, A. Tonelli, G. Baud-Bovy, and S. Finocchietti, Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children, Neuroscience & Biobehavioral Reviews, 69(Supplement C), 2016, 79-88.
[86] Ultrasonic sensor;GPS receiver;RGB-D cameraAudition;vibrationNavigation Tested inoutdoor open areaSilva andDias [90]Ultrasonic sensor;inertia measurementunitAudition Obstacle detection Tested by obstacles inthe pathKumar et al.[75]Ultrasonic sensor Audition Obstacle and potholesdetectionTested by 10volunteersMajeed andBaadel [73]RGB camera with270◦lensAudition Facial recognition Tested in databaseSatpute et al.[91]Ultrasonic sensor;GPS receiverAudition;vibrationNavigation; obstacledetectionNoneRizzo et al.[92]Adaptive mobilitydevicesVibration Drop-off and obstacledetectionTested by 6 adultsShah et al.[77]Ultrasonic sensor Audition Navigation; obstacleand potholesdetectionNoneSharma et al.[78]Ultrasonic sensor Audition;vibrationStatic and dynamicobstacles detectionTested in real-timeenvironmentKrishnanet al. [81]Ultrasonic sensor;GPS receiver;RGB cameraAudition Navigation; obstacledetectionTested in databaseBolgiano andMeeks [70]Laser Audition;vibrationObstacle detection NoneSugimotoet al. [93]Ultrasonic sensor;GPS receiverVibration Navigation; obstacledetectionTested in presetscenariosWankhadeet al. [94]Infrared sensor Audition;vibrationObstacle detection NoneKassim et al.[88]RFID network;digital compassAudition Indoor navigation Tested by the mobilerobot and humansubjectVera et al. [71] RGB camera; laserpointerVibration Obstacle detection Tested by 16 sightedpersonsAlwis andSamarawickrama[95]Ultrasonic sensor Audition;vibrationObstacle detection NonePisa et al. [89] FMCW radar None Obstacle detection Tested by obstaclewith differentdistancesBuchs et al. [80] Infrared sensors Audition;vibrationWaist-up obstaclesdetectionTested by the trainedblind participants(continued )585Table 1ContinuedPinto et al. [96] Ultrasonic sensor;GPS receiverAudition;vibrationObstacle detection Tested by obstaclewith differentdistancesYe et al. [74] 3D camera Audition Obstacle detection;pose estimationValidated by datafrom various indoorscenesDang et al. [72] Linear laser;RGB camera;inertial measure-mentunitAudition Obstacle detectionand recognitionValidated by theobstacles with variousheights, types,distancesNiitsu et al. [82] Ultrasonic sensor;infrared sensor;compass; tri-axialaccelerometerAudition(boneconduction)Obstacle detection Examined in 20 timesby 1 userTakizawa et al.[87]Kinect sensor Vibration Object recognition Tested by 2blindfolded personsJeong and Yu [97] Ultrasonic sensor Vibration Obstacle detection Tested by 4 blindfolded and 10 blindpersonsBay AdvancedTechnologies Ltd.[79]Ultrasonic sensor Audition;vibrationObstacle detection NoneScherlen et al. [83] Infrared sensor;brilliance;water sensorsNone Object recognition NoneKim et al. [84] Ultrasonic sensor;colour sensor;Cds photo resistorAudition;vibrationObstacle detection Validate the usabilityby 7 types of criteriaShim and Yoon[85]Ultrasonic sensor;infrared sensor;contact sensor(two antennas)Audition Obstacle detection NoneNearly all assistive devices listed later belong to SSDs.SSDs have been around for 40 years. The vibrotactilesensors were usually placed on the back to develop as-sistive device [63]. Subsequently, some investigators putan artificial sensor on the tongue [64]. The latter is theantecedent to the commercial BrainPort that is cited inTable 3. More recent, and highly promising, is the audi-tory device The vOICe [65]. 
It has been studied exten-sively for localization [66] and object identification [67].There have been numerous neuroscience studies showingthat The vOICe activates visual cortex in the blind asthey perform tasks with images – suggesting that one cantruly ‘see’ with the sound output of the device [68]. Thesedevices in early stages have been widely validated in vari-ous tasks, settings and user groups. Thus the success anduse are more easily ascertained than many devices cited inTables 1 and 2.5.1 Vision Substitution by Assistive CanesThe use of assistive cane is critical in reducing the risk ofcollision, which can help VIPs and blind people to walkmore confidently. Table 1 summarizes some assistive canesdesigned for VIPs and blind people.In general, an assistive cane is developed by mountingsensing and feedback modules on a classic white cane. Sub-sequently, the assistive cane acquires information with re-spect to surroundings and transmits raw or (pre-)processeddata to users via predefined feedback approach [69].Bolgiano and Meeks [70] first put a laser into a caneto detect obstacles in the traveling path, and audio andvibratory signals were available when VIPs and blindpeople approach the obstacle.Vera et al. [71] used an RGB camera and a laserpointer in combination to develop a virtual white cane for586Table 2Summary of Assistive Glasses for VIPs and Blind People about Sensors Used and Feedback Producedas well as Validation MethodsStudy Sensor Feedback Functionality ValidationSadi et al. [98] Ultrasonic sensor Audition Obstacle detection Tested in lab conditionsKassim et al. [99] Ultrasonic sensor Audition;vibrationObstacle detection Validated by blind spotevaluation experimentYi and Tian [100] RGB camera Audition Text reading fromnatural sceneTested by 10 blind personsEverding et al.[102]RGB camera Audition Obstacle detection Tested by 2 experiments(11 and 5 persons,respectively)Wang et al. [103] RGB camera Audition Navigation; wayfindingEvaluated in databasesHassan andTang [101]RGB camera Audition Text recognition Tested by severalsample textsPundlik et al.[104]Google Glass Vision Smartphone screenmagnificationEvaluated by 8 sighted and4 visually impaired personsNeto et al. [105] RGB-D camera 3D audition Face recognition Validated in databases andby both blindfolded andvisually impaired usersStoll et al. [106] RGB-D camera Audition Indoor navigation Validated by 2 performancemetrics i.e. travel timeand errorHicks et al. [107] RGB-D camera Vision Scene recognitionand analysisTested by 4 sighted and12 visually impairedparticipantsWu et al. [108] Pico projector;optical lensesVision Vision enhancement In simulated stageLan et al. [112] RGB camera Audition Public signrecognitionTested by some commonpublic signsHu et al. [26] RGB camera Vision Night visionenhancementEvaluated on custom-builtdatabasesVIPs and blind people. In their device, the RGB camerain smartphone captures the laser beam reflection, and thedistance from the cane to the obstacle is calculated usingactive triangulation. Through the personalized vibrationgenerated by smartphone, the user will be warned if pos-sible obstacles are located in traveling path. Furthermore,the magnitude of vibration is applied for the quantizationof distance. Results of validated experiments demonstratedthat the travel time of virtual white cane is less than thatof the traditional white cane. The assistive cane equippedwith the point laser may fail to detect the potholes and theobstacles in small and tiny size.Dang et al. 
Dang et al. [72] proposed an assistive cane using a linear laser, an RGB camera and an inertial measurement unit as sensors to classify the type of obstacle and estimate the distance from the obstacle to the user. The inertial measurement unit is an electronic device that measures the user's angular rate to determine spatial coordinate frames. The inertial sensor tracks the position of the laser stripe in the navigation coordinate frame, and the subsequent analysis of the laser point coordinates with respect to the original laser stripe can divide obstacles into walls, stairs and blocks. The gathered information is transmitted to the user via simple nonspeech feedback. The performance of this assistive cane is easily degraded by strong illumination, which limits its application scope.
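The classification idea can be sketched roughly as follows (our reconstruction; the thresholds are invented for illustration and are not reported by Dang et al.): once the stripe points are expressed in the navigation frame, the spread of their depths and heights hints at the obstacle class.

    # Toy classifier over laser-stripe points (depth, height) in metres.
    # Thresholds are invented for illustration only.
    def classify_obstacle(points: list) -> str:
        depths = [p[0] for p in points]
        heights = [p[1] for p in points]
        depth_spread = max(depths) - min(depths)
        height_spread = max(heights) - min(heights)
        if depth_spread < 0.05 and height_spread > 0.5:
            return "wall"     # stripe keeps one depth over a tall span
        if depth_spread > 0.2 and height_spread > 0.2:
            return "stairs"   # stripe recedes in depth as height rises
        return "block"        # compact in both depth and height

    print(classify_obstacle([(1.0, 0.0), (1.01, 0.45), (1.0, 0.9)]))   # wall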
Due to the limited detecting or scanning range when using a laser as the sensor, only objects located in the region illuminated by the laser can be detected. To overcome this shortcoming, the spatial information recorded by an RGB camera can be leveraged. Majeed and Baadel [73] integrated an RGB camera with a 270° lens into an assistive cane, allowing much of the environmental information to be captured. The proposed smart cane can help VIPs and blind people dodge obstacles placed at a maximum distance of 10 m, and moreover, it can be utilized to recognize different persons' faces.

Ye et al. [74] used a three-dimensional (3D) camera as a sensor to develop an assistive cane, aiming to estimate pose and recognize obstacles. The 3D camera used in their study is the SwissRanger SR4000, a small-sized (65 × 65 × 68 mm) 3D time-of-flight camera. A speech feedback module serves as the communication medium between human and cane. This assistive cane was validated by data collected from a variety of indoor scenes. Results demonstrated that the proposed cane could estimate pose and recognize objects with satisfactory performance. In their article, the developers stated that they were working with orientation and mobility specialists as well as blind trainees of the World Service for the Blind in Arkansas to refine the functions of their assistive cane.

Apart from the laser and RGB camera, the ultrasonic sensor is one of the most widely used sensors in assistive devices owing to its high performance-to-price ratio. The ultrasonic sensor emits ultrasonic waves into the air, and the reflected sound is received by the sensor; it is typically applied for detecting objects and measuring distance.
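The distance computation behind these ultrasonic canes is a simple time-of-flight calculation; a minimal sketch (illustrative, not taken from any cited device):

    SPEED_OF_SOUND_M_S = 343.0   # in air at about 20 degrees Celsius

    def echo_distance(echo_delay_s: float) -> float:
        """Distance = speed * time / 2 (the pulse travels out and back)."""
        return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

    # A 12 ms round trip corresponds to roughly 2 m:
    print(round(echo_distance(0.012), 2))   # 2.06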
Kumar et al. [75] developed an ultrasonic cane for aiding blind people to navigate. This ultrasonic cane is equipped with three pairs of ultrasonic transceivers, enabling blind people to know of aerial and ground obstacles as well as potholes in front of them via audio warnings. The maximum working range of this ultrasonic cane is 1.5 m, which is much less than that of the cane developed by Majeed and Baadel.

Gupta et al. [76] used an ultrasonic sensor and a GPS receiver together in a classic cane. The addition of the GPS module allows VIPs and blind people to travel outdoors using the satellite network. Audio signals generated by the Pygame module, a programming module for creating games and animations, were used as the feedback to alert users. The range of distance measured by the ultrasonic sensor attached to the cane is from 0.05 to 2 m, which is slightly larger than that of the device developed by Kumar et al.

Several other investigators reported ultrasonic-sensor-based assistive canes. Shah et al. [77] arranged four ultrasonic sensors in a stick: three are applied for obstacle detection and the remaining one for pothole detection. Their experimental results showed that the maximum detection distances of the ultrasonic stick were 1.45, 0.6 and 0.82 m for obstacles located to the front, left-front and right-front, respectively. A similar smart stick was reported by Sharma et al. [78], who stated that their stick was able to perceive obstacles of any height in front of or slightly sideways to users. Bay Advanced Technologies Ltd. [79] developed an ultrasonic-sensor-based assistive cane named 'K' Sonar, which is available on the market.

The infrared sensor is also a very popular choice for the development of smart canes. It is an electronic sensor that uses a light detector tuned to a selected wavelength in the infrared spectrum; by sensing infrared light radiating from objects in its field of view, it can detect objects and measure distance. Buchs et al. [80] mounted two infrared sensors on a white cane: one parallel to the horizontal plane and the other at approximately 42° to it. Such an arrangement allows this smart cane to detect waist-up obstacles, although its detection range is only 1.5 m. The addition of an RGB camera can increase the detection range of a smart cane: Krishnan et al. [81] applied an ultrasonic sensor and an RGB camera in the sensing module of their smart cane, and testing demonstrated a maximum detection range of 3 m.

The infrared sensor is usually used in conjunction with other types of sensors to form a multi-mode sensing array. Niitsu et al. [82] put four sensors, viz. an ultrasonic sensor, an infrared sensor, a compass and a tri-axial accelerometer, together on a classic cane. In this smart cane, a bone-conduction headphone is used for human-cane interaction so that the feedback information can be passed to users unobtrusively. This assistive cane based on a multi-mode sensing array achieves a detection accuracy of 100% for wide obstacles, crossing and approaching persons, and 95% for thin obstacles. It should be noted that bone conduction may interfere with several brain functions. Scherlen et al. [83] leveraged an infrared sensor, a brilliance sensor and a water sensor in combination to develop a smart cane named 'RecognizeCane', which is capable of recognizing objects and their constituent materials. At present, four materials, namely metal (steel), glass, cardboard and plastic, can be successfully recognized. The 'RecognizeCane' can also distinguish zebra crossings and water puddles using the brilliance and water sensors, respectively. The brilliance sensor was also adopted by Kim et al. [84] in their smart cane to measure environmental brightness. To detect obstacles in front accurately, Shim and Yoon [85] attached two antennas used as contact sensors, an ultrasonic sensor and an infrared sensor to the sensing unit of their smart cane. With the aid of the contact sensors, this smart cane can effectively complement the ultrasonic and infrared sensors in detecting short-range obstacles.

Fan et al. [86] applied an RGB-D camera and an ultrasonic sensor to acquire dynamic visual environmental information and to detect surrounding obstacles, respectively. The RGB-D camera is able to obtain synchronized videos of both colour and depth. To implement outdoor navigation, they added a GPS module to the sensing unit. Validation experiments conducted in an open area demonstrated that an assistive cane fitted with this sensing unit can help VIPs and blind people travel outdoors safely; however, the cane cannot process the image data captured by the RGB-D camera in real time. Takizawa et al. [87] also used an RGB-D camera in their sensing unit and called the developed cane the Kinect cane. By the use of the RGB-D camera, the Kinect cane can recognize different types of indoor obstacles, including chairs, staircases and floors. Two blindfolded persons were invited to test the performance of the proposed cane, and the results showed that the average search time with the Kinect cane was significantly shorter than that with a classic white cane.

Some other sensors are also used in the sensing units of assistive canes. Kassim et al. [88] mounted radio frequency identification (RFID) transponders on the floor and installed an RFID reader at the end of a cane. RFID is a technology that records the presence of an object using radio signals. When the user walks, the RFID reader reads the RFID tags arranged on the floor in advance, and the addresses of these tags are sent for map processing. Subsequently, the auditory interface emits voice commands such as "90° turn left" after digital compass calibration. Results of a small-sample experiment with two human subjects showed that the RFID-based smart cane has the potential to help VIPs and blind people walk independently in indoor environments.
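The tag-to-command pipeline can be sketched schematically as follows (our illustration; the tag map, coordinate convention and dead band are invented):

    import math

    # Hypothetical tag map: RFID tag address -> (x, y) floor position in metres.
    TAG_MAP = {"0xA1": (0.0, 0.0), "0xA2": (0.0, 2.0), "0xA3": (2.0, 2.0)}

    def turn_command(prev_tag: str, cur_tag: str, heading_deg: float) -> str:
        """Compare the tag-to-tag bearing with the calibrated compass heading."""
        (x0, y0), (x1, y1) = TAG_MAP[prev_tag], TAG_MAP[cur_tag]
        bearing = math.degrees(math.atan2(x1 - x0, y1 - y0)) % 360
        delta = (bearing - heading_deg + 540) % 360 - 180
        if abs(delta) < 20:   # invented dead band
            return "go straight"
        side = "right" if delta > 0 else "left"
        return "turn " + side + " " + str(abs(round(delta))) + " degrees"

    print(turn_command("0xA2", "0xA3", 0.0))   # turn right 90 degrees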
Frequency-modulated continuous wave (FMCW) radars and antennas were housed in a classic white cane by Pisa et al. [89] for obstacle detection. FMCW radar is a short-range measuring radar capable of determining the distance of objects in its field of view. Their results showed that this cane could receive reflections from a metallic panel up to 5 m away.

The assistive cane is a portable assistive device: it is compact and lightweight and thus easily carried by users. Despite these advantages, the assistive cane needs to interact with users constantly.

5.2 Vision Substitution by Assistive Glasses

The assistive glass is one of the wearable assistive devices. Table 2 presents some assistive glasses designed for VIPs and blind people. An assistive glass generally fixes sensing and feedback modules on a classic pair of glasses; unlike the assistive cane, the assistive glass can also use visual signals as feedback for users.

Table 2
Summary of Assistive Glasses for VIPs and Blind People: Sensors Used, Feedback Produced and Validation Methods

Study | Sensor | Feedback | Functionality | Validation
Sadi et al. [98] | Ultrasonic sensor | Audition | Obstacle detection | Tested in lab conditions
Kassim et al. [99] | Ultrasonic sensor | Audition; vibration | Obstacle detection | Validated by blind spot evaluation experiment
Yi and Tian [100] | RGB camera | Audition | Text reading from natural scenes | Tested by 10 blind persons
Everding et al. [102] | RGB camera | Audition | Obstacle detection | Tested in 2 experiments (11 and 5 persons, respectively)
Wang et al. [103] | RGB camera | Audition | Navigation; wayfinding | Evaluated in databases
Hassan and Tang [101] | RGB camera | Audition | Text recognition | Tested by several sample texts
Pundlik et al. [104] | Google Glass | Vision | Smartphone screen magnification | Evaluated by 8 sighted and 4 visually impaired persons
Neto et al. [105] | RGB-D camera | 3D audition | Face recognition | Validated in databases and by both blindfolded and visually impaired users
Stoll et al. [106] | RGB-D camera | Audition | Indoor navigation | Validated by 2 performance metrics, i.e. travel time and error
Hicks et al. [107] | RGB-D camera | Vision | Scene recognition and analysis | Tested by 4 sighted and 12 visually impaired participants
Wu et al. [108] | Pico projector; optical lenses | Vision | Vision enhancement | In simulated stage
Lan et al. [112] | RGB camera | Audition | Public sign recognition | Tested by some common public signs
Hu et al. [26] | RGB camera | Vision | Night vision enhancement | Evaluated on custom-built databases

Sadi et al. [98] embedded an ultrasonic sensor in traditional glasses to develop smart glasses for walking assistance. The sensing region of the attached ultrasonic sensor covers a distance of 3 m over a 60° angle. Processed information corresponding to the obstacle distance is sent to users via audio signals. Validation experiments carried out in the lab showed that detection accuracies of the proposed glasses were all beyond 93%. Kassim et al. [99] compared the performance of three sensors, namely an ultrasonic sensor, an infrared sensor and a laser rangefinder, considering several metrics such as accuracy, size and weight; they finally selected ultrasonic sensors for the development of their assistive glasses. As feedback, two warning modes, viz. audition and vibration, were designed into their device, and users can switch the warning mode based on their preference or the surrounding environment. Kassim et al. give an example: when a user comes to a noisy environment such as a bus terminal or market, he or she can use the vibration mode instead of the auditory mode, leaving the sense of hearing free for ambient sounds. A blind spot evaluation experiment demonstrated the effectiveness of the proposed smart glasses.

Besides the ultrasonic sensor, the RGB camera is also commonly used in the sensing unit of assistive glasses; four publications in Table 2 used RGB cameras to obtain outside information. Yi and Tian [100] applied an RGB camera mounted on glasses to help VIPs access text information in their daily lives. They reported that further study should focus on improving the detection accuracy of scene text hidden in cluttered backgrounds; one possible solution is to explore more effective feature representations to establish more robust models, which can then be written into the processing unit of the smart glasses. Similar research was conducted by Hassan and Tang [101], whose smart glasses are only suitable for recognizing text on hardcopy materials. Inspired by the principles of human visual perception, Everding et al. [102] deployed two RGB cameras on classic glasses to imitate the two human retinas. The performance of their smart glasses is satisfactory when subjects are static; for moving tests, the performance is still unknown. Wang et al. [103] embedded a saliency map algorithm into RGB-camera-based smart glasses for the detection of indoor signs, and experimental results on their databases containing indoor signs and doors showed the usability of their glasses. The output information of the four abovementioned publications is all delivered to users in audio form.

Pundlik et al. [104] did secondary development on Google Glass to magnify the screen content of a smartphone, thereby helping VIPs easily access information displayed on the screen. They invited eight sighted persons and four VIPs to use calculator and music player apps on a smartphone with the aid of the proposed glasses and of the phone's built-in screen zoom app. Comparison results showed that the assistive glasses based on Google Glass outperformed the built-in screen zoom software in improving the ability of VIPs to read screen content.

As the RGB-D camera can acquire both colour and distance information, it has been widely used in assistive glasses. Neto et al. [105] directly tied a Microsoft Kinect sensor to the user's head, and this assistive device conveyed outside information to the user via 3D audio signals; this hardware architecture is somewhat cumbersome. A similar hardware framework was adopted by Stoll et al. [106]. After validation experiments on 21 blindfolded young adults with a 1-week interval, they deemed the system promising for indoor use but still inefficient for outdoor scenarios. Hicks et al. [107] improved the hardware architecture and made it more glass-like. They converted scene data obtained by the RGB-D camera into a depth map in which nearby objects are rendered brighter, and the processed depth images are displayed on two OLED screens. In their validation experiment with VIPs, the average detection distance was approximately 3 m; further work is needed to increase the detection distance, possibly by changing the mechanical architecture of the glasses into a see-through display.
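The depth-to-brightness mapping described by Hicks et al. can be sketched in a few lines (our illustration using NumPy; the 4 m range is an assumed parameter):

    import numpy as np

    def depth_to_brightness(depth_m, max_range_m=4.0):
        """Render nearby objects brighter: 255 at 0 m, 0 beyond max range."""
        clipped = np.clip(depth_m, 0.0, max_range_m)
        brightness = (1.0 - clipped / max_range_m) * 255.0
        brightness[depth_m <= 0] = 0   # zero depth = no reading, keep dark
        return brightness.astype(np.uint8)

    frame = np.array([[0.5, 2.0], [0.0, 5.0]])
    print(depth_to_brightness(frame))   # [[223 127] [  0   0]]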
Wu et al. [108] designed a compact see-through near-eye display system for persons who are hyperopic. Unlike most assistive devices, this system does not use any digital processing technology. Its main principle is that light emitted by objects at a distance passes through preset aspherical surfaces, so the user sees a relatively clear image of the object. According to their simulation results, the final image provided to users is nearly identical to the original image, although reduced brightness and distortion in image corners are also observed. This glass, which can enhance the vision ability of people with presbyopia, is still in the design phase.

Hu et al. [26] attempted to develop a see-through glass to assist persons who suffer from nyctalopia. They first analysed the vision model of night blindness and then derived the relationship between luminance levels and the RGB grey scale of the image to develop an enhancement algorithm. Experimental results showed that the brightness of a raw dark image can be significantly improved by the proposed algorithm. After spatial distance and camera lens calibrations, the processed image aligns well with the view seen by users.
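Hu et al. do not state their exact luminance mapping here; a generic stand-in for this kind of low-light enhancement is a power-law (gamma) boost of the grey scale, sketched below:

    import numpy as np

    def boost_dark_image(rgb, gamma=0.4):
        """Brighten an 8-bit image with a power-law (gamma < 1) curve;
        an illustrative stand-in for the mapping derived in [26]."""
        normalized = rgb.astype(np.float32) / 255.0
        return (np.power(normalized, gamma) * 255.0).astype(np.uint8)

    dark_pixel = np.array([[[10, 20, 30]]], dtype=np.uint8)
    print(boost_dark_image(dark_pixel))   # about [[[69 92 108]]]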
Apart from the previous assistive glasses, which are still at an engineering or concept stage, several assistive glasses are available on the market. These commercialized glasses for visual assistance are summarized in Table 3.

Table 3
Summary of Some Assistive Glasses Which Are Available on the Market

Name | Company | Launch date | Functionality | Brief description
Google Glass [113] | Google Inc. | 2012 | Direction recognition | Equipped with an RGB camera and gyroscope and has all the functions of a mobile phone. As feedback, it transmits information to the user via a bone-conduction earphone and a display screen. Google Glass is not designed for visual assistance of VIPs and blind people, but secondary development can be based on it
eSight 3 [114] | eSight Co. | 2017 | No specific function description | Mainly designed for individuals who are not completely blind. A high-speed, high-quality camera captures what the user is browsing; the obtained videos are first subjected to image-enhancement processing and then shown on two OLED screens. In terms of display, eSight 3 resembles a virtual reality display device
OrCam [115] | OrCam Technologies Ltd. | 2015 | Text reading; face recognition; product and money identification | Mainly consists of an RGB camera and a portable computer. It can be fixed on any eyeglass frame and conveys outside information to the user via audio signals
Enchroma [116] | Enchroma, Inc. | 2013 | Colour contrast enhancement | Designed for colour blindness. It does not leverage any digital processing technology; it alters the original light waves using specially designed lenses to help persons with colour vision deficiency see real colours
Intoer [117] | Hangzhou KR-VISION Technology Co., Ltd. | 2017 | Obstacle detection; scene, money, puddle, staircase, traffic signal and zebra crossing recognition; navigation | Uses an infrared binocular camera to record environmental information illuminated by natural and structured light. It produces specially encoded stereo sound to inform the user via a bone-conduction earphone
BrainPort V100 [118] | Wicab, Inc. | 2015 | Obstacle detection; scene recognition | Mainly composed of an RGB camera mounted on a pair of glasses, a hand-held controller and a tongue array containing 400 electrodes. Outside information is converted into electrical signals that are sent to the array on the user's tongue. A training phase is required before using this device

Google Glass is usually used for secondary development, and many assistive glasses not listed in our survey are developed based on it [109], [110]. The target users of eSight 3 are VIPs, so its developers place two OLED display screens in front of the user's eyes to play processed videos. The sensors of OrCam and Intoer are an RGB camera and an infrared binocular camera, respectively; both products use audio signals as feedback to inform users. Enchroma is designed for the assistance of colour blindness: like the system of Wu et al. [108], this product achieves its functionality (here, colour contrast enhancement) using specially designed lenses instead of any digital processing technology. The sensing unit of BrainPort V100 is similar to those of the above-mentioned products; the only difference is that it leverages the electric stimulus feeling as feedback. The developers of BrainPort V100 consider the tongue extremely sensitive to electric stimulus, and hence they place a tongue array containing 400 electrodes on the user's tongue. This implies that the resolution of BrainPort V100 is 20 × 20 pixels, with the intensity of stimulation representing the pixel intensity of the image obtained by the RGB camera. In addition, due to the low resolution of the tongue array, the background of the raw image needs to be eliminated [111].
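The image-to-electrode mapping can be approximated by simple average pooling (our illustration; the commercial processing pipeline is certainly more elaborate):

    import numpy as np

    def to_tongue_array(gray, grid=20):
        """Average-pool a grayscale frame onto a 20 x 20 grid; each cell's
        mean intensity would drive one electrode's stimulation strength."""
        h, w = gray.shape
        trimmed = gray[: h - h % grid, : w - w % grid].astype(np.float32)
        pooled = trimmed.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
        return pooled / 255.0   # 0 = no stimulus, 1 = maximum stimulus

    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    print(to_tongue_array(frame).shape)   # (20, 20)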
5.3 Vision Substitution by Other Forms of Assistive Devices

Table 4 summarizes some assistive devices in forms other than canes and glasses.

Table 4
Summary of Some Assistive Devices with Various Forms

Study | Modality | Sensor | Feedback | Functionality | Validation
Wang et al. [119] | None | RGB-D camera | Audition | Detection of stairs, pedestrian crosswalks and traffic signs | Evaluated on databases
Satue and Miah [120] | None | Ultrasonic sensor | Nerve stimulation; audition; vibration | Obstacle detection | Tested in predefined environments
Sekhar et al. [121] | None | Stereo cameras | Audition | Obstacle detection | Compared with other systems
Rao et al. [122] | None | Laser device; RGB camera | None | Pothole and uneven surface detection | Validated by the performance metric
Gharani and Karimi [123] | None | RGB camera | None | Context-aware obstacle detection | Compared with two other algorithms using different performance metrics
Pattanshetti et al. [128] | Hat | Ultrasonic sensor; GPS receiver; RGB camera | Audition; vibration | Currency recognition; obstacle detection; navigation | None
Reshma [125] | Belt | Ultrasonic sensor | Audition | Obstacle detection | Tested by 4 blindfolded persons
Wattal et al. [126] | Belt | Ultrasonic sensor | Audition | Obstacle detection | Compared measured and actual distance and position of obstacle
Mocanu et al. [127] | Belt | Ultrasonic sensor; RGB camera | Audition | Obstacle detection and recognition | Tested by 21 visually impaired subjects
Froneman et al. [138] | Belt | Ultrasonic sensor | Vibration | Obstacle detection | Evaluated by various common static household obstacles
Bhatlawande et al. [129] | Bracelet | Ultrasonic sensor | Audition; vibration | Way-finding; obstacle detection | Tested by 2 blindfolded persons
Rangarajan and Benslija [130] | Robotic dog | Force sensor; RGB camera | Audition | Obstacle detection; word recognition | Tested on flat ground and slope
Lin et al. [131] | Smartphone | RGB camera | Audition | Obstacle detection and recognition | Tested by 4 visually impaired persons
Lee et al. [132] | Jacket | Ultrasonic sensor; GPS receiver; RGB camera; magnetic compass sensor | Audition; vibration | Navigation; obstacle detection | Tested with various device configurations in different environments
Kim and Song [133] | Wheelchair | Ultrasonic sensor | None | Obstacle detection | Tested at different moving speeds
Altaha and Rhee [137] | Cane; jacket; glove | Ultrasonic sensor | 3D audition | Obstacle detection | Tested by a blind person
Mekhalfi et al. [139] | Jacket | Laser sensor; RGB camera | Audition | Indoor scene description | Tested in databases
Bhatlawande et al. [135] | Bracelet; belt | Ultrasonic sensor; RGB camera | Audition; vibration | Obstacle detection | Tested by 15 trained blind persons
Sivagami et al. [136] | Glasses; belt | Ultrasonic sensor | Audition | Obstacle detection | Tested by blindfolded persons
Wu et al. [140] | Wheeled robots | Ultrasonic sensor; RGB camera; RFID reader | None | Indoor navigation | Tested on a predefined path
Spiers and Dollar [141] | Hand-held cube | UWB transmitter | Shape-changing tactus | Indoor navigation | Tested by sighted persons
Fang et al. [134] | Flashlight | RGB camera; structured light | Audition | Obstacle detection | Evaluated on custom-built databases

Several investigators provide only a core component of an assistive device. Using RGB-D images, Wang et al. [119] developed a Hough-transform-based image processing algorithm for the detection and recognition of stairs, pedestrian crosswalks and traffic signals; tests on their RGB-D databases showed the effectiveness of the system. Satue and Miah [120] applied an ultrasonic sensor to detect obstacles and combined electric stimulus, audition and vibration to warn blind people of dangerous situations. As feedback, they placed a nerve stimulator unit on the wrist, which gives an electric shock below the safe limit of human nerve stimulation according to the distance of the obstacle. Sekhar et al. [121] used a real-time stereo vision algorithm implemented on an FPGA to detect obstacles; a matching algorithm called zero-mean sum of absolute differences maximizes hardware utilization, making their system applicable to real-time applications. Rao et al. [122] combined a laser and an RGB camera in their assistive system to realize pothole and uneven surface detection; their study shows that a laser can serve as structured light for detecting various obstacles. Gharani and Karimi [123] calculated the optical flow between two consecutive RGB images and extracted feature points based on the texture of objects and the movement of the user. Experimental results showed that the combined use of optical flow and point tracking algorithms was capable of detecting both moving and stationary obstacles close to the RGB camera.
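A compact sketch of such a two-frame optical-flow check, using OpenCV's pyramidal Lucas-Kanade tracker (our illustration, not the authors' algorithm; the feature-detector parameters are assumptions):

    import cv2
    import numpy as np

    def mean_flow(prev_gray, cur_gray):
        """Track corner features between consecutive frames; a large mean
        displacement suggests an approaching or fast-moving obstacle."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=7)
        if pts is None:
            return 0.0
        nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
        good = status.ravel() == 1
        if not good.any():
            return 0.0
        return float(np.linalg.norm((nxt - pts)[good], axis=2).mean())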
Assistive devices have also been built in other modalities.

The belt is a widely used modality for assistive devices [124]. Reshma [125] furnished five ultrasonic sensors around a belt; this spatial arrangement of sensors allows obstacles to be detected within a circle of 5 m in diameter. A similar assistive belt was reported by Wattal et al. [126], whose maximum detection distance was also 5 m. Mocanu et al. [127] used one RGB camera and four ultrasonic sensors in their visual assistive belt. A total of 21 VIPs were involved in the evaluation experiment, and the results demonstrated that the developed assistive belt could recognize both static and moving objects in highly dynamic urban scenes; moreover, each subject reported a good experience.

Pattanshetti et al. [128] developed an assistive hat consisting of an ultrasonic sensor and an RGB camera for obstacle detection and currency identification, respectively. To achieve outdoor navigation, they leveraged the GPS module of a mobile phone.

Bhatlawande et al. [129] developed an ultrasonic bracelet for the independent mobility of VIPs and blind people. With on-demand hand movements, this bracelet can warn the user of obstacles in the range from 0.2 to 6 m; alerting signals are sent to users via audition and vibration.

Rangarajan and Benslija [130] reported a voice-recognition robotic dog that can guide VIPs and blind people to their destination while avoiding obstacles and traffic; this robotic dog has been successfully tested on flat ground and slopes. Lin et al. [131] directly used the built-in RGB camera of a smartphone to detect and recognize obstacles. However, the obstacle recognition accuracy in their study was only 60%, which is insufficient for VIPs and blind people to avoid obstacles in the real world.

Lee et al. [132] put an ultrasonic sensor array, a GPS receiver, an RGB camera and a magnetic compass sensor on a jacket to help VIPs and blind people travel outdoors. This assistive jacket was tested with various device configurations in different environments, and the results demonstrated that the sensor and receiver network has the potential to guarantee safe outdoor navigation.

Kim and Song [133] extended the functionality of a classic wheelchair by adding multiple ultrasonic sensors so that the wheelchair can execute efficient obstacle searching. Excellent performance was observed when the updated wheelchair was tested at different moving speeds.

An assistive flashlight was designed by Fang et al., who used an RGB camera and structured light generated by a laser array to detect obstacles [134]. A laser with a high refresh rate was used to achieve a visual bifurcation effect, so that people around cannot perceive the laser light but the camera can capture it; the flashlight can therefore operate unobtrusively.

To further improve performance, some investigators combined several modalities of assistive devices to reach specific assistive purposes. Bhatlawande et al. [135] installed an RGB camera and an ultrasonic sensor on a belt and a bracelet, respectively, for assisting blind people in walking. In an evaluation experiment with 15 blind people, this dual-mode assistive device exhibited excellent performance: 93.33% of participants expressed satisfaction, 86.66% appreciated its operational convenience and 80% appreciated the comfort of the system. Sivagami et al. [136] also developed a dual-mode assistive device comprising two modalities, viz. glasses and a belt, for VIPs and blind people traveling under unknown circumstances. Altaha and Rhee [137] proposed three different modalities, viz. jacket, glove and cane, for obstacle detection. They arranged three ultrasonic sensors on the front, left and right sides, respectively, allowing the devices not only to detect the presence of nearby objects but also to measure their distance from the user. We suggest that these three assistive devices could in future be used in combination to increase the detection range and distance.
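A schematic sketch of the direction-resolved warnings such a front/left/right arrangement enables (our illustration; the 1 m limit and the interface are invented):

    def direction_warnings(readings, limit_m=1.0):
        """One warning per direction whose obstacle lies inside the limit,
        nearest first, so the most urgent hazard is announced first."""
        hazards = [(d, side) for side, d in readings.items() if d < limit_m]
        return [side + ": obstacle at " + format(d, ".1f") + " m"
                for d, side in sorted(hazards)]

    print(direction_warnings({"front": 0.4, "left": 2.5, "right": 0.9}))
    # ['front: obstacle at 0.4 m', 'right: obstacle at 0.9 m']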
6. Conclusion and Prospects

Although numerous assistive devices are available, they are not yet effectively adopted by VIPs and blind people. One reason is that these assistive devices can only act in a restricted spatial range due to their limited sensors and feedback modes. The other reason is that the performance of these assistive devices is not effectively validated: as shown in the aforementioned tables, in many cases only blindfolded sighted subjects were invited to validation experiments, yet the cognitive strategies observed in VIPs and blind people differ significantly from those of blindfolded sighted subjects.

To conclude this survey, we discuss three prospects for assistive devices: (1) increase the diversity of input and output information to guarantee the reliability of assistive devices, (2) develop assistive devices based on the perception mechanisms and behaviour patterns of VIPs and blind people and (3) design more reliable experiments to validate the feasibility of assistive devices.

The diversity of feedback can increase the reliability of final assistive devices. Multimodal feedback, including audition, thermal feedback and vibration, has been embedded into a virtual reality system, allowing VIPs and blind people to explore and navigate virtual environments [30]. Simultaneously, the use of a sensor fusion framework in an assistive device allows more important information about the surrounding environment to be obtained. Rizzo et al. [142] found that the depth information extracted from a stereoscopic camera system could miss specific potential collision hazards, and that the addition of infrared sensors could offer a reliable distance measurement to remove this inconsistency of depth inferred from stereo images. Hence, for a specific task, if the sensors in use give inconsistent measurements, an alternative sensing modality can be chosen to remedy the inconsistency.
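Such a cross-check can be sketched simply (our illustration; the tolerance is arbitrary):

    from typing import Optional

    def fused_distance(stereo_m: Optional[float], infrared_m: float,
                       tolerance_m: float = 0.3) -> float:
        """Prefer stereo depth, but fall back to the infrared range whenever
        stereo is missing or the two sensors disagree beyond the tolerance."""
        if stereo_m is None or abs(stereo_m - infrared_m) > tolerance_m:
            return infrared_m
        return (stereo_m + infrared_m) / 2.0

    print(fused_distance(2.0, 2.1))    # sensors agree: 2.05
    print(fused_distance(None, 1.2))   # stereo dropout: 1.2
    print(fused_distance(4.0, 1.5))    # disagreement: trust infrared, 1.5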
Study of changes in the connectivity of the functional areas of the human brain can help us understand the change in perception mechanisms of VIPs and blind people [143]. Because congenitally blind people rely more on auditory or tactile information, the connectivity of their multisensory brain areas is more complicated [144]. Therefore, the introduction of brain imaging is essential for the design of assistive devices. Fortunately, some reviews available in a recent special issue of Neuroscience and Biobehavioral Reviews cover the spectrum of SSDs and their relevance for understanding the human brain (http://www.sciencedirect.com/science/journal/01497634/41). In addition, better assistive devices can be developed according to ideas from bionics [145].

Currently, the performance of assistive devices is rarely or inadequately validated by VIPs and blind individuals. As the cognitive strategies of VIPs and sighted people are significantly different, there is no guarantee that performance validated by sighted blindfolded people represents that experienced by VIPs and blind people [69]. It is therefore necessary to invite numerous VIPs and blind people from different blind associations to test the performance of a developed assistive device. Furthermore, real-world scenarios are far more complicated, and testing environments should cover the possible application scenarios as fully as possible.

Acknowledgement

This work was sponsored by the Shanghai Sailing Program (No. 19YF1414100), the National Natural Science Foundation of China (No. 61831015, No. 61901172), the STCSM (No. 18DZ2270700), and the China Postdoctoral Science Foundation funded project (No. 2016M600315). The authors would also like to acknowledge Ms. Huijing Huang, Ms. Shuping Li, and Mr. Joel Disu for providing assistance with the English language revision.

References

[1] World Health Organization, Visual impairment and blindness (2017). Available from: http://www.who.int/mediacentre/factsheets/fs282/en/.
[2] M. Gori, G. Cappagli, A. Tonelli, G. Baud-Bovy, and S. Finocchietti, Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children, Neuroscience & Biobehavioral Reviews, 69(Supplement C), 2016, 79-88.
[3] T. Nakamura, Quantitative analysis of gait in the visually impaired, Disability & Rehabilitation, 19(5), 1997, 194-197.
[4] A. Bhowmick and S.M. Hazarika, An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends, Journal on Multimodal User Interfaces, 11(2), 2017, 149-172.
[5] M.C. Domingo, An overview of the Internet of Things for people with disabilities, Journal of Network and Computer Applications, 35(2), 2012, 584-596.
[6] D. Dakopoulos and N.G. Bourbakis, Wearable obstacle avoidance electronic travel aids for blind: A survey, IEEE Transactions on Systems Man & Cybernetics Part C, 40(1), 2009, 25-35.
[7] J.M. Batterman, V.F. Martin, D. Yeung, and B.N. Walker, Connected cane: Tactile button input for controlling gestures of iOS voiceover embedded in a white cane, Assistive Technology, 30(2), 2018, 91-99.
[8] J.R. Terven, J. Salas, and B. Raducanu, New opportunities for computer vision-based assistive technology systems for the visually impaired, Computer, 47(4), 2014, 52-58.
[9] R. Velázquez, Wearable assistive devices for the blind, Lecture Notes in Electrical Engineering, 75, 2016, 331-349.
[10] W. Elmannai and K. Elleithy, Sensor-based assistive devices for visually-impaired people: current status, challenges, and future directions, Sensors, 17(3), 2017, 565.
[11] P.M. Lewis, L.N. Ayton, R.H. Guymer, et al., Advances in implantable bionic devices for blindness: A review, ANZ Journal of Surgery, 86(9), 2016, 654-659.
[12] L. Renier and A.G.D. Volder, Sensory substitution devices (New York, USA: Oxford Handbooks, 2013).
[13] G. Motta, T. Ma, K. Liu, et al., Overview of smart white canes: connected smart cane from front end to back end, in R. Velazquez (ed.), Mobility of visually impaired people (Cham: Springer, 2018), 469-535.
[14] W. Zhang, Y. Lin, and N. Sinha, On the function-behavior-structure model for design, Proceedings of the Canadian Engineering Education Association, 2005, 1-8.
[15] W. Zhang and J. Wang, Design theory and methodology for enterprise systems, Enterprise Information Systems, 10, 2016, 245-248.
[16] Z.M. Zhang, Q. An, J.W. Li, and W.J. Zhang, Piezoelectric friction-inertia actuator - A critical review and future perspective, The International Journal of Advanced Manufacturing Technology, 62(5-8), 2012, 669-685.
[17] Y. Lin, Towards intelligent human-machine interactions: Human assistance systems (HAS), ASME Magazine Special Issue on Human-Machine Interactions, 139(06), 2017, 4-8.
[18] D.I. Anderson, J.J. Campos, D.C. Witherington, et al., The role of locomotion in psychological development, Frontiers in Psychology, 4(2), 2013, 1-7.
[19] A. Mihailovic, B.K. Swenor, D.S. Friedman, S.K. West, L.N. Gitlin, and P.Y. Ramulu, Gait implications of visual field damage from glaucoma, Translational Vision Science & Technology, 6(3), 2017, 23.
[20] K.A. Turano, D.R. Geruschat, F.H. Baker, J.W. Stahl, and M.D. Shapiro, Direction of gaze while walking a simple route: persons with normal vision and persons with retinitis pigmentosa, Optometry & Vision Science, 78(9), 2001, 667-675.
[22] noted that thecommon classification of ‘early’ blind may be a misnomeras even a year or two of visual experience can lead to braindevelopment akin to a late blind or indeed a sighted person.The late blind most often resembles the sighted in theirsensory and cognitive profile. For congenital blindness,they indeed have superior auditory memory abilities [23].The major global causes of vision impairments andblindness are uncorrected refractive errors, cataract, AMDand glaucoma. According to the International Classifica-tion of Diseases, the vision function is classified into fourbroad categories: normal vision, moderate vision impair-ments, severe vision impairments and blindness [1]. On thebasis of overall visual ability, the grades of vision functionconsist of five levels by the WTO in 1973, which are lowvision level 1, low vision level 2, low vision level 3, blindlevel 1 and blind level 2.The use of classification based on the type of vision im-pairments can provide a better guidance for the design anddevelopment of assistive devices. By counselling clinicalophthalmologists and examining literature, we summarizethe categorization in line with the type of vision impair-ments as follows: (1) decrease in the sensitivity of the light,(2) blurred vision (caused by retinal anomaly or refractiveerror), (3) vision loss and (4) total blindness. One eyedisease may lead to multiple kinds of vision impairmentsat the same time, so we generate the simulated images inthe computer and summarize them in Fig. 2. As shown inthe first line of Fig. 2, people with glaucoma in the early582stage lose their peripheral visual field, and then a tubularvision slowly appears as the disease deteriorates. The sec-ond line of Fig. 2 demonstrates that visual impairment ofAMD is mainly manifested as central vision loss. RP isan incurable eye disease, and the eyesight of persons withRP will get worse as the disease progresses (the third lineof Fig. 2). Uncorrected refractive errors can be correctedby the use of a diopter lens (the last line of Fig. 2). Thereason for the exclusion of cataract in Fig. 2 is that thecataract can be treated. These simulated images (Fig. 2)can provide the guidance for the design and developmentof assistive devices.4. Substitutive Sense for Visual PerceptionVision impairments will alter the perception mechanism ofVIP and blind people. Due to the total or partial absenceof visual perception function, VIPs and blind people willbe more dependent on other senses such as somatosenseand audition.Based on the literature survey, we summarize a treediagram to demonstrate the existing substitutive sensesfor visual perception (Fig. 3). VIPs can see the out-side world by means of the vision-enhancement techniques.One research group in the Harvard University focusedon vision-enhancement techniques and used them to ex-pand the visual field of VIPs [24], [25]. Hu et al. [26]attempted to develop a see-through glass associated withthreshold-based enhancement algorithm to assist the peo-ple with night blindness. The visual prosthesis, one of thesubstitutive senses, directly displays the feedback informa-tion on the visual cortex in the human brain by the useof phosphene phenomenon [27], [28]. We will not discussthis feedback form as it involves some issues of medicalresearches, which is beyond the purpose of this survey.Readers can refer to the literature [29] for more informationwith respect to visual prosthesis. 
Thermal feedback, oneof the somatosenses, can take advantage of temperaturefluctuation on the human body surface to remind usersof changes in the external environment. L´ecuyer [30] de-signed a virtual reality system for VIPs to explore virtualenvironments. They used thermal feedback generated by12 infrared lamps to simulate the virtual sun. Thermalfeedback is highly influenced by ambient temperature, and,therefore, it is difficult to be perceived by users in somecircumstances. Olfactory and gustation are two rare chem-ical feedback approaches, and they are seldom applied inassistive devices. VIPs and blind people mainly adopt theaudition and tactus to take in information from the outsideworld and then process it to shape a right worldview thathelps them understand life and make wise decisions. Themajority of assistive devices use the audition and tactusamong all feedback methods. In the following sections, wewill emphatically review these feedback ways.Spatial reference frames are of importance due to theirrelevance to navigation and mental mapping for VIPs andblind people [31]. We can refer to Hall’s extra-personalspace definition [32] to select the suitable substitutive sensefor visual perception. Figure 4 illustrates the sensing rangesof these substitutive senses at different spatial scales [33].Figure 3. Tree diagram of substitutive sense for visualperception.Figure 4. Hall’s extra-personal space definition with minorrevision for personal space.Furthermore, Tversky [34] concluded more complex andefficient spatial thinking models. In one of her reports, shestated that there was a mental space except for the externalspace. Mental space is constructed from what we perceive,aided by what we think and infer, in the service of actionin the world or imagined in the mind [35]. Spatial thinkingin mental space can help VIPs and blind people createrepresentations of a real-world space. More detailed workconducted by Pasqualotto et al. [36] showed differencesbetween congenitally blind and late-blind people in theirspatial reference frame preferences. Moreover, the sameresearch group gave visual-like exposure to those whocannot see a room and thus provided allocentric referenceframe information using auditory devices [37]. Hence, theunderstanding of mental space may be beneficial to thedesign of assistance devices.4.1 AuditionThe term audition is used to conclude all auditory percep-tion means in assistive devices. This summative term maybe incomprehensive, but it can be applied for referenceby relevant researchers and interested readers. The soundprocessing speed of VIPs and blind people is faster thanthat of sighted people [38]. Moreover, auditory memory583and retrieval abilities of congenitally blind people aresuperior to those of sighted people [39]. Survey resultsof the questionnaire showed that blind people of Iran aremore inclined to use audio media rather than other me-dia to access or utilize information [40]. Similar researchwas conducted by Kolarik et al. [41]. They found thatVIPs outperformed sighted people in three cases: (1) whenfollowing the conversation switched from one person to an-other, (2) when locating the multiple speakers and (3) whenseparating the speech from music.Findings of the previous literature evidence are inaccordance with the perceptual enhancement hypothesis,that is VIPs and blind people will attempt to develop theability of other senses to compensate for visual impairment [42], [43]. 
A recent survey concluded that complete blindpeople at an early stage show the superior performancein spatial hearing in the horizontal plane, but the perfor-mance in the vertical plane is unsatisfactory [44]. Thespatial sound resolution ability of blind people is relativelylow when they use the allocentric frame of reference. Be-sides, compared to early-onset blind individuals, late-onsetblind people perform better in terms of spatial hearing.This indicates that the early visual experience is of greatsignificance for the development of spatial hearing. Al-though VIPs and blind individuals exhibit better auditoryprocessing ability, their brain region related to languageprocessing is degraded [45]. This may be attributed tothe fact that they seldom participate in social activities.The improvement of audio ability of VIPs and blind peo-ple is targeted, and it requires lengthy time to learn howto perceive the outside world using audition instead ofvision.In Fig. 3, we classify the auditory feedback into twocategories viz. speech and nonspeech. The principle ofspeech feedback is to convert the ambient information intolinguistic information [46], and subsequently, VIPs andblind people receive speech instructions via the earphoneor speaker. Speech feedback is simple and intuitive, andthe user can understand it without any learning process.Nevertheless, in some situations, speech feedback takesa longer time to describe the surrounding circumstances.There is no doubt that the user will feel annoyed andirritated [47]. Furthermore, delays in receiving informationcan even cause some irreversible accidents. The nonspeechfeedback alerts the user using the music, environmentalsounds or some artificial sounds [48]. In recent years,investigators have designed a variety of nonspeech cuessuch as spindex [49], spearcons [50] and audemes [51] tomeet different application requirements. Although there isa learning process for nonspeech interface, this instructioncan quickly convey information to users, which can addressdeficiencies of speech feedback. Researches carried outby Hussain et al. have validated the previous statement [52]–[54].4.2 TactusTactus or haptic perception [55], one of the somatosenses,can be further separated into three parts, namely touchfeeling, vibration and electric stimulus feeling (Fig. 3).It is difficult to distinguish concepts of touch feelingand vibration. In our opinion, the touch feeling means thefeeling given by the texture of an object in contact whenwe stroke or touch this object. The vibration means thefeeling caused by external forces. Because the stimulationamount of touch feeling is less than that of vibration, fewinvestigators have applied the touch feeling as feedback inassistive devices. Our research group used an electricalcompass and a servo-driven pointer to develop an indoorlocalization system [56]. This system can give the directioninformation to the user with touch stimulation. In terms ofelectric stimulus feeling, it is arisen by electrical stimulationand can be used for a visual prosthesis. Like audition,tactus is also commonly used in a feedback interface forassistive devices.Heller et al. [57] systematically investigated the hapticpattern perceived by blind individuals. They stated thatthe tactus is a crucial sense, which can be used to substitutefor vision. Results of the experiments conducted by Occelliet al. [58] show that people with early-onset blindnessreflect greater haptic sensitivity than the sighted. 
Theyalso validated the hypothesis that people losing visionearly can recognize objects by their haptic perceptionregardless of spatial transformations. Picard et al. [59]invited children, adolescents and young adults to comparetheir haptic memory capacities. The result demonstratedthat the haptic memory ability is an age-related skill.Carpio et al. [60] found that there is no significantdifference between blind and sighted school students incontent acquisition or aesthetic appreciation of images.This indicates that the blind people can experience theworld through their haptic perception and eventually reachthe same cognitive level of sighted people. The researchfindings of Puspitawati et al. [61] showed that, comparedto VIPs with slight visual impairment, the people withtotal blindness have the faster speed of processing hapticinformation. This may further illustrate that, for VIPs andblind people, dependence on tactile perception increaseswith the severity of visual impairment. Therefore, afeedback module of an assistive device can be designed tomeet the needs of the people with varying degrees of visualimpairment.5. Assistive Devices for Blind and VisuallyImpaired PersonsAssistive technology, one of the information accessibilitytechnologies, has attracted considerable attention world-wide owing to its remarkable social significance [4], [62].Over the past decade, a variety of assistive devices havebeen developed for functional assistances of VIPs and blindpeople. We summarize these devices in the following sec-tions. In Tables 1 and 2, although several assistancedevices offer the same functionality, there exist differ-ences in types of sensors used, feedback modes, hardwareframeworks and data processing algorithms. Validationexperiments are important for assistance devices, and,therefore, investigators design different experiments, aim-ing to verify their feasibilities and reliabilities of completingthe specific task.584Table 1Summary of Assistive Canes for VIPs and Blind People about Sensors Used and Feedback Producedas well as Validation MethodsStudy Sensor Feedback Functionality ValidationGupta et al.[76]Ultrasonic sensor;GPS receiverAudition Navigation Tested in computerFan et al. [86] Ultrasonic sensor;GPS receiver;RGB-D cameraAudition;vibrationNavigation Tested inoutdoor open areaSilva andDias [90]Ultrasonic sensor;inertia measurementunitAudition Obstacle detection Tested by obstacles inthe pathKumar et al.[75]Ultrasonic sensor Audition Obstacle and potholesdetectionTested by 10volunteersMajeed andBaadel [73]RGB camera with270◦lensAudition Facial recognition Tested in databaseSatpute et al.[91]Ultrasonic sensor;GPS receiverAudition;vibrationNavigation; obstacledetectionNoneRizzo et al.[92]Adaptive mobilitydevicesVibration Drop-off and obstacledetectionTested by 6 adultsShah et al.[77]Ultrasonic sensor Audition Navigation; obstacleand potholesdetectionNoneSharma et al.[78]Ultrasonic sensor Audition;vibrationStatic and dynamicobstacles detectionTested in real-timeenvironmentKrishnanet al. [81]Ultrasonic sensor;GPS receiver;RGB cameraAudition Navigation; obstacledetectionTested in databaseBolgiano andMeeks [70]Laser Audition;vibrationObstacle detection NoneSugimotoet al. [93]Ultrasonic sensor;GPS receiverVibration Navigation; obstacledetectionTested in presetscenariosWankhadeet al. [94]Infrared sensor Audition;vibrationObstacle detection NoneKassim et al.[88]RFID network;digital compassAudition Indoor navigation Tested by the mobilerobot and humansubjectVera et al. 
[71] RGB camera; laserpointerVibration Obstacle detection Tested by 16 sightedpersonsAlwis andSamarawickrama[95]Ultrasonic sensor Audition;vibrationObstacle detection NonePisa et al. [89] FMCW radar None Obstacle detection Tested by obstaclewith differentdistancesBuchs et al. [80] Infrared sensors Audition;vibrationWaist-up obstaclesdetectionTested by the trainedblind participants(continued )585Table 1ContinuedPinto et al. [96] Ultrasonic sensor;GPS receiverAudition;vibrationObstacle detection Tested by obstaclewith differentdistancesYe et al. [74] 3D camera Audition Obstacle detection;pose estimationValidated by datafrom various indoorscenesDang et al. [72] Linear laser;RGB camera;inertial measure-mentunitAudition Obstacle detectionand recognitionValidated by theobstacles with variousheights, types,distancesNiitsu et al. [82] Ultrasonic sensor;infrared sensor;compass; tri-axialaccelerometerAudition(boneconduction)Obstacle detection Examined in 20 timesby 1 userTakizawa et al.[87]Kinect sensor Vibration Object recognition Tested by 2blindfolded personsJeong and Yu [97] Ultrasonic sensor Vibration Obstacle detection Tested by 4 blindfolded and 10 blindpersonsBay AdvancedTechnologies Ltd.[79]Ultrasonic sensor Audition;vibrationObstacle detection NoneScherlen et al. [83] Infrared sensor;brilliance;water sensorsNone Object recognition NoneKim et al. [84] Ultrasonic sensor;colour sensor;Cds photo resistorAudition;vibrationObstacle detection Validate the usabilityby 7 types of criteriaShim and Yoon[85]Ultrasonic sensor;infrared sensor;contact sensor(two antennas)Audition Obstacle detection NoneNearly all assistive devices listed later belong to SSDs.SSDs have been around for 40 years. The vibrotactilesensors were usually placed on the back to develop as-sistive device [63]. Subsequently, some investigators putan artificial sensor on the tongue [64]. The latter is theantecedent to the commercial BrainPort that is cited inTable 3. More recent, and highly promising, is the audi-tory device The vOICe [65]. It has been studied exten-sively for localization [66] and object identification [67].There have been numerous neuroscience studies showingthat The vOICe activates visual cortex in the blind asthey perform tasks with images – suggesting that one cantruly ‘see’ with the sound output of the device [68]. Thesedevices in early stages have been widely validated in vari-ous tasks, settings and user groups. Thus the success anduse are more easily ascertained than many devices cited inTables 1 and 2.5.1 Vision Substitution by Assistive CanesThe use of assistive cane is critical in reducing the risk ofcollision, which can help VIPs and blind people to walkmore confidently. Table 1 summarizes some assistive canesdesigned for VIPs and blind people.In general, an assistive cane is developed by mountingsensing and feedback modules on a classic white cane. Sub-sequently, the assistive cane acquires information with re-spect to surroundings and transmits raw or (pre-)processeddata to users via predefined feedback approach [69].Bolgiano and Meeks [70] first put a laser into a caneto detect obstacles in the traveling path, and audio andvibratory signals were available when VIPs and blindpeople approach the obstacle.Vera et al. [71] used an RGB camera and a laserpointer in combination to develop a virtual white cane for586Table 2Summary of Assistive Glasses for VIPs and Blind People about Sensors Used and Feedback Producedas well as Validation MethodsStudy Sensor Feedback Functionality ValidationSadi et al. 
[98] Ultrasonic sensor Audition Obstacle detection Tested in lab conditionsKassim et al. [99] Ultrasonic sensor Audition;vibrationObstacle detection Validated by blind spotevaluation experimentYi and Tian [100] RGB camera Audition Text reading fromnatural sceneTested by 10 blind personsEverding et al.[102]RGB camera Audition Obstacle detection Tested by 2 experiments(11 and 5 persons,respectively)Wang et al. [103] RGB camera Audition Navigation; wayfindingEvaluated in databasesHassan andTang [101]RGB camera Audition Text recognition Tested by severalsample textsPundlik et al.[104]Google Glass Vision Smartphone screenmagnificationEvaluated by 8 sighted and4 visually impaired personsNeto et al. [105] RGB-D camera 3D audition Face recognition Validated in databases andby both blindfolded andvisually impaired usersStoll et al. [106] RGB-D camera Audition Indoor navigation Validated by 2 performancemetrics i.e. travel timeand errorHicks et al. [107] RGB-D camera Vision Scene recognitionand analysisTested by 4 sighted and12 visually impairedparticipantsWu et al. [108] Pico projector;optical lensesVision Vision enhancement In simulated stageLan et al. [112] RGB camera Audition Public signrecognitionTested by some commonpublic signsHu et al. [26] RGB camera Vision Night visionenhancementEvaluated on custom-builtdatabasesVIPs and blind people. In their device, the RGB camerain smartphone captures the laser beam reflection, and thedistance from the cane to the obstacle is calculated usingactive triangulation. Through the personalized vibrationgenerated by smartphone, the user will be warned if pos-sible obstacles are located in traveling path. Furthermore,the magnitude of vibration is applied for the quantizationof distance. Results of validated experiments demonstratedthat the travel time of virtual white cane is less than thatof the traditional white cane. The assistive cane equippedwith the point laser may fail to detect the potholes and theobstacles in small and tiny size.Dang et al. [72] proposed an assistive cane using alinear laser, an RGB camera and an inertial measurementunit as sensors to classify the type of obstacle and estimatethe distance from the obstacle to the user. The inertialmeasurement unit is an electronic device that measures auser’s angular rate to determine spatial coordinate frames.The inertial sensor tracks the position of laser stripe in thenavigation coordinate frame, and the subsequent analysisof the laser point coordinates in regard to the originallaser stripe can divide obstacles into walls, stairs andblocks. The information gathered is transmitted to theuser via a simple nonspeech feedback. The performanceof this assistive cane is easily influenced by the strongillumination, thereby limiting the application scope of thisassistive cane.Due to the limited detecting or scanning range whenusing the laser as a sensor, we can only detect objectslocated in the region where the laser illuminates. To over-come this shortcoming, we need to leverage spatial infor-mation recorded by RGB camera. Majeed and Baadel[73] integrated an RGB camera with 270◦lens into anassistive cane, thus allowing us to capture much of envi-ronmental information. The proposed smart cane can helpVIPs and blind people to dodge obstacles placed at the587Table 3Summary of Some Assistive Glasses Which Are Available on the MarketName Company LaunchdateFunctionality Brief descriptionGoogleGlass [113]Google Inc. 
2012 Direction recognition It is equipped with the RGB camera andgyroscope and has all the functions ofmobile phone. As feedback, it can transmit theinformation to the user via the bone-conductionearphone and display screen. Google Glass is notdesigned for visual assistance of the VIPs andblind people, but we can do secondarydevelopment based on iteSight 3[114]eSight Co. 2017 No specific functiondescriptionIt is mainly designed for the individuals who arenot completely blind. A high speed and qualitycamera is loaded in this glass to capture whatthe user is browsing. The obtained videos arefirst subjected to image-enhancement processingand then shown in two OLED screens. From thedisplay way, eSight 3 is something like the virtualreality display deviceOrCam[115]OrCamTechnologiesLtd.2015 Text reading; facerecognition; product andmoney identificationOrCam mainly consists of the RGB camera andportable computer. It can be fixed on any eyeglassframe and informs the user outside informationvia the audio signalsEnchroma[116]Enchroma,Inc.2013 Colour contrastenhancementEnchroma is designed for the colour blindness.It does not leverage any digital processingtechnology. Enchroma alters the original wavesusing the specially designed lenses to help thepersons of colour vision deficiency see thereal colourIntoer [117] HangzhouKR-VISIONTechnologyCo., Ltd.2017 Obstacle detection;scene, money, puddle,staircase, traffic signal andzebra crossing recognition;navigationIt uses the infrared binocular camera to recordthe environmental information illuminated by thenatural and structural light. It produces thespecial encoded stereo to inform the user viathe bone-conduction earphoneBrainPort rV100 [118]Wicab, Inc. 2015 Obstacle detection;scene recognitionBrainPort rV100 is mainly composed of the RGBcamera mounted on a pair of glasses, hand-heldcontroller and tongue array containing 400electrodes. The outside information is convertedinto electrical signals that are sent to the tonguearray on the tongue of the user. Before using thisdevice, there is a training phasemaximum distance of 10 m, and moreover, it can be utilizedto recognize different persons’ faces.Ye et al. [74] used a three-dimensional (3D) camera asa sensor to develop an assistive cane, aiming to estimatingpose and recognizing obstacle. The type of 3D cameraused in their study is SwissRanger SR4000, which is asmall-sized (65 × 65 × 68 mm3) 3D time-of-flight camera.The speech feedback module serves as the communicationmedia between human and cane. This assistive canewas validated by data collected from a variety of indoorscenes. Results demonstrated that the proposed canecould estimate pose and recognize objects with satisfactoryperformance. In their article, developers stated that theywere working with orientation and mobility specialists aswell as blind trainees of the World Service for the Blind inArkansas to refine functions of their assistive cane.Apart from the laser and RGB camera, the ultrasonicsensor is one of the widely used sensors in assistive de-vice owing to its high-price/performance ratio. The ultra-sonic sensor emits ultrasonic waves in the air, and thenthe reflected sound is received by the sensor. This sen-sor is always applied for detecting objects and measuringdistance. Kumar et al. [75] developed an ultrasonic canefor aiding the blind people to navigate. 
Kumar et al. [75] developed an ultrasonic cane for aiding blind people to navigate. This ultrasonic cane is equipped with three pairs of ultrasonic transceivers, enabling blind people to perceive aerial and ground obstacles as well as potholes in front of them via audio warnings. The maximum working range of this ultrasonic cane is 1.5 m, which is much less than that of the cane developed by Majeed and Baadel.

Gupta et al. [76] combined an ultrasonic sensor and a GPS receiver in a classic cane. The addition of the GPS module allows VIPs and blind people to travel outdoors using the satellite network. Audio signals generated with Pygame, a Python module for creating games and multimedia applications, were used as feedback to alert users. The distance range measured by the ultrasonic sensor attached to the cane is from 0.05 to 2 m, slightly larger than that of the device developed by Kumar et al.

Several other investigators reported ultrasonic-sensor-based assistive canes. Shah et al. [77] arranged four ultrasonic sensors in a stick: three are applied for obstacle detection and the remaining one for pothole detection. Their experimental results showed that the maximum detection distances of the ultrasonic stick were 1.45, 0.6 and 0.82 m when obstacles were located to the front, left-front and right-front, respectively. A similar smart stick was reported by Sharma et al. [78], who stated that it was able to perceive obstacles of any height in front of or slightly sideways to users. Bay Advanced Technologies Ltd. [79] developed an ultrasonic-sensor-based assistive cane named 'K' Sonar, which is available on the market.

The infrared sensor is also a very popular choice for the development of smart canes. It is an electronic sensor that detects a selected wavelength in the infrared spectrum: infrared light radiating from objects in its field of view is used to detect objects and measure distance. Buchs et al. [80] mounted two infrared sensors on a white cane, one parallel to the horizontal plane and the other at approximately 42° with respect to it. This arrangement allows the smart cane to detect waist-up obstacles; however, the detection range of this cane is only 1.5 m. The addition of an RGB camera can increase the detection range of a smart cane: Krishnan et al. [81] applied an ultrasonic sensor and an RGB camera in the sensing module of their smart cane, and testing demonstrated a maximum detection range of 3 m.

The infrared sensor is usually used in conjunction with other types of sensors to form a multi-mode sensing array. Niitsu et al. [82] put four sensors, viz. an ultrasonic sensor, an infrared sensor, a compass and a tri-axial accelerometer, together on a classic cane. In this smart cane, a bone-conduction headphone was used for human-cane interaction so that feedback could be passed to users unobtrusively. This assistive cane based on a multi-mode sensing array achieves a detection accuracy of 100% for wide obstacles and for crossing and approaching persons, and 95% for thin obstacles. It should be noted that bone conduction may interfere with several brain functions. Scherlen et al. [83] leveraged an infrared sensor, a brilliance sensor and a water sensor in combination to develop a smart cane named 'RecognizeCane', which is capable of recognizing objects and their constituent materials.
At present, four materials, namely metal (steel), glass, cardboard and plastic, can be successfully recognized. The 'RecognizeCane' can also distinguish zebra crossings and water puddles using the brilliance and water sensors, respectively. A brilliance sensor was likewise adopted by Kim et al. [84] in their smart cane to measure environmental brightness. To detect obstacles in front accurately, two antennas serving as contact sensors, together with an ultrasonic sensor and an infrared sensor, were attached to the sensing unit of a smart cane by Shim and Yoon [85]. With the aid of the contact sensors, this smart cane can effectively complement the ultrasonic and infrared sensors in detecting short-range obstacles.

Fan et al. [86] applied an RGB-D camera and an ultrasonic sensor to acquire dynamic visual information about the environment and to detect surrounding obstacles, respectively. The RGB-D camera is able to obtain synchronized videos of both colour and depth. To implement outdoor navigation, they added a GPS module to the sensing unit. Validation experiments conducted in an open area demonstrated that the assistive cane fitted with this sensing unit can help VIPs and blind people travel outdoors safely; however, the cane cannot process the image data captured by the RGB-D camera in real time. Takizawa et al. [87] also used an RGB-D camera in their sensing unit and called the resulting device the Kinect cane. By the use of the RGB-D camera, the Kinect cane can recognize different types of indoor obstacles, including chairs, staircases and floors. Two blindfolded persons were invited to test the performance of the proposed cane, and the results showed that the average search time with the Kinect cane was significantly shorter than that with a classic white cane.

Some other sensors are also used in the sensing units of assistive canes. Kassim et al. [88] mounted radio frequency identification (RFID) transponders on the floor and installed an RFID reader at the end of a cane. RFID is a technology that records the presence of an object using radio signals. When the user walks, the RFID reader reads the RFID tags arranged on the floor in advance, and the addresses of these tags are sent for map processing. Subsequently, the auditory interface emits voice commands such as '90° turn left' after digital compass calibration. Results of a small-sample experiment with two human subjects showed that the RFID-based smart cane has the potential to help VIPs and blind people walk independently in indoor environments.
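A minimal sketch of such tag-based guidance is given below. The tag map, the heading source and the command thresholds are hypothetical stand-ins for illustration, not details of Kassim et al.'s system.

    # Tag-based guidance sketch: a surveyed map from tag IDs to floor
    # coordinates stands in for the map-processing step; values are hypothetical.
    import math

    TAG_POSITIONS = {"tag-001": (0.0, 0.0), "tag-002": (0.0, 2.0)}

    def bearing_deg(src, dst):
        """Compass bearing from src to dst in degrees (0 = +y axis)."""
        dx, dy = dst[0] - src[0], dst[1] - src[1]
        return math.degrees(math.atan2(dx, dy)) % 360.0

    def voice_command(tag_id, goal_xy, heading_deg):
        """Turn instruction for a user standing on tag_id, facing heading_deg."""
        turn = (bearing_deg(TAG_POSITIONS[tag_id], goal_xy)
                - heading_deg + 540.0) % 360.0 - 180.0   # normalize to [-180, 180)
        if abs(turn) < 15.0:
            return "go straight"
        return f"turn {'right' if turn > 0 else 'left'} {abs(round(turn))} degrees"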
Frequency-modulated continuous wave (FMCW) radar, a short-range measuring radar capable of determining the distance of objects in its field of view, has also been explored: Pisa et al. [89] housed FMCW radars and antennas in a classic white cane for obstacle detection, and the results showed that this cane could receive reflections from a metallic panel up to 5 m away.

The assistive cane is a portable assistive device: compact and lightweight, it is easily carried by users. Despite these advantages, the assistive cane requires constant interaction with its user.

5.2 Vision Substitution by Assistive Glasses

The assistive glass is one of the wearable assistive devices. Table 2 presents some assistive glasses designed for VIPs and blind people. An assistive glass generally fixes sensing and feedback modules on a classic pair of glasses; unlike the assistive cane, it can also use visual signals as feedback for users.

Sadi et al. [98] embedded an ultrasonic sensor in a traditional pair of glasses to develop a smart glass for walking assistance. The sensing region of the attached ultrasonic sensor covers a 3 m distance and a 60° angle. Processed information corresponding to the distance of the obstacle is sent to users via audio signals. Validation experiments carried out in the lab showed that the detection accuracies of the proposed glass were all beyond 93%. Kassim et al. [99] compared the performance of three sensors, namely an ultrasonic sensor, an infrared sensor and a laser range finder, by taking several metrics such as accuracy, size and weight into account; they finally selected ultrasonic sensors for the development of their assistive glass. As feedback, two warning modes, viz. audition and vibration, were designed in their device, and users can switch the warning mode based on their preference or the surrounding environment. Kassim et al. gave an example: when a user comes to a noisy environment such as a bus terminal or market, he or she can use the vibration mode instead of the auditory mode, leaving the sense of hearing free for ambient sounds. A blind spot evaluation experiment demonstrated the effectiveness of the proposed smart glass.

Besides the ultrasonic sensor, the RGB camera is also commonly used in the sensing units of assistive glasses; Table 2 contains four publications that used RGB cameras to obtain outside information. Yi and Tian [100] applied an RGB camera mounted on glasses to assist VIPs in accessing text information in their daily lives. They reported that further study should focus on improving the detection accuracy of scene text hidden in cluttered backgrounds. One possible solution is to explore more effective feature representations to establish more robust models, and subsequently write the obtained model into the processing unit of the smart glass. Similar research was conducted by Hassan and Tang [101], whose smart glass is only suitable for recognizing text on hardcopy materials.
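The common pipeline behind such text-reading glasses, namely camera capture, optical character recognition and speech output, can be sketched in a few lines. The example below assumes the pytesseract and pyttsx3 packages (with a local Tesseract installation); it is an illustrative pipeline, not the authors' actual systems.

    # Camera-to-speech text reading sketch; assumes OpenCV, pytesseract and
    # pyttsx3 are installed. Illustrative only.
    import cv2
    import pytesseract
    import pyttsx3

    def read_text_aloud(camera_index: int = 0) -> None:
        cap = cv2.VideoCapture(camera_index)            # glass-mounted RGB camera
        ok, frame = cap.read()
        cap.release()
        if not ok:
            return
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # OCR prefers grey input
        text = pytesseract.image_to_string(gray).strip()
        if text:
            engine = pyttsx3.init()                     # audio feedback channel
            engine.say(text)
            engine.runAndWait()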
Inspired by the principle of human visual perception, Everding et al. [102] deployed two RGB cameras on a classic pair of glasses to imitate the two human retinas. The performance of their smart glass is satisfactory when subjects are static; for moving tests, the performance is still unknown. Wang et al. [103] embedded a saliency map algorithm into an RGB camera-based smart glass for the detection of indoor signs. Experimental results on their databases containing indoor signs and doors showed the usability of their glass. The output information of the four abovementioned publications is all delivered to users in audio form.

Pundlik et al. [104] carried out secondary development on Google Glass to magnify the screen content of a smartphone, thereby helping VIPs easily access information displayed on the screen. They invited eight sighted persons and four VIPs to use calculator and music player apps on a smartphone with the aid of the proposed glass and the phone's built-in screen zoom app. Comparison results showed that the assistive glass based on Google Glass outperformed the built-in screen zoom software in improving the ability of VIPs to read screen content.

As the RGB-D camera can acquire both colour and distance information, it has been widely used in assistive glasses. Neto et al. [105] directly tied a Microsoft Kinect sensor to the user's head, and this assistive device conveyed outside information to the user via 3D audio signals; this hardware architecture is somewhat abrupt. A similar hardware framework was adopted by Stoll et al. [106]. After validation experiments on 21 blindfolded young adults with a 1-week interval, they deemed that this system was promising for indoor use but still inefficient for outdoor scenarios. Hicks et al. [107] improved the hardware architecture and made it more glass-like. They converted the scene data obtained by an RGB-D camera into a depth map in which nearby objects are rendered brighter; the processed depth images were then displayed on two OLED screens. In the validation experiment, the average detection distance for VIPs was approximately 3 m, so further work is needed to increase the detection distance of objects. A possible solution is to change the mechanical architecture of the glasses into a see-through display.
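The "nearer is brighter" rendering described for Hicks et al.'s glasses can be summarized by a simple mapping from depth to intensity. The sketch below assumes a depth image in metres; the 3 m range limit is an illustrative choice, not their calibrated value.

    # Depth-to-brightness sketch: near surfaces bright, far ones dark.
    import numpy as np

    def depth_to_brightness(depth_m: np.ndarray, max_range_m: float = 3.0) -> np.ndarray:
        """Map a depth image to an 8-bit intensity image."""
        d = np.clip(depth_m, 0.0, max_range_m)
        brightness = 1.0 - d / max_range_m   # 1.0 at the camera, 0.0 at range limit
        brightness[depth_m <= 0.0] = 0.0     # invalid depth pixels stay dark
        return (brightness * 255.0).astype(np.uint8)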
Wu et al. [108] designed a compact see-through near-eye display system for persons who are hyperopic. Unlike most assistive devices, this system does not use any digital processing technology. Its main principle is that light emitted by objects at a distance passes through preset aspherical surfaces, so that the user sees a relatively clear image of the object. According to their simulated results, the final image provided to users is nearly identical to the original image, although reduced brightness and distortion in the image corners are also observed. This glass, which can enhance the vision of people with presbyopia, is still in the design phase.

Hu et al. [26] attempted to develop a see-through glass to assist persons who suffer from nyctalopia. They first analysed the vision model of night blindness and then derived the relationship between luminance levels and the RGB grey scale of the image to develop the enhancement algorithm. Experimental results showed that the brightness of a raw dark image could be significantly improved by the proposed algorithm. After spatial distance and camera lens calibrations, the processed image aligns well with the view seen by users.

Apart from the previous assistive glasses, which are still at an engineering or concept stage, several assistive glasses are available on the market; these commercialized glasses for visual assistance are summarized in Table 3. Google Glass is often used for secondary development, and many assistive glasses not listed in our survey are developed based on it [109], [110]. The target users of eSight 3 are VIPs, and therefore its developers place two OLED display screens in front of the user's eyes to play processed videos. The sensors of OrCam and Intoer are an RGB camera and an infrared binocular camera, respectively; both products use audio signals as feedback to inform users. Enchroma is designed for the assistance of colour blindness: like the system of Wu et al. [108], this product achieves its functionality (here, colour contrast enhancement) using specially designed lenses instead of digital processing technologies. The sensing unit of BrainPort V100 is similar to those of the above-mentioned products; the only difference is that it leverages the electric stimulus feeling as feedback. The developers of BrainPort V100 consider the tongue to be extremely sensitive to electric stimulus, and hence they place a tongue array containing 400 electrodes on the user's tongue, which corresponds to a resolution of 20 × 20 pixels. The intensity of stimulation represents the pixel intensity of the image obtained by the RGB camera. In addition, due to the low resolution of the tongue array, the background of the raw image needs to be eliminated [111].
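The image-to-electrode mapping described above amounts to downsampling the camera frame to the array resolution and normalizing pixel intensity to a stimulation level. The sketch below illustrates this under the assumption that background elimination has already been performed upstream.

    # Electrode-pattern sketch: reduce a camera frame to a 20 x 20 grid of
    # stimulation intensities in [0, 1]; background removal is assumed done upstream.
    import cv2
    import numpy as np

    def frame_to_electrode_pattern(frame_bgr: np.ndarray) -> np.ndarray:
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        grid = cv2.resize(gray, (20, 20), interpolation=cv2.INTER_AREA)
        return grid.astype(np.float32) / 255.0   # one intensity per electrode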
5.3 Vision Substitution by Other Forms of Assistive Devices

Table 4 summarizes some assistive devices in forms other than canes and glasses.

Table 4
Summary of Some Assistive Devices with Various Forms

Study | Modality | Sensor | Feedback | Functionality | Validation
Wang et al. [119] | None | RGB-D camera | Audition | Detection of stairs, pedestrian crosswalks and traffic signs | Evaluated on databases
Satue and Miah [120] | None | Ultrasonic sensor | Nerve stimulation; audition; vibration | Obstacle detection | Tested in predefined environments
Sekhar et al. [121] | None | Stereo cameras | Audition | Obstacle detection | Compared with the other systems
Rao et al. [122] | None | Laser device; RGB camera | None | Pothole and uneven surface detection | Validated by the performance metric
Gharani and Karimi [123] | None | RGB camera | None | Context-aware obstacle detection | Compared with two other algorithms using different performance metrics
Pattanshetti et al. [128] | Hat | Ultrasonic sensor; GPS receiver; RGB camera | Audition; vibration | Currency recognition; obstacle detection; navigation | None
Reshma [125] | Belt | Ultrasonic sensor | Audition | Obstacle detection | Tested by 4 blindfolded persons
Wattal et al. [126] | Belt | Ultrasonic sensor | Audition | Obstacle detection | Compared the measured and actual distance and position of obstacles
Mocanu et al. [127] | Belt | Ultrasonic sensor; RGB camera | Audition | Obstacle detection and recognition | Tested by 21 visually impaired subjects
Froneman et al. [138] | Belt | Ultrasonic sensor | Vibration | Obstacle detection | Evaluated by various common static household obstacles
Bhatlawande et al. [129] | Bracelet | Ultrasonic sensor | Audition; vibration | Way-finding; obstacle detection | Tested by 2 blindfolded persons
Rangarajan and Benslija [130] | Robotic dog | Force sensor; RGB camera | Audition | Obstacle detection; word recognition | Tested on flat ground and slope
Lin et al. [131] | Smartphone | RGB camera | Audition | Obstacle detection and recognition | Tested by 4 visually impaired persons
Lee et al. [132] | Jacket | Ultrasonic sensor; GPS receiver; RGB camera; magnetic compass sensor | Audition; vibration | Navigation; obstacle detection | Tested with various device configurations in different environments
Kim and Song [133] | Wheelchair | Ultrasonic sensor | None | Obstacle detection | Tested at different moving speeds
Altaha and Rhee [137] | Cane; jacket; glove | Ultrasonic sensor | 3D audition | Obstacle detection | Tested by the blind person
Mekhalfi et al. [139] | Jacket | Laser sensor; RGB camera | Audition | Indoor scene description | Tested in databases
Bhatlawande et al. [135] | Bracelet; belt | Ultrasonic sensor; RGB camera | Audition; vibration | Obstacle detection | Tested by 15 trained blind persons
Sivagami et al. [136] | Glasses; belt | Ultrasonic sensor | Audition | Obstacle detection | Tested by blindfolded persons
Wu et al. [140] | Wheeled robots | Ultrasonic sensor; RGB camera; RFID reader | None | Indoor navigation | Tested on the predefined path
Spiers and Dollar [141] | Hand-held cube | UWB transmitter | Shape-changing tactus | Indoor navigation | Tested by sighted persons
Fang et al. [134] | Flashlight | RGB camera; structured light | Audition | Obstacle detection | Evaluated on custom-built databases

Several investigators provide only a core component of an assistive device. By the use of RGB-D images, Wang et al. [119] developed an image-processing algorithm based on the Hough transform for the detection and recognition of stairs, pedestrian crosswalks and traffic signals; results on their RGB-D databases showed the effectiveness of this system. Satue and Miah [120] applied an ultrasonic sensor to detect obstacles and then combined electric stimulus, audition and vibration to warn blind people of dangerous situations. As feedback, they placed a nerve stimulator unit on the wrist, which gives an electric shock below the safe limit of human nerve stimulation according to the distance of the obstacle. Sekhar et al. [121] used a real-time stereo vision algorithm implemented on an FPGA to detect obstacles; a matching algorithm called zero-mean sum of absolute differences maximizes the hardware utilization, and therefore their system is applicable to real-time applications. Rao et al. [122] combined a laser and an RGB camera in their assistive system to realize pothole and uneven surface detection; their study shows that a laser can serve as structured light for detecting various obstacles. Gharani and Karimi [123] calculated the optical flow between two consecutive RGB images and extracted feature points based on the texture of objects and the movement of the user. Experimental results showed that the combined use of optical flow and point tracking algorithms was capable of detecting both moving and stationary obstacles close to the RGB camera.
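Frame-to-frame point tracking of this kind is commonly built on sparse optical flow. The sketch below uses OpenCV's pyramidal Lucas-Kanade tracker; the flow threshold and the heuristic that large apparent motion indicates a nearby obstacle are assumptions for illustration, not Gharani and Karimi's algorithm.

    # Point-tracking sketch with pyramidal Lucas-Kanade optical flow (OpenCV).
    import cv2
    import numpy as np

    def fast_moving_points(prev_gray, curr_gray, flow_thresh_px: float = 8.0):
        """Tracked corners whose apparent motion exceeds a threshold; under
        forward motion, nearby obstacles tend to produce the largest flow."""
        corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=7)
        if corners is None:
            return np.empty((0, 2), np.float32)
        moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                       corners, None)
        ok = status.ravel() == 1
        flow = np.linalg.norm((moved - corners)[ok].reshape(-1, 2), axis=1)
        return moved[ok].reshape(-1, 2)[flow > flow_thresh_px]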
Assistive devices also exist in other modalities. The belt is a widely used one [124]. Reshma [125] arranged five ultrasonic sensors around a belt; this spatial arrangement allows obstacles to be detected within a circle of 5 m in diameter. A similar assistive belt was reported by Wattal et al. [126], whose maximum detection distance was also 5 m. Mocanu et al. [127] used one RGB camera and four ultrasonic sensors in their visual assistive belt. A total of 21 VIPs were involved in the evaluation experiment, and the results demonstrated that the developed assistive belt could recognize both static and moving objects in highly dynamic urban scenes; in addition, each subject reported a good experience.

Pattanshetti et al. [128] developed an assistive hat consisting of an ultrasonic sensor and an RGB camera for obstacle detection and currency identification, respectively. To achieve outdoor navigation, they leveraged the GPS module of a mobile phone.

Bhatlawande et al. [129] developed an ultrasonic bracelet for the independent mobility of VIPs and blind people. With on-demand hand movements, this bracelet can warn the user of obstacles in the range from 0.2 to 6 m; alerting signals are sent to users via audition and vibration.

Rangarajan and Benslija [130] reported a voice-recognition robotic dog that can guide VIPs and blind people to a destination while avoiding obstacles and traffic; this robotic dog was successfully tested on flat ground and on a slope. Lin et al. [131] directly used the built-in RGB camera of a smartphone to detect and recognize obstacles; however, the obstacle recognition accuracy in their study was only 60%, which is insufficient for VIPs and blind people to avoid obstacles in the real world.

Lee et al. [132] put an ultrasonic sensor array, a GPS receiver, an RGB camera and a magnetic compass sensor on a jacket to help VIPs and blind people travel outdoors. This assistive jacket was tested with various device configurations in different environments, and the results demonstrated that the sensor and receiver network has the potential to guarantee safe outdoor navigation.

Kim and Song [133] extended the functionality of a classic wheelchair by adding multiple ultrasonic sensors, so that the wheelchair can execute efficient obstacle searching. Excellent performance was observed when the updated wheelchair was tested at different moving speeds.

An assistive flashlight was designed by Fang et al. [134], who used an RGB camera and structured light generated by a laser array to detect obstacles. A laser with a high refresh rate was used to achieve a visual bifurcation effect, so that people nearby could not perceive the laser light but the camera could capture it; the flashlight can therefore operate unobtrusively.

To further improve the performance of assistive devices, some investigators simultaneously used several modalities to reach specific assistive purposes. Bhatlawande et al. [135] installed an RGB camera and an ultrasonic sensor on a belt and a bracelet, respectively, for assisting blind people in walking. Based on the results of an evaluation experiment with 15 blind people, this dual-mode assistive device exhibited excellent performance: 93.33% of participants expressed satisfaction, 86.66% affirmed its operational convenience and 80% appreciated the comfort of the system. Sivagami et al. [136] also developed a dual-mode assistive device containing two modalities, viz. glasses and a belt, for VIPs and blind people traveling under unknown circumstances. Altaha and Rhee [137] proposed three different modalities, viz. jacket, glove and cane, for obstacle detection. They arranged three ultrasonic sensors on the front, left and right sides, respectively, allowing the devices not only to detect the presence of nearby objects but also to measure their distance from users. We suggest that these three assistive devices could in future be used in combination to increase the detection range and distance.

6. Conclusion and Prospective

Although numerous assistive devices are available, they are not yet effectively adopted by VIPs and blind people. One reason is that these assistive devices can only act in a restricted spatial range due to their limited sensors and feedback modes. The other reason is that the performance of these assistive devices is not effectively validated: as shown in the aforementioned tables, in many cases only blindfolded sighted subjects were invited to validation experiments, while the cognitive strategies observed in VIPs and blind people differ significantly from those of blindfolded sighted subjects.

In this section, we discuss three prospects for assistive devices to conclude this survey: (1) increase the diversity of input and output information to guarantee the reliability of the assistive device, (2) develop assistive devices based on the perception mechanisms and behaviour patterns of VIPs and blind people and (3) design more reliable experiments to validate the feasibility of assistive devices.

The diversity of feedback can increase the reliability of the final assistive device. Multimodal feedback, including audition, thermal feedback and vibration, was embedded into a virtual reality system that allows VIPs and blind people to explore and navigate inside virtual environments [30]. Meanwhile, the use of a sensor fusion framework allows an assistive device to obtain more important information about the surrounding environment. Rizzo et al. [142] found that the depth information extracted from a stereoscopic camera system could miss specific potential collision hazards, and that the addition of infrared sensors could offer a reliable distance measurement to remove this inconsistency of depth inferred from stereo images. Hence, for a specific task, if the sensors in use give inconsistent measurements, an alternate sensing modality can be chosen to remedy the inconsistency.
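One simple realization of this fallback idea is sketched below: prefer the stereo estimate, but substitute the infrared range when stereo fails or the two modalities disagree strongly. The thresholds are illustrative, not values from Rizzo et al.

    # Fallback fusion sketch for two ranging modalities; thresholds illustrative.
    from typing import Optional

    def fused_distance_m(stereo_m: Optional[float], infrared_m: float,
                         max_disagreement_m: float = 0.5) -> float:
        if stereo_m is None:                  # stereo failed, e.g. textureless surface
            return infrared_m
        if abs(stereo_m - infrared_m) > max_disagreement_m:
            return infrared_m                 # inconsistent: trust the direct range sensor
        return 0.5 * (stereo_m + infrared_m)  # consistent: average the estimates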
Study of changes in the connectivity of the functional areas of the human brain can help us understand the change in the perception mechanism of VIPs and blind people [143]. Because congenitally blind people rely more on auditory or tactile information, the connectivity of their multisensory brain areas is more complicated [144]. Therefore, the introduction of brain imaging is essential for the design of assistive devices. Some reviews in a recent special issue of Neuroscience and Biobehavioral Reviews cover the spectrum of sensory substitution devices (SSDs) and their relevance for understanding the human brain (http://www.sciencedirect.com/science/journal/01497634/41). In addition, better assistive devices can be developed according to the ideas of bionics [145].

Currently, the performance of assistive devices is rarely or inadequately validated by VIPs and blind individuals. As the cognitive strategies of VIPs and sighted people are significantly different, it is not guaranteed that performance validated by sighted blindfolded people represents that of VIPs and blind people [69]. Therefore, it is necessary to invite numerous VIPs and blind people from different blind associations to test the performance of a developed assistive device. Furthermore, real-world scenarios are far more complicated, and testing environments should cover all possible application scenarios.

Acknowledgement

This work was sponsored by the Shanghai Sailing Program (No. 19YF1414100), the National Natural Science Foundation of China (No. 61831015, No. 61901172), the STCSM (No. 18DZ2270700), and the China Postdoctoral Science Foundation funded project (No. 2016M600315). The authors would also like to acknowledge Ms. Huijing Huang, Ms. Shuping Li, and Mr. Joel Disu for providing assistance with the English language revision.

References

[1] World Health Organization, Visual impairment and blindness (2017). Available from: http://www.who.int/mediacentre/factsheets/fs282/en/.
[2] M. Gori, G. Cappagli, A. Tonelli, G. Baud-Bovy, and S. Finocchietti, Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children, Neuroscience & Biobehavioral Reviews, 69(Supplement C), 2016, 79–88.
[3] T. Nakamura, Quantitative analysis of gait in the visually impaired, Disability & Rehabilitation, 19(5), 1997, 194–197.
[4] A. Bhowmick and S.M. Hazarika, An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends, Journal on Multimodal User Interfaces, 11(2), 2017, 149–172.
[5] M.C. Domingo, An overview of the Internet of Things for people with disabilities, Journal of Network and Computer Applications, 35(2), 2012, 584–596.
[6] D. Dakopoulos and N.G. Bourbakis, Wearable obstacle avoidance electronic travel aids for blind: A survey, IEEE Transactions on Systems, Man & Cybernetics, Part C, 40(1), 2009, 25–35.
[7] J.M. Batterman, V.F. Martin, D. Yeung, and B.N.
Walker, Connected cane: Tactile button input for controlling gestures of iOS VoiceOver embedded in a white cane, Assistive Technology, 30(2), 2018, 91–99.
[8] J.R. Terven, J. Salas, and B. Raducanu, New opportunities for computer vision-based assistive technology systems for the visually impaired, Computer, 47(4), 2014, 52–58.
[9] R. Velázquez, Wearable assistive devices for the blind, Lecture Notes in Electrical Engineering, 75, 2016, 331–349.
[10] W. Elmannai and K. Elleithy, Sensor-based assistive devices for visually-impaired people: current status, challenges, and future directions, Sensors, 17(3), 2017, 565.
[11] P.M. Lewis, L.N. Ayton, R.H. Guymer, et al., Advances in implantable bionic devices for blindness: A review, ANZ Journal of Surgery, 86(9), 2016, 654–659.
[12] L. Renier and A.G.D. Volder, Sensory substitution devices (New York, USA: Oxford Handbooks, 2013).
[13] G. Motta, T. Ma, K. Liu, et al., Overview of smart white canes: connected smart cane from front end to back end, in R. Velazquez (ed.), Mobility of visually impaired people (Cham: Springer, 2018), 469–535.
[14] W. Zhang, Y. Lin, and N. Sinha, On the function-behavior-structure model for design, Proceedings of the Canadian Engineering Education Association, 2005, 1–8.
[15] W. Zhang and J. Wang, Design theory and methodology for enterprise systems, Enterprise Information Systems, 10, 2016, 245–248.
[16] Z.M. Zhang, Q. An, J.W. Li, and W.J. Zhang, Piezoelectric friction–inertia actuator: A critical review and future perspective, The International Journal of Advanced Manufacturing Technology, 62(5–8), 2012, 669–685.
[17] Y. Lin, Towards intelligent human–machine interactions: Human assistance systems (HAS), ASME Magazine Special Issue on Human-Machine Interactions, 139(06), 2017, 4–8.
[18] D.I. Anderson, J.J. Campos, D.C. Witherington, et al., The role of locomotion in psychological development, Frontiers in Psychology, 4(2), 2013, 1–7.
[19] A. Mihailovic, B.K. Swenor, D.S. Friedman, S.K. West, L.N. Gitlin, and P.Y. Ramulu, Gait implications of visual field damage from glaucoma, Translational Vision Science & Technology, 6(3), 2017, 23.
[20] K.A. Turano, D.R. Geruschat, F.H. Baker, J.W. Stahl, and M.D. Shapiro, Direction of gaze while walking a simple route: persons with normal vision and persons with retinitis pigmentosa, Optometry & Vision Science, 78(9), 2001, 667–675.
[21] P.A. Aspinall, S. Borooah, C. Al Alouch, et al., Gaze and pupil changes during navigation in age-related macular degeneration, British Journal of Ophthalmology, 98(10), 2014, 1393–1397.
[22] A. Pasqualotto and M.J. Proulx, The role of visual experience for the neural basis of spatial cognition, Neuroscience & Biobehavioral Reviews, 36(4), 2012, 1179–1187.
[23] A. Pasqualotto, J.S.Y. Lam, and M.J. Proulx, Congenital blindness improves semantic and episodic memory, Behavioural Brain Research, 244(Supplement C), 2013, 162–165.
[24] E. Peli and J.-H. Jung, Multiplexing prisms for field expansion, Optometry and Vision Science, 94(8), 2017, 817–829.
[25] A.D. Hwang and E. Peli, An augmented-reality edge enhancement application for Google Glass, Optometry and Vision Science: Official Publication of the American Academy of Optometry, 91(8), 2014, 1021–1030.
[26] C. Hu, G. Zhai, and D. Li, An augmented-reality night vision enhancement application for see-through glasses, IEEE International Conference on Multimedia & Expo Workshops, 2015.
[27] E.M. Schmidt, M.J. Bak, F.T. Hambrecht, C.V. Kufta, D.K. O'Rourke, and P.
Vallabhanath, Feasibility of a visual prosthesis for the blind based on intracortical microstimulation of the visual cortex, Brain, 119(2), 1996, 507–522.
[28] I. Bókkon, Phosphene phenomenon: A new concept, Biosystems, 92(2), 2008, 168–174.
[29] P.M. Lewis, H.M. Ackland, A.J. Lowery, and J.V. Rosenfeld, Restoration of vision in blind individuals using bionic devices: A review with a focus on cortical visual prostheses, Brain Research, 1595, 2015, 51–73.
[30] A. Lécuyer, P. Mobuchon, C. Mégard, J. Perret, C. Andriot, and J.-P. Colinot, HOMERE: A multimodal system for visually impaired people to explore virtual environments, IEEE Virtual Reality, 2003, 251–258.
[31] S. Wong, Traveling with blindness: a qualitative space-time approach to understanding visual impairment and urban mobility, Health & Place, 49, 2018, 85–92.
[32] E.T. Hall, The hidden dimension, Hidden Dimension, 6(1), 1966, 94.
[33] P. Strumillo, Electronic interfaces aiding the visually impaired in environmental access, mobility and navigation, IEEE 3rd International Conference on Human System Interaction, 2010, 17–24.
[34] B. Tversky, Spatial intelligence: Why it matters from birth through the lifespan (New York, USA: Routledge, 2017).
[35] B. Tversky, On abstraction and ambiguity, in J.S. Gero (ed.), Studying visual and spatial reasoning for design creativity (Dordrecht: Springer Netherlands, 2015), 215–223.
[36] A. Pasqualotto, M.J. Spiller, A.S. Jansari, and M.J. Proulx, Visual experience facilitates allocentric spatial representation, Behavioural Brain Research, 236, 2013, 175–179.
[37] A. Pasqualotto and T. Esenkaya, Sensory substitution: the spatial updating of auditory scenes "mimics" the spatial updating of visual scenes, Frontiers in Behavioral Neuroscience, 10, 2016, 79.
[38] B. Röder, F. Rösler, and H.J. Neville, Event-related potentials during auditory language processing in congenitally blind and sighted people, Neuropsychologia, 38(11), 2000, 1482–1502.
[39] B. Röder, F. Rösler, and H.J. Neville, Auditory memory in congenitally blind adults: a behavioral-electrophysiological investigation, Cognitive Brain Research, 11(2), 2001, 289–303.
[40] H. Siamian, M. Hassanzadeh, F. Nooshinfard, and N. Hariri, Information seeking behavior in blind people of Iran: A survey based on various experiences faced by them, Health Sciences, 3(4), 2016, 1–5.
[41] A.J. Kolarik, R. Raman, B.C. Moore, S. Cirstea, S. Gopalakrishnan, and S. Pardhan, Partial visual loss affects self-reports of hearing abilities measured using a modified version of the speech, spatial, and qualities of hearing questionnaire, Frontiers in Psychology, 8, 2017, 1–16.
[42] A.J. Kolarik, A.C. Scarfe, B.C.J. Moore, and S. Pardhan, Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation, PLoS One, 12(4), 2017, e0175750.
[43] A.C. Livingstone, G.J. Christie, R.D. Wright, and J.J. Mcdonald, Signal enhancement, not active suppression, follows the contingent capture of visual attention, Journal of Experimental Psychology: Human Perception and Performance, 43(2), 2017, 219–224.
[44] P. Voss, Auditory spatial perception without vision, Frontiers in Psychology, 7, 2016, 1–7.
[45] C. Lane, S. Kanjlia, H. Richardson, A. Fulton, A. Omaki, and M. Bedny, Reduced left lateralization of language in congenitally blind individuals, Journal of Cognitive Neuroscience, 29(1), 2016, 1–14.
[46] K.J. Price, M. Lin, J. Feng, R. Goldman, A. Sears, and J.
Jacko, Nomadic speech-based text entry: A decision model strategy for improved speech to text processing, International Journal of Human–Computer Interaction, 25(7), 2009, 692–706.
[47] R.J. Lutz, Prototyping and evaluation of landcons, ACM SIGACCESS Accessibility & Computing, 86, 2006, 8–11.
[48] J. Kostiainen, C. Erkut, and F.B. Piella, Design of an audio-based mobile journey planner application, International Academic Mindtrek Conference: Envisioning Future Media Environments, 2011, 107–113.
[49] M. Jeon and B.N. Walker, Spindex (speech index) improves auditory menu acceptance and navigation performance, ACM Transactions on Accessible Computing, 3(3), 2011, 1–26.
[50] B.K. Davison, Menu navigation with in-vehicle technologies: Auditory menu cues improve dual task performance, preference, and workload, International Journal of Human–Computer Interaction, 31(1), 2015, 1–16.
[51] Á. Csapó and G. Wersényi, Overview of auditory representations in human-machine interfaces, ACM Computing Surveys, 46(2), 2013, 1–23.
[52] I. Hussain, L. Chen, H.T. Mirza, G. Chen, and S.-U. Hassan, Right mix of speech and non-speech: hybrid auditory feedback in mobility assistance of the visually impaired, Universal Access in the Information Society, 14(4), 2015, 527–536.
[73]RGB camera with270◦lensAudition Facial recognition Tested in databaseSatpute et al.[91]Ultrasonic sensor;GPS receiverAudition;vibrationNavigation; obstacledetectionNoneRizzo et al.[92]Adaptive mobilitydevicesVibration Drop-off and obstacledetectionTested by 6 adultsShah et al.[77]Ultrasonic sensor Audition Navigation; obstacleand potholesdetectionNoneSharma et al.[78]Ultrasonic sensor Audition;vibrationStatic and dynamicobstacles detectionTested in real-timeenvironmentKrishnanet al. [81]Ultrasonic sensor;GPS receiver;RGB cameraAudition Navigation; obstacledetectionTested in databaseBolgiano andMeeks [70]Laser Audition;vibrationObstacle detection NoneSugimotoet al. [93]Ultrasonic sensor;GPS receiverVibration Navigation; obstacledetectionTested in presetscenariosWankhadeet al. [94]Infrared sensor Audition;vibrationObstacle detection NoneKassim et al.[88]RFID network;digital compassAudition Indoor navigation Tested by the mobilerobot and humansubjectVera et al. [71] RGB camera; laserpointerVibration Obstacle detection Tested by 16 sightedpersonsAlwis andSamarawickrama[95]Ultrasonic sensor Audition;vibrationObstacle detection NonePisa et al. [89] FMCW radar None Obstacle detection Tested by obstaclewith differentdistancesBuchs et al. [80] Infrared sensors Audition;vibrationWaist-up obstaclesdetectionTested by the trainedblind participants(continued )585Table 1ContinuedPinto et al. [96] Ultrasonic sensor;GPS receiverAudition;vibrationObstacle detection Tested by obstaclewith differentdistancesYe et al. [76]Ultrasonic sensor;GPS receiverAudition Navigation Tested in computerFan et al. [86] Ultrasonic sensor;GPS receiver;RGB-D cameraAudition;vibrationNavigation Tested inoutdoor open areaSilva andDias [90]Ultrasonic sensor;inertia measurementunitAudition Obstacle detection Tested by obstacles inthe pathKumar et al.[75]Ultrasonic sensor Audition Obstacle and potholesdetectionTested by 10volunteersMajeed andBaadel [73]RGB camera with270◦lensAudition Facial recognition Tested in databaseSatpute et al.[91]Ultrasonic sensor;GPS receiverAudition;vibrationNavigation; obstacledetectionNoneRizzo et al.[92]Adaptive mobilitydevicesVibration Drop-off and obstacledetectionTested by 6 adultsShah et al. [77]Ultrasonic sensor Audition Navigation; obstacleand potholesdetectionNoneSharma et al. [78]Ultrasonic sensor Audition;vibrationStatic and dynamicobstacles detectionTested in real-timeenvironmentKrishnanet al. [81]Ultrasonic sensor;GPS receiver;RGB cameraAudition Navigation; obstacledetectionTested in databaseBolgiano andMeeks [70]Laser Audition;vibrationObstacle detection NoneSugimotoet al. [93]Ultrasonic sensor;GPS receiverVibration Navigation; obstacledetectionTested in presetscenariosWankhadeet al. [94]Infrared sensor Audition;vibrationObstacle detection NoneKassim et al.[88]RFID network;digital compassAudition Indoor navigation Tested by the mobilerobot and humansubjectVera et al. [71] RGB camera; laserpointerVibration Obstacle detection Tested by 16 sightedpersonsAlwis andSamarawickrama[95]Ultrasonic sensor Audition;vibrationObstacle detection NonePisa et al. [89] FMCW radar None Obstacle detection Tested by obstaclewith differentdistancesBuchs et al. [80] Infrared sensors Audition;vibrationWaist-up obstaclesdetectionTested by the trainedblind participants(continued )585Table 1ContinuedPinto et al. [96] Ultrasonic sensor;GPS receiverAudition;vibrationObstacle detection Tested by obstaclewith differentdistancesYe et al. 
[74] 3D camera Audition Obstacle detection;pose estimationValidated by datafrom various indoorscenesDang et al. [72] Linear laser;RGB camera;inertial measure-mentunitAudition Obstacle detectionand recognitionValidated by theobstacles with variousheights, types,distancesNiitsu et al. [82] Ultrasonic sensor;infrared sensor;compass; tri-axialaccelerometerAudition(boneconduction)Obstacle detection Examined in 20 timesby 1 userTakizawa et al.[87]Kinect sensor Vibration Object recognition Tested by 2blindfolded personsJeong and Yu [97] Ultrasonic sensor Vibration Obstacle detection Tested by 4 blindfolded and 10 blindpersonsBay AdvancedTechnologies Ltd. [81]Ultrasonic sensor;GPS receiver;RGB cameraAudition Navigation; obstacledetectionTested in databaseBolgiano andMeeks [70]Laser Audition;vibrationObstacle detection NoneSugimotoet al. [93]Ultrasonic sensor;GPS receiverVibration Navigation; obstacledetectionTested in presetscenariosWankhadeet al. [94]Infrared sensor Audition;vibrationObstacle detection NoneKassim et al.[88]RFID network;digital compassAudition Indoor navigation Tested by the mobilerobot and humansubjectVera et al. [71] RGB camera; laserpointerVibration Obstacle detection Tested by 16 sightedpersonsAlwis andSamarawickrama[95]Ultrasonic sensor Audition;vibrationObstacle detection NonePisa et al. [89] FMCW radar None Obstacle detection Tested by obstaclewith differentdistancesBuchs et al. [80] Infrared sensors Audition;vibrationWaist-up obstaclesdetectionTested by the trainedblind participants(continued )585Table 1ContinuedPinto et al. [96] Ultrasonic sensor;GPS receiverAudition;vibrationObstacle detection Tested by obstaclewith differentdistancesYe et al. [74] 3D camera Audition Obstacle detection;pose estimationValidated by datafrom various indoorscenesDang et al. [72] Linear laser;RGB camera;inertial measure-mentunitAudition Obstacle detectionand recognitionValidated by theobstacles with variousheights, types,distancesNiitsu et al. [82] Ultrasonic sensor;infrared sensor;compass; tri-axialaccelerometerAudition(boneconduction)Obstacle detection Examined in 20 timesby 1 userTakizawa et al.[87]Kinect sensor Vibration Object recognition Tested by 2blindfolded personsJeong and Yu [97] Ultrasonic sensor Vibration Obstacle detection Tested by 4 blindfolded and 10 blindpersonsBay AdvancedTechnologies Ltd.[79]Ultrasonic sensor Audition;vibrationObstacle detection NoneScherlen et al. [83] Infrared sensor;brilliance;water sensorsNone Object recognition NoneKim et al. [84] Ultrasonic sensor;colour sensor;Cds photo resistorAudition;vibrationObstacle detection Validate the usabilityby 7 types of criteriaShim and Yoon [86] Ultrasonic sensor;GPS receiver;RGB-D cameraAudition;vibrationNavigation Tested inoutdoor open areaSilva andDias [90]Ultrasonic sensor;inertia measurementunitAudition Obstacle detection Tested by obstacles inthe pathKumar et al.[75]Ultrasonic sensor Audition Obstacle and potholesdetectionTested by 10volunteersMajeed andBaadel [73]RGB camera with270◦lensAudition Facial recognition Tested in databaseSatpute et al.[91]Ultrasonic sensor;GPS receiverAudition;vibrationNavigation; obstacledetectionNoneRizzo et al.[92]Adaptive mobilitydevicesVibration Drop-off and obstacledetectionTested by 6 adultsShah et al.[77]Ultrasonic sensor Audition Navigation; obstacleand potholesdetectionNoneSharma et al.[78]Ultrasonic sensor Audition;vibrationStatic and dynamicobstacles detectionTested in real-timeenvironmentKrishnanet al. 
[81]Ultrasonic sensor;GPS receiver;RGB cameraAudition Navigation; obstacledetectionTested in databaseBolgiano andMeeks [70]Laser Audition;vibrationObstacle detection NoneSugimotoet al. [93]Ultrasonic sensor;GPS receiverVibration Navigation; obstacledetectionTested in presetscenariosWankhadeet al. [94]Infrared sensor Audition;vibrationObstacle detection NoneKassim et al.[88]RFID network;digital compassAudition Indoor navigation Tested by the mobilerobot and humansubjectVera et al. [71] RGB camera; laserpointerVibration Obstacle detection Tested by 16 sightedpersonsAlwis andSamarawickrama[95]Ultrasonic sensor Audition;vibrationObstacle detection NonePisa et al. [89] FMCW radar None Obstacle detection Tested by obstaclewith differentdistancesBuchs et al. [80] Infrared sensors Audition;vibrationWaist-up obstaclesdetectionTested by the trainedblind participants(continued )585Table 1ContinuedPinto et al. [96] Ultrasonic sensor;GPS receiverAudition;vibrationObstacle detection Tested by obstaclewith differentdistancesYe et al. [74] 3D camera Audition Obstacle detection;pose estimationValidated by datafrom various indoorscenesDang et al. [72] Linear laser;RGB camera;inertial measure-mentunitAudition Obstacle detectionand recognitionValidated by theobstacles with variousheights, types,distancesNiitsu et al. [82] Ultrasonic sensor;infrared sensor;compass; tri-axialaccelerometerAudition(boneconduction)Obstacle detection Examined in 20 timesby 1 userTakizawa et al. [88]RFID network;digital compassAudition Indoor navigation Tested by the mobilerobot and humansubjectVera et al. [71] RGB camera; laserpointerVibration Obstacle detection Tested by 16 sightedpersonsAlwis andSamarawickrama[95]Ultrasonic sensor Audition;vibrationObstacle detection NonePisa et al. [90]Ultrasonic sensor;inertia measurementunitAudition Obstacle detection Tested by obstacles inthe pathKumar et al.[75]Ultrasonic sensor Audition Obstacle and potholesdetectionTested by 10volunteersMajeed andBaadel [73]RGB camera with270◦lensAudition Facial recognition Tested in databaseSatpute et al. [91]Ultrasonic sensor;GPS receiverAudition;vibrationNavigation; obstacledetectionNoneRizzo et al. [92]Adaptive mobilitydevicesVibration Drop-off and obstacledetectionTested by 6 adultsShah et al.[77]Ultrasonic sensor Audition Navigation; obstacleand potholesdetectionNoneSharma et al.[78]Ultrasonic sensor Audition;vibrationStatic and dynamicobstacles detectionTested in real-timeenvironmentKrishnanet al. [81]Ultrasonic sensor;GPS receiver;RGB cameraAudition Navigation; obstacledetectionTested in databaseBolgiano andMeeks [70]Laser Audition;vibrationObstacle detection NoneSugimotoet al. [93]Ultrasonic sensor;GPS receiverVibration Navigation; obstacledetectionTested in presetscenariosWankhadeet al. [94]Infrared sensor Audition;vibrationObstacle detection NoneKassim et al.[88]RFID network;digital compassAudition Indoor navigation Tested by the mobilerobot and humansubjectVera et al. [71] RGB camera; laserpointerVibration Obstacle detection Tested by 16 sightedpersonsAlwis andSamarawickrama [95]Ultrasonic sensor Audition;vibrationObstacle detection NonePisa et al. [89] FMCW radar None Obstacle detection Tested by obstaclewith differentdistancesBuchs et al. [80] Infrared sensors Audition;vibrationWaist-up obstaclesdetectionTested by the trainedblind participants(continued )585Table 1ContinuedPinto et al. [96] Ultrasonic sensor;GPS receiverAudition;vibrationObstacle detection Tested by obstaclewith differentdistancesYe et al. 
[74] 3D camera Audition Obstacle detection;pose estimationValidated by datafrom various indoorscenesDang et al. [72] Linear laser;RGB camera;inertial measure-mentunitAudition Obstacle detectionand recognitionValidated by theobstacles with variousheights, types,distancesNiitsu et al. [82] Ultrasonic sensor;infrared sensor;compass; tri-axialaccelerometerAudition(boneconduction)Obstacle detection Examined in 20 timesby 1 userTakizawa et al.[87]Kinect sensor Vibration Object recognition Tested by 2blindfolded personsJeong and Yu [97] Ultrasonic sensor Vibration Obstacle detection Tested by 4 blindfolded and 10 blindpersonsBay AdvancedTechnologies Ltd.[79]Ultrasonic sensor Audition;vibrationObstacle detection NoneScherlen et al. [83] Infrared sensor;brilliance;water sensorsNone Object recognition NoneKim et al. [84] Ultrasonic sensor;colour sensor;Cds photo resistorAudition;vibrationObstacle detection Validate the usabilityby 7 types of criteriaShim and Yoon[85]Ultrasonic sensor;infrared sensor;contact sensor(two antennas)Audition Obstacle detection NoneNearly all assistive devices listed later belong to SSDs.SSDs have been around for 40 years. The vibrotactilesensors were usually placed on the back to develop as-sistive device [63]. Subsequently, some investigators putan artificial sensor on the tongue [64]. The latter is theantecedent to the commercial BrainPort that is cited inTable 3. More recent, and highly promising, is the audi-tory device The vOICe [65]. It has been studied exten-sively for localization [66] and object identification [67].There have been numerous neuroscience studies showingthat The vOICe activates visual cortex in the blind asthey perform tasks with images – suggesting that one cantruly ‘see’ with the sound output of the device [68]. Thesedevices in early stages have been widely validated in vari-ous tasks, settings and user groups. Thus the success anduse are more easily ascertained than many devices cited inTables 1 and 2.5.1 Vision Substitution by Assistive CanesThe use of assistive cane is critical in reducing the risk ofcollision, which can help VIPs and blind people to walkmore confidently. Table 1 summarizes some assistive canesdesigned for VIPs and blind people.In general, an assistive cane is developed by mountingsensing and feedback modules on a classic white cane. Sub-sequently, the assistive cane acquires information with re-spect to surroundings and transmits raw or (pre-)processeddata to users via predefined feedback approach [69].Bolgiano and Meeks [70] first put a laser into a caneto detect obstacles in the traveling path, and audio andvibratory signals were available when VIPs and blindpeople approach the obstacle.Vera et al. [71] used an RGB camera and a laserpointer in combination to develop a virtual white cane for586Table 2Summary of Assistive Glasses for VIPs and Blind People about Sensors Used and Feedback Producedas well as Validation MethodsStudy Sensor Feedback Functionality ValidationSadi et al. [98] Ultrasonic sensor Audition Obstacle detection Tested in lab conditionsKassim et al. [99] Ultrasonic sensor Audition;vibrationObstacle detection Validated by blind spotevaluation experimentYi and Tian [100] RGB camera Audition Text reading fromnatural sceneTested by 10 blind personsEverding et al.[102]RGB camera Audition Obstacle detection Tested by 2 experiments(11 and 5 persons,respectively)Wang et al. 
Dang et al. [72] proposed an assistive cane using a linear laser, an RGB camera and an inertial measurement unit as sensors to classify the type of obstacle and estimate the distance from the obstacle to the user. The inertial measurement unit is an electronic device that measures the user's angular rate to determine spatial coordinate frames. The inertial sensor tracks the position of the laser stripe in the navigation coordinate frame, and subsequent analysis of the laser point coordinates relative to the original laser stripe divides obstacles into walls, stairs and blocks. The gathered information is transmitted to the user via simple non-speech feedback. The performance of this assistive cane is easily degraded by strong illumination, which limits its application scope.
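As a simplified illustration of how a laser-stripe height profile might separate these obstacle classes, the heuristic below labels a 1D profile as ground, block, stairs or wall. The profile representation and thresholds are invented for illustration and are not Dang et al.'s actual classifier.

```python
import numpy as np

def classify_obstacle(heights: np.ndarray) -> str:
    """Classify a height profile (metres) sampled along the laser stripe.

    Illustrative heuristic: stairs rise in several discrete steps, a block
    shows a single bounded step, and a wall shows a single tall rise.
    """
    jumps = np.abs(np.diff(heights)) > 0.05  # abrupt height changes
    n_steps = int(jumps.sum())
    if heights.max() < 0.05:
        return "free ground"
    if n_steps >= 3:
        return "stairs"
    if heights.max() > 0.8:
        return "wall"
    return "block"

if __name__ == "__main__":
    ground = np.zeros(50)
    block = np.concatenate([np.zeros(25), np.full(25, 0.3)])
    stairs = np.repeat([0.0, 0.15, 0.30, 0.45, 0.60], 10)
    wall = np.concatenate([np.zeros(25), np.full(25, 1.2)])
    for name, p in [("ground", ground), ("block", block),
                    ("stairs", stairs), ("wall", wall)]:
        print(name, "->", classify_obstacle(p))
```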
Due to the limited detecting or scanning range when using a laser as a sensor, only objects located in the region illuminated by the laser can be detected. To overcome this shortcoming, the spatial information recorded by an RGB camera can be leveraged. Majeed and Baadel [73] integrated an RGB camera with a 270° lens into an assistive cane, allowing much of the environmental information to be captured. The proposed smart cane can help VIPs and blind people dodge obstacles up to a maximum distance of 10 m and, moreover, can be used to recognize different persons' faces.

Ye et al. [74] used a three-dimensional (3D) camera as a sensor to develop an assistive cane aimed at estimating pose and recognizing obstacles. The 3D camera used in their study is the SwissRanger SR4000, a small (65 × 65 × 68 mm) 3D time-of-flight camera. A speech feedback module serves as the communication medium between human and cane. This assistive cane was validated with data collected from a variety of indoor scenes; results demonstrated that the proposed cane could estimate pose and recognize objects with satisfactory performance. In their article, the developers stated that they were working with orientation and mobility specialists as well as blind trainees of the World Service for the Blind in Arkansas to refine the functions of their assistive cane.

Apart from the laser and the RGB camera, the ultrasonic sensor is one of the most widely used sensors in assistive devices owing to its high price/performance ratio. The ultrasonic sensor emits ultrasonic waves into the air and receives the sound reflected back from objects; it is commonly applied for detecting objects and measuring distance. Kumar et al. [75] developed an ultrasonic cane for aiding blind people in navigation. The cane is equipped with three pairs of ultrasonic transceivers, enabling blind users to be informed of aerial and ground obstacles as well as potholes in front of them via audio warnings. The maximum working range of this ultrasonic cane is 1.5 m, much less than that of the cane developed by Majeed and Baadel.
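The distance measurement underlying such ultrasonic canes follows directly from the time of flight of the emitted pulse. The sketch below shows the standard computation; the 1.5 m warning cut-off mirrors the working range reported above, while the rest is illustrative.

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees Celsius

def ultrasonic_distance(echo_delay_s: float) -> float:
    """The pulse travels to the obstacle and back, hence the factor 1/2."""
    return SPEED_OF_SOUND_M_S * echo_delay_s / 2.0

def within_warning_range(echo_delay_s: float, max_range_m: float = 1.5) -> bool:
    return ultrasonic_distance(echo_delay_s) <= max_range_m

if __name__ == "__main__":
    for delay_ms in (2.0, 6.0, 12.0):
        d = ultrasonic_distance(delay_ms / 1000.0)
        print(f"echo after {delay_ms:4.1f} ms -> {d:.2f} m, warn: {within_warning_range(delay_ms / 1000.0)}")
```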
Gupta et al. [76] used an ultrasonic sensor and a GPS receiver together in a classic cane. The addition of the GPS module allows VIPs and blind people to travel outdoors using the satellite network. Audio signals generated by the Pygame module, a programming module for creating games and animations, are used as feedback to remind users. The range of distances measured by the ultrasonic sensor attached to the cane is from 0.05 to 2 m, slightly larger than that of the device developed by Kumar et al.

Several other investigators also used ultrasonic sensors to build assistive canes. Shah et al. [77] arranged four ultrasonic sensors in a stick: three for obstacle detection and the remaining one for pothole detection. Their experimental results showed maximum detection distances of 1.45, 0.6 and 0.82 m for obstacles located to the front, left-front and right-front, respectively. A similar smart stick was reported by Sharma et al. [78], who stated that it was able to perceive obstacles of any height in front of or slightly sideways to users. Bay Advanced Technologies Ltd. [79] developed an ultrasonic sensor-based assistive cane named 'K' Sonar, which is available on the market.

The infrared sensor is another popular choice for the development of smart canes. It is an electronic sensor that uses a light sensor to detect a selected wavelength in the infrared spectrum; by sensing infrared light radiating from objects in its field of view, it can detect objects and measure distance. Buchs et al. [80] mounted two infrared sensors on a white cane, one parallel to the horizontal plane and the other at approximately 42° to it. This arrangement allows the smart cane to detect waist-up obstacles. The detection range of this cane is only 1.5 m.
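The role of the tilted sensor can be made explicit with a little geometry: a beam inclined upward by 42° samples the scene at a height that grows with horizontal distance, which is what lets the second sensor cover waist-up obstacles. In the sketch below the mounting height is an assumed value; only the tilt angle and range come from the paper.

```python
import math

TILT_DEG = 42.0        # tilt of the second infrared sensor (from the paper)
MAX_RANGE_M = 1.5      # reported detection range of the cane
MOUNT_HEIGHT_M = 0.9   # assumed height of the sensor on the cane shaft

def sampled_height(horizontal_distance_m: float) -> float:
    """Height above ground probed by the tilted beam at a given distance."""
    return MOUNT_HEIGHT_M + horizontal_distance_m * math.tan(math.radians(TILT_DEG))

if __name__ == "__main__":
    for d in (0.5, 1.0, 1.5):
        if d <= MAX_RANGE_M:
            print(f"at {d:.1f} m the beam scans ~{sampled_height(d):.2f} m above ground")
```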
The addition of an RGB camera can increase the detection range of a smart cane. Krishnan et al. [81] applied an ultrasonic sensor and an RGB camera in the sensing module of a smart cane, and testing demonstrated a maximum detection range of 3 m.

The infrared sensor is usually used in conjunction with other types of sensors to form a multi-mode sensing array. Niitsu et al. [82] put four sensors, an ultrasonic sensor, an infrared sensor, a compass and a tri-axial accelerometer, together on a classic cane. In this smart cane, a bone-conduction headphone is used for human-cane interaction so that feedback can be passed to users unobtrusively. This assistive cane based on a multi-mode sensing array achieves a detection accuracy of 100% for wide obstacles and crossing and approaching persons, and 95% for thin obstacles. It should be noted that bone conduction may interfere with several brain functions.

Scherlen et al. [83] leveraged an infrared sensor, a brilliance sensor and a water sensor in combination to develop a smart cane named 'RecognizeCane', capable of recognizing objects and their constituent materials. At present, four materials, namely metal (steel), glass, cardboard and plastic, can be successfully recognized. The 'RecognizeCane' can also distinguish zebra crossings and water puddles using the brilliance and water sensors, respectively. A brilliance sensor was also adopted by Kim et al. [84] in their smart cane to measure environmental brightness. To detect obstacles directly in front accurately, Shim and Yoon [85] attached two antennas used as contact sensors, together with an ultrasonic sensor and an infrared sensor, to the sensing unit of a smart cane. With the aid of the contact sensors, this smart cane effectively complements the ultrasonic and infrared sensors for the detection of short-range obstacles.

Fan et al. [86] applied an RGB-D camera and an ultrasonic sensor, respectively, to acquire dynamic visual environmental information and to detect surrounding obstacles. The RGB-D camera obtains synchronized colour and depth videos. To implement outdoor navigation, they added a GPS module to the sensing unit. Validation experiments conducted in an open area demonstrated that an assistive cane fitted with this sensing unit can help VIPs and blind people travel outdoors safely. However, this cane cannot process the image data captured by the RGB-D camera in real time. Takizawa et al. [87] also used an RGB-D camera in their sensing unit and called the developed cane the Kinect cane. With the RGB-D camera, the Kinect cane can recognize different types of indoor obstacles, including chairs, staircases and floors. Two blindfolded persons tested the proposed cane, and the results showed that the average search time with the Kinect cane was significantly shorter than with a classic white cane.

Some other sensors are also used in the sensing units of assistive canes. Kassim et al. [88] mounted radio frequency identification (RFID) transponders on the floor and installed an RFID reader at the end of a cane. RFID is a technology that records the presence of an object using radio signals. When the user walks, the RFID reader reads the RFID tags arranged on the floor in advance, and the addresses of these tags are sent for map processing. Subsequently, after digital compass calibration, the auditory interface emits voice commands such as '90° turn left'. Results of a small-sample experiment with two human subjects showed that the RFID-based smart cane has the potential to help VIPs and blind people walk independently in indoor environments.
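A minimal sketch of the tag-to-instruction pipeline described here: each tag address read from the floor is looked up in a prestored map, and the difference between the bearing to the next waypoint and the compass heading becomes a voice command. The tag map and thresholds are invented for illustration.

```python
import math

# Hypothetical floor map: RFID tag address -> (x, y) position in metres.
TAG_MAP = {"A1": (0.0, 0.0), "A2": (0.0, 2.0), "B1": (2.0, 2.0)}

def voice_command(current_tag: str, next_tag: str, compass_heading_deg: float) -> str:
    """Turn the bearing towards the next waypoint into a spoken instruction."""
    x0, y0 = TAG_MAP[current_tag]
    x1, y1 = TAG_MAP[next_tag]
    bearing = math.degrees(math.atan2(x1 - x0, y1 - y0)) % 360.0  # clockwise from north
    turn = (bearing - compass_heading_deg + 540.0) % 360.0 - 180.0
    if abs(turn) < 20.0:
        return "go straight"
    return f"{abs(round(turn))} degree turn {'right' if turn > 0 else 'left'}"

if __name__ == "__main__":
    print(voice_command("A1", "A2", 0.0))  # heading north, target north -> straight
    print(voice_command("A2", "B1", 0.0))  # target due east -> 90 degree turn right
```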
Frequency-modulated continuous-wave (FMCW) radar and antennas were housed in a classic white cane by Pisa et al. [89] for obstacle detection. Their results showed that this cane could receive reflections from a metallic panel up to 5 m away. An FMCW radar is a short-range measuring radar capable of determining the distance of objects in its field of view: the distance is recovered from the beat frequency between the transmitted and received chirps, d = c·f_beat·T_sweep/(2B) for a sweep of bandwidth B and duration T_sweep.

The assistive cane is a portable assistive device. It is compact and lightweight, and thus easily carried by users. Despite these advantages, the assistive cane needs to interact with users constantly.

5.2 Vision Substitution by Assistive Glasses

Assistive glasses are wearable assistive devices. Table 2 presents some assistive glasses designed for VIPs and blind people. Assistive glasses in general fix sensing and feedback modules on a classic pair of glasses. Unlike the assistive cane, assistive glasses often use visual signals as feedback to users.

Table 2. Summary of Assistive Glasses for VIPs and Blind People: Sensors Used, Feedback Produced and Validation Methods
Study | Sensor | Feedback | Functionality | Validation
Sadi et al. [98] | Ultrasonic sensor | Audition | Obstacle detection | Tested in lab conditions
Kassim et al. [99] | Ultrasonic sensor | Audition; vibration | Obstacle detection | Validated by a blind spot evaluation experiment
Yi and Tian [100] | RGB camera | Audition | Text reading from natural scenes | Tested by 10 blind persons
Everding et al. [102] | RGB camera | Audition | Obstacle detection | Tested by 2 experiments (11 and 5 persons, respectively)
Wang et al. [103] | RGB camera | Audition | Navigation; wayfinding | Evaluated in databases
Hassan and Tang [101] | RGB camera | Audition | Text recognition | Tested by several sample texts
Pundlik et al. [104] | Google Glass | Vision | Smartphone screen magnification | Evaluated by 8 sighted and 4 visually impaired persons
Neto et al. [105] | RGB-D camera | 3D audition | Face recognition | Validated in databases and by both blindfolded and visually impaired users
Stoll et al. [106] | RGB-D camera | Audition | Indoor navigation | Validated by 2 performance metrics, i.e., travel time and error
Hicks et al. [107] | RGB-D camera | Vision | Scene recognition and analysis | Tested by 4 sighted and 12 visually impaired participants
Wu et al. [108] | Pico projector; optical lenses | Vision | Vision enhancement | In simulated stage
Lan et al. [112] | RGB camera | Audition | Public sign recognition | Tested by some common public signs
Hu et al. [26] | RGB camera | Vision | Night vision enhancement | Evaluated on custom-built databases

Sadi et al. [98] embedded an ultrasonic sensor in traditional glasses to develop smart glasses for walking assistance. The sensing region of the attached ultrasonic sensor covers a 3 m distance and a 60° angle. Processed information corresponding to the distance of the obstacle is sent to users via audio signals. Validation experiments carried out in the lab showed detection accuracies beyond 93%.

Kassim et al. [99] compared the performance of three sensors, an ultrasonic sensor, an infrared sensor and a laser ranger, against several metrics such as accuracy, size and weight, and finally selected ultrasonic sensors for their assistive glasses. As feedback, two warning modes, audition and vibration, were designed into the device, and users can switch between them based on preference or the surrounding environment. Kassim et al. give an example: in a noisy environment such as a bus terminal or market, the user can switch from the auditory mode to the vibration mode, freeing the auditory sense to hear ambient sounds. A blind spot evaluation experiment demonstrated the effectiveness of the proposed smart glasses.

Besides the ultrasonic sensor, the RGB camera is also commonly used in the sensing units of assistive glasses; Table 2 contains four publications that used RGB cameras to obtain outside information. Yi and Tian [100] applied an RGB camera mounted on glasses to assist VIPs in accessing text information in daily life. They reported that further study should focus on improving the detection accuracy of scene text hidden in cluttered backgrounds; one possible solution is to explore more effective feature representations to establish more robust models, which can then be written into the processing unit of the smart glasses. Similar research was conducted by Hassan and Tang [101], whose smart glasses are only suitable for recognizing text on hardcopy materials. Inspired by the principles of human visual perception, Everding et al. [102] deployed two RGB cameras on classic glasses to imitate the two human retinas. The performance of their smart glasses is satisfactory when subjects are static; for moving tests, the performance is still unknown. Wang et al. [103] embedded a saliency map algorithm into RGB camera-based smart glasses for the detection of indoor signs. Experimental results on their databases containing indoor signs and doors showed the usability of the glasses. The output information of these four publications is all delivered to users in audio form.

Pundlik et al. [104] carried out secondary development on Google Glass to magnify the screen content of a smartphone, thereby helping VIPs access information displayed on the screen. They invited eight sighted persons and four VIPs to use calculator and music player apps on a smartphone with the aid of the proposed glasses and of the phone's built-in screen zoom app. Comparison results showed that the assistive glasses based on Google Glass outperformed the built-in screen zoom software in improving the ability of VIPs to read screen content.

As the RGB-D camera can acquire both colour and distance information, it has been widely used in assistive glasses. Neto et al. [105] directly tied a Microsoft Kinect sensor to the user's head, and this assistive device conveyed outside information to the user via 3D audio signals; this hardware architecture is somewhat abrupt. A similar hardware framework was adopted by Stoll et al. [106]. After validation experiments on 21 blindfolded young adults with a 1-week interval, they deemed the system promising for indoor use but still inefficient for outdoor scenarios. Hicks et al. [107] improved the hardware architecture and made it more glasses-like. They converted scene data obtained by an RGB-D camera into a depth map in which nearby objects are rendered brighter; the processed depth images were then displayed on two OLED screens. In their validation experiment with VIPs, the average detection distance was approximately 3 m, so further work is needed to increase the detection distance of objects. A possible solution is to change the mechanical architecture of the glasses to a see-through display.
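The depth-to-luminance rendering used in such displays can be sketched in a few lines: nearer pixels in the depth map are drawn brighter. The clipping range below is an assumption, not Hicks et al.'s exact mapping.

```python
import numpy as np

def depth_to_brightness(depth_m: np.ndarray,
                        near_m: float = 0.5,
                        far_m: float = 3.0) -> np.ndarray:
    """Render a depth map so that nearby objects appear brighter.

    Pixels at or nearer than near_m map to 255, those at or beyond
    far_m map to 0, with a linear ramp in between.
    """
    d = np.clip(depth_m, near_m, far_m)
    brightness = 255.0 * (far_m - d) / (far_m - near_m)
    return brightness.astype(np.uint8)

if __name__ == "__main__":
    demo = np.array([[0.4, 1.0], [2.0, 5.0]])
    print(depth_to_brightness(demo))
```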
Wu et al. [108] designed a compact see-through near-eye display system for persons who are hyperopic. Unlike most assistive devices, this system does not use any digital processing technology. Its main principle is that light emitted by distant objects passes through preset aspherical surfaces so that the user sees a relatively clear image of the object. According to their simulation results, the final image provided to users is nearly identical to the original image, although reduced brightness and distortion in the image corners are also observed. This glass, which can enhance the vision of people with presbyopia, is still in the design phase.

Hu et al. [26] attempted to develop a see-through glass to assist persons who suffer from nyctalopia. They first analysed the vision model of night blindness and then derived the relationship between luminance levels and the RGB grey scale of the image to develop an enhancement algorithm. Experimental results showed that the brightness of raw dark images could be significantly improved by the proposed algorithm. After spatial distance and camera lens calibrations, the processed image aligns well with the view seen by the user.
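As a rough, generic illustration of threshold-based low-light enhancement of the kind pursued here, the sketch below lifts only pixels below a luminance threshold with a gamma curve. The threshold and gamma are arbitrary; this is not the algorithm derived in [26].

```python
import numpy as np

def enhance_dark_regions(gray: np.ndarray,
                         threshold: int = 80,
                         gamma: float = 0.5) -> np.ndarray:
    """Brighten dark pixels with a gamma curve, leave bright pixels untouched."""
    img = gray.astype(np.float32) / 255.0
    mask = gray < threshold
    img[mask] = np.power(img[mask], gamma)  # gamma < 1 lifts dark values
    return (img * 255.0).astype(np.uint8)

if __name__ == "__main__":
    frame = np.array([[10, 60], [120, 240]], dtype=np.uint8)
    print(enhance_dark_regions(frame))
```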
Apart from the previous assistive glasses, which are still at an engineering or concept stage, several assistive glasses are available on the market. These commercialized glasses for visual assistance are summarized in Table 3.

Table 3. Summary of Some Assistive Glasses Available on the Market
Name | Company | Launch date | Functionality | Brief description
Google Glass [113] | Google Inc. | 2012 | Direction recognition | Equipped with an RGB camera and gyroscope and has all the functions of a mobile phone. As feedback, it transmits information to the user via a bone-conduction earphone and a display screen. Google Glass is not designed for visual assistance of VIPs and blind people, but secondary development can be carried out on it
eSight 3 [114] | eSight Co. | 2017 | No specific function description | Mainly designed for individuals who are not completely blind. A high-speed, high-quality camera captures what the user is browsing; the obtained videos are first subjected to image-enhancement processing and then shown on two OLED screens. In its display mode, eSight 3 resembles a virtual reality display device
OrCam [115] | OrCam Technologies Ltd. | 2015 | Text reading; face recognition; product and money identification | Mainly consists of an RGB camera and a portable computer. It can be fixed on any eyeglass frame and informs the user of outside information via audio signals
Enchroma [116] | Enchroma, Inc. | 2013 | Colour contrast enhancement | Designed for colour blindness. It does not leverage any digital processing technology; instead, specially designed lenses alter the original light waves to help persons with colour vision deficiency see real colours
Intoer [117] | Hangzhou KR-VISION Technology Co., Ltd. | 2017 | Obstacle detection; scene, money, puddle, staircase, traffic signal and zebra crossing recognition; navigation | Uses an infrared binocular camera to record environmental information illuminated by natural and structured light. It produces specially encoded stereo sound to inform the user via a bone-conduction earphone
BrainPort V100 [118] | Wicab, Inc. | 2015 | Obstacle detection; scene recognition | Mainly composed of an RGB camera mounted on a pair of glasses, a hand-held controller and a tongue array containing 400 electrodes. Outside information is converted into electrical signals sent to the tongue array on the user's tongue. A training phase is required before using the device

Google Glass is usually used for secondary development, and many assistive glasses not listed in our survey are based on it [109], [110]. The target users of eSight 3 are VIPs, and developers therefore place two OLED display screens in front of the user's eyes to play the processed videos. The sensors of OrCam and Intoer are an RGB camera and an infrared binocular camera, respectively; both products use audio signals as feedback to inform users. Enchroma is designed to assist colour blindness. Like the system of Wu et al. [108], this product achieves its functionality (here, colour contrast enhancement) using a specially designed lens instead of digital processing. The sensing unit of BrainPort V100 is similar to those of the above-mentioned products; the difference is that it uses electric stimulation as feedback. The developers of BrainPort V100 consider the tongue extremely sensitive to electric stimulus and hence place an array of 400 electrodes on the user's tongue, which implies a resolution of 20 × 20 pixels. The intensity of stimulation represents the pixel intensity of the image obtained by the RGB camera. In addition, due to the low resolution of the tongue array, the background of the raw image needs to be eliminated [111].
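The image-to-electrode conversion implied by the 400-electrode array can be sketched as follows: the camera frame is reduced to a 20 × 20 grid of stimulation intensities. The block averaging and the crude background suppression step are illustrative assumptions, not Wicab's implementation.

```python
import numpy as np

GRID = 20  # the tongue array has 400 electrodes, i.e., 20 x 20

def frame_to_electrodes(gray: np.ndarray) -> np.ndarray:
    """Downsample a grayscale camera frame to 20x20 stimulation intensities (0..1)."""
    h, w = gray.shape
    bh, bw = h // GRID, w // GRID
    cropped = gray[:bh * GRID, :bw * GRID].astype(np.float32)
    blocks = cropped.reshape(GRID, bh, GRID, bw).mean(axis=(1, 3))
    blocks[blocks < blocks.mean()] = 0.0  # crude background suppression (illustrative)
    return blocks / 255.0

if __name__ == "__main__":
    frame = np.random.default_rng(0).integers(0, 256, size=(480, 640)).astype(np.uint8)
    intensities = frame_to_electrodes(frame)
    print(intensities.shape)  # (20, 20)
```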
5.3 Vision Substitution by Other Forms of Assistive Devices

Table 4 summarizes some assistive devices in forms other than canes and glasses.

Table 4. Summary of Some Assistive Devices with Various Forms
Study | Modality | Sensor | Feedback | Functionality | Validation
Wang et al. [119] | None | RGB-D camera | Audition | Detection of stairs, pedestrian crosswalks and traffic signs | Evaluated on databases
Satue and Miah [120] | None | Ultrasonic sensor | Nerve stimulation; audition; vibration | Obstacle detection | Tested in predefined environments
Sekhar et al. [121] | None | Stereo cameras | Audition | Obstacle detection | Compared with other systems
Rao et al. [122] | None | Laser device; RGB camera | None | Pothole and uneven surface detection | Validated by a performance metric
Gharani and Karimi [123] | None | RGB camera | None | Context-aware obstacle detection | Compared with two other algorithms using different performance metrics
Pattanshetti et al. [128] | Hat | Ultrasonic sensor; GPS receiver; RGB camera | Audition; vibration | Currency recognition; obstacle detection; navigation | None
Reshma [125] | Belt | Ultrasonic sensor | Audition | Obstacle detection | Tested by 4 blindfolded persons
Wattal et al. [126] | Belt | Ultrasonic sensor | Audition | Obstacle detection | Compared the measured and actual distance and position of obstacles
Mocanu et al. [127] | Belt | Ultrasonic sensor; RGB camera | Audition | Obstacle detection and recognition | Tested by 21 visually impaired subjects
Froneman et al. [138] | Belt | Ultrasonic sensor | Vibration | Obstacle detection | Evaluated by various common static household obstacles
Bhatlawande et al. [129] | Bracelet | Ultrasonic sensor | Audition; vibration | Way-finding; obstacle detection | Tested by 2 blindfolded persons
Rangarajan and Benslija [130] | Robotic dog | Force sensor; RGB camera | Audition | Obstacle detection; word recognition | Tested on flat ground and slope
Lin et al. [131] | Smartphone | RGB camera | Audition | Obstacle detection and recognition | Tested by 4 visually impaired persons
Lee et al. [132] | Jacket | Ultrasonic sensor; GPS receiver; RGB camera; magnetic compass sensor | Audition; vibration | Navigation; obstacle detection | Tested with various device configurations in different environments
Kim and Song [133] | Wheelchair | Ultrasonic sensor | None | Obstacle detection | Tested at different moving speeds
Altaha and Rhee [137] | Cane; jacket; glove | Ultrasonic sensor | 3D audition | Obstacle detection | Tested by a blind person
Mekhalfi et al. [139] | Jacket | Laser sensor; RGB camera | Audition | Indoor scene description | Tested in databases
Bhatlawande et al. [135] | Bracelet; belt | Ultrasonic sensor; RGB camera | Audition; vibration | Obstacle detection | Tested by 15 trained blind persons
Sivagami et al. [136] | Glasses; belt | Ultrasonic sensor | Audition | Obstacle detection | Tested by blindfolded persons
Wu et al. [140] | Wheeled robots | Ultrasonic sensor; RGB camera; RFID reader | None | Indoor navigation | Tested on a predefined path
Spiers and Dollar [141] | Hand-held cube | UWB transmitter | Shape-changing tactus | Indoor navigation | Tested by sighted persons
Fang et al. [134] | Flashlight | RGB camera; structured light | Audition | Obstacle detection | Evaluated on custom-built databases

Several investigators provide only a core component of an assistive device. Using RGB-D images, Wang et al. [119] developed a Hough-transform-based image processing algorithm for the detection and recognition of stairs, pedestrian crosswalks and traffic signals. Results on their RGB-D databases showed the effectiveness of this system.
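In the spirit of this Hough-transform approach, the sketch below detects line segments in an edge map and keeps groups of near-parallel segments, the basic cue for stair edges and crosswalk stripes. All parameters are illustrative rather than those of Wang et al.

```python
import cv2
import numpy as np

def parallel_line_groups(gray: np.ndarray, angle_tol_deg: float = 5.0):
    """Detect line segments and bucket them by orientation."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=10)
    groups = {}  # orientation bucket -> list of segments
    if lines is None:
        return groups
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        bucket = int(angle // angle_tol_deg)
        groups.setdefault(bucket, []).append((x1, y1, x2, y2))
    # Several near-parallel segments form a candidate staircase or crosswalk.
    return {b: segs for b, segs in groups.items() if len(segs) >= 4}

if __name__ == "__main__":
    # Synthetic image with horizontal stripes as a stand-in for a crosswalk.
    img = np.zeros((200, 200), dtype=np.uint8)
    for y in range(20, 200, 40):
        cv2.rectangle(img, (10, y), (190, y + 12), 255, -1)
    print({k: len(v) for k, v in parallel_line_groups(img).items()})
```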
Satue and Miah [120] applied an ultrasonic sensor to detect obstacles and then combined electric stimulus, audition and vibration to warn blind people of dangerous situations. As feedback, they placed a nerve stimulator unit on the wrist; according to the distance of the obstacle, this unit gives an electric shock below the safe limit of human nerve stimulation. Sekhar et al. [121] used a real-time stereo vision algorithm implemented on an FPGA to detect obstacles. A matching algorithm called zero-mean sum of absolute differences (ZSAD) maximizes hardware utilization, making their system applicable to real-time applications.
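Zero-mean sum of absolute differences subtracts each patch's mean before comparison, which makes the matching cost robust to brightness offsets between the two cameras. Below is a direct sketch of the cost with a brute-force disparity search; the window size and search range are illustrative.

```python
import numpy as np

def zsad(left_patch: np.ndarray, right_patch: np.ndarray) -> float:
    """Zero-mean sum of absolute differences between two equal-sized patches."""
    l = left_patch.astype(np.float32) - left_patch.mean()
    r = right_patch.astype(np.float32) - right_patch.mean()
    return float(np.abs(l - r).sum())

def best_disparity(left: np.ndarray, right: np.ndarray,
                   row: int, col: int, win: int = 3, max_disp: int = 16) -> int:
    """Brute-force disparity for one pixel by minimizing the ZSAD cost."""
    patch = left[row - win:row + win + 1, col - win:col + win + 1]
    costs = []
    for d in range(max_disp + 1):
        c = col - d
        if c - win < 0:
            break
        cand = right[row - win:row + win + 1, c - win:c + win + 1]
        costs.append(zsad(patch, cand))
    return int(np.argmin(costs))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    right = rng.integers(0, 256, size=(50, 80)).astype(np.uint8)
    left = np.roll(right, 5, axis=1)                   # synthetic 5-pixel disparity
    right_bright = right.astype(np.float32) + 20.0     # brightness offset is tolerated
    print(best_disparity(left, right_bright, row=25, col=40))  # -> 5
```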
Rao et al. [122] combined a laser and an RGB camera in their assistive system to realize pothole and uneven surface detection. Their study shows that a laser can serve as structured light for detecting various obstacles. Gharani and Karimi [123] calculated the optical flow between two consecutive RGB images and extracted feature points based on the texture of objects and the movement of the user. Experimental results showed that the combined use of optical flow and point tracking algorithms was capable of detecting both moving and stationary obstacles close to the RGB camera.
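A compact sketch of the optical-flow cue using OpenCV's dense Farnebäck flow: a large average flow magnitude in the lower half of the frame suggests something is closing in on the camera. This simplification omits the authors' feature-point tracking, and all thresholds are illustrative.

```python
import cv2
import numpy as np

def looming_obstacle(prev_gray: np.ndarray, curr_gray: np.ndarray,
                     mag_thresh: float = 4.0) -> bool:
    """Flag frames where dense optical flow in the lower half is large,
    a simple cue that an obstacle is close to the moving camera."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag = np.linalg.norm(flow, axis=2)
    lower = mag[mag.shape[0] // 2:, :]
    return float(lower.mean()) > mag_thresh

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    prev = rng.integers(0, 256, size=(120, 160)).astype(np.uint8)
    curr = np.roll(prev, 6, axis=1)  # simulate strong apparent motion
    print(looming_obstacle(prev, curr))
```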
[129] developed an ultrasonicbracelet for independent mobility of VIPs and blind people.With on-demand hand movements, this bracelet can warnthe user of the obstacles in the range from 0.2 to 6 m.Alerting signals were then sent to users via audition andvibration.Rangarajan and Benslija [130] reported a voice recogni-tion robotic dog that could guide VIPs and blind people tothe destination avoiding obstacles and traffic. This roboticdog had been successfully tested on the flat ground andslope. Lin et al. [131] directly used a built-in RGB cameraof smartphone to detect and recognize obstacles. However,the recognition accuracy of obstacle in their study was only60%. In the real world, this is insufficient for VIPs andblind people to avoid obstacles around them.Lee et al. [132] put an ultrasonic sensor array, a GPSreceiver, an RGB camera and a magnetic compass sensoron the jacket to help VIPs and blind people to traveloutdoors. This assistive jacket had been tested with variousdevice configurations in different environments, and resultsdemonstrated that the sensor and receiver network had apotential ability to guarantee the safe outdoor navigation.Kim and Song [133] extended the functionality of aclassic wheelchair by adding multiple ultrasonic sensors,and the wheelchair can therefore execute efficient obsta-cle searching. The excellent performance had been ob-served when the updated wheelchair was tested at differentmoving speeds.An assistive flashlight was designed by Fang et al.,who used an RGB camera and a structured light generatedby a laser array to detect obstacles [134]. The laser of highrefresh rate was used to achieve a visual bifurcation effectso that people around could not perceive the laser light butthe camera could capture it. Therefore, the flashlight canoperate in an unobtrusive pattern.To further improve the performance of assistive device,some investigators simultaneously used several modalitiesof assistive devices to reach the specific assistive purposes.Bhatlawande et al. [135] installed an RGB camera and anultrasonic sensor on a belt and a bracelet, respectively, forassisting the blind people in walking. Based on results ofevaluation experiment with 15 blind people, the dual-modeassistive device exhibited excellent performance: 93.33%participants expressed satisfaction, 86.66% comprehendedits operational convenience and 80% appreciated the com-fort of the system. Sivagami et al. [136] also developeddual-mode assistive devices containing two modalities viz.glasses and a belt for VIPs and blind people to travel underunknown circumstances. Altaha and Rhee [137] proposedthree different modalities viz. jacket, glove and cane forobstacle detection. They arranged three ultrasonic sensorson the front, left and right sides, respectively, thus allow-ing us not only to detect the presence of nearby objectsbut also to measure the distance of objects from users.We suggest that they can in future use these three assistivedevices in combination to increase the detection range anddistance.6. Conclusion and ProspectiveAlthough numerous assistive devices are available, theyare not yet effectively adopted by VIPs and blind people.One reason is that these assistive devices can only act ina restricted spatial range due to their limited sensors andfeedback modes. The other reason is that the performanceof these assistive devices is not effectively validated. Asshown in the aforementioned tables, in many cases, onlyblindfolded sighted subjects were invited to validation ex-periments. 
Studying changes in the connectivity of the functional areas of the human brain can help us understand the altered perception mechanisms of VIPs and blind people [143]. Because congenitally blind people rely more on auditory or tactile information, the connectivity of their multisensory brain areas is more complicated [144]. The introduction of brain imaging is therefore valuable for the design of assistive devices. Fortunately, several reviews in a recent special issue of Neuroscience and Biobehavioral Reviews cover the spectrum of SSDs and their relevance for understanding the human brain (http://www.sciencedirect.com/science/journal/01497634/41). In addition, we can develop better assistive devices by following the ideas of bionics [145].

Currently, the performance of assistive devices is rarely or inadequately validated with VIPs and blind individuals. As the cognitive strategies of VIPs and sighted people differ significantly, performance validated with blindfolded sighted people is not guaranteed to represent that of VIPs and blind people [69]. It is therefore necessary to invite many VIPs and blind people from different blind associations to test the performance of a developed assistive device. Furthermore, real-world scenarios are far more complicated, and testing environments should cover the possible application scenarios as fully as possible.

Acknowledgement

This work was sponsored by the Shanghai Sailing Program (No. 19YF1414100), the National Natural Science Foundation of China (No. 61831015, No. 61901172), the STCSM (No. 18DZ2270700), and the China Postdoctoral Science Foundation funded project (No. 2016M600315). The authors would also like to acknowledge Ms. Huijing Huang, Ms. Shuping Li, and Mr. Joel Disu for providing assistance with the English language revision.

References

[1] World Health Organization, Visual impairment and blindness, 2017. Available from: http://www.who.int/mediacentre/factsheets/fs282/en/.
[2] M. Gori, G. Cappagli, A. Tonelli, G. Baud-Bovy, and S. Finocchietti, Devices for visually impaired people: High technological devices with low user acceptance and no adaptability for children, Neuroscience & Biobehavioral Reviews, 69(Supplement C), 2016, 79–88.
[3] T. Nakamura, Quantitative analysis of gait in the visually impaired, Disability & Rehabilitation, 19(5), 1997, 194–197.
[4] A. Bhowmick and S.M. Hazarika, An insight into assistive technology for the visually impaired and blind people: State-of-the-art and future trends, Journal on Multimodal User Interfaces, 11(2), 2017, 149–172.
[5] M.C. Domingo, An overview of the Internet of Things for people with disabilities, Journal of Network and Computer Applications, 35(2), 2012, 584–596.
[6] D. Dakopoulos and N.G. Bourbakis, Wearable obstacle avoidance electronic travel aids for blind: A survey, IEEE Transactions on Systems, Man & Cybernetics, Part C, 40(1), 2009, 25–35.
[7] J.M. Batterman, V.F. Martin, D. Yeung, and B.N. Walker, Connected cane: Tactile button input for controlling gestures of iOS VoiceOver embedded in a white cane, Assistive Technology, 30(2), 2018, 91–99.
[8] J.R. Terven, J. Salas, and B. Raducanu, New opportunities for computer vision-based assistive technology systems for the visually impaired, Computer, 47(4), 2014, 52–58.
[9] R. Velázquez, Wearable assistive devices for the blind, Lecture Notes in Electrical Engineering, 75, 2016, 331–349.
[10] W. Elmannai and K. Elleithy, Sensor-based assistive devices for visually-impaired people: Current status, challenges, and future directions, Sensors, 17(3), 2017, 565.
[11] P.M. Lewis, L.N. Ayton, R.H. Guymer, et al., Advances in implantable bionic devices for blindness: A review, ANZ Journal of Surgery, 86(9), 2016, 654–659.
[12] L. Renier and A.G.D. Volder, Sensory substitution devices (New York, USA: Oxford Handbooks, 2013).
[13] G. Motta, T. Ma, K. Liu, et al., Overview of smart white canes: Connected smart cane from front end to back end, in R. Velazquez (ed.), Mobility of visually impaired people (Cham: Springer, 2018), 469–535.
[14] W. Zhang, Y. Lin, and N. Sinha, On the function-behavior-structure model for design, Proceedings of the Canadian Engineering Education Association, 2005, 1–8.
[15] W. Zhang and J. Wang, Design theory and methodology for enterprise systems, Enterprise Information Systems, 10, 2016, 245–248.
[16] Z.M. Zhang, Q. An, J.W. Li, and W.J. Zhang, Piezoelectric friction–inertia actuator—A critical review and future perspective, The International Journal of Advanced Manufacturing Technology, 62(5–8), 2012, 669–685.
[17] Y. Lin, Towards intelligent human–machine interactions: Human assistance systems (HAS), ASME Magazine Special Issue on Human–Machine Interactions, 139(06), 2017, 4–8.
[18] D.I. Anderson, J.J. Campos, D.C. Witherington, et al., The role of locomotion in psychological development, Frontiers in Psychology, 4(2), 2013, 1–7.
[19] A. Mihailovic, B.K. Swenor, D.S. Friedman, S.K. West, L.N. Gitlin, and P.Y. Ramulu, Gait implications of visual field damage from glaucoma, Translational Vision Science & Technology, 6(3), 2017, 23.
[20] K.A. Turano, D.R. Geruschat, F.H. Baker, J.W. Stahl, and M.D. Shapiro, Direction of gaze while walking a simple route: Persons with normal vision and persons with retinitis pigmentosa, Optometry & Vision Science, 78(9), 2001, 667–675.
[21] P.A. Aspinall, S. Borooah, C. Al Alouch, et al., Gaze and pupil changes during navigation in age-related macular degeneration, British Journal of Ophthalmology, 98(10), 2014, 1393–1397.
[22] A. Pasqualotto and M.J. Proulx, The role of visual experience for the neural basis of spatial cognition, Neuroscience & Biobehavioral Reviews, 36(4), 2012, 1179–1187.
[23] A. Pasqualotto, J.S.Y. Lam, and M.J. Proulx, Congenital blindness improves semantic and episodic memory, Behavioural Brain Research, 244(Supplement C), 2013, 162–165.
[24] E. Peli and J.-H. Jung, Multiplexing prisms for field expansion, Optometry and Vision Science, 94(8), 2017, 817–829.
[25] A.D. Hwang and E. Peli, An augmented-reality edge enhancement application for Google Glass, Optometry and Vision Science: Official Publication of the American Academy of Optometry, 91(8), 2014, 1021–1030.
[26] C. Hu, G. Zhai, and D. Li, An augmented-reality night vision enhancement application for see-through glasses, IEEE International Conference on Multimedia & Expo Workshops, 2015.
[27] E.M. Schmidt, M.J. Bak, F.T. Hambrecht, C.V. Kufta, D.K. O'Rourke, and P. Vallabhanath, Feasibility of a visual prosthesis for the blind based on intracortical microstimulation of the visual cortex, Brain, 119(2), 1996, 507–522.
[28] I. Bókkon, Phosphene phenomenon: A new concept, BioSystems, 92(2), 2008, 168–174.
[29] P.M. Lewis, H.M. Ackland, A.J. Lowery, and J.V. Rosenfeld, Restoration of vision in blind individuals using bionic devices: A review with a focus on cortical visual prostheses, Brain Research, 1595, 2015, 51–73.
[30] A. Lécuyer, P. Mobuchon, C. Mégard, J. Perret, C. Andriot, and J.-P. Colinot, HOMERE: A multimodal system for visually impaired people to explore virtual environments, IEEE Virtual Reality, 2003, 251–258.
[31] S. Wong, Traveling with blindness: A qualitative space-time approach to understanding visual impairment and urban mobility, Health & Place, 49, 2018, 85–92.
[32] E.T. Hall, The hidden dimension, Hidden Dimension, 6(1), 1966, 94.
[33] P. Strumillo, Electronic interfaces aiding the visually impaired in environmental access, mobility and navigation, IEEE 3rd International Conference on Human System Interaction, 2010, 17–24.
[34] B. Tversky, Spatial intelligence: Why it matters from birth through the lifespan (New York, USA: Routledge, 2017).
[35] B. Tversky, On abstraction and ambiguity, in J.S. Gero (ed.), Studying visual and spatial reasoning for design creativity (Dordrecht: Springer Netherlands, 2015), 215–223.
[36] A. Pasqualotto, M.J. Spiller, A.S. Jansari, and M.J. Proulx, Visual experience facilitates allocentric spatial representation, Behavioural Brain Research, 236, 2013, 175–179.
[37] A. Pasqualotto and T. Esenkaya, Sensory substitution: The spatial updating of auditory scenes "mimics" the spatial updating of visual scenes, Frontiers in Behavioral Neuroscience, 10, 2016, 79.
[38] B. Röder, F. Rösler, and H.J. Neville, Event-related potentials during auditory language processing in congenitally blind and sighted people, Neuropsychologia, 38(11), 2000, 1482–1502.
[39] B. Röder, F. Rösler, and H.J. Neville, Auditory memory in congenitally blind adults: A behavioral-electrophysiological investigation, Cognitive Brain Research, 11(2), 2001, 289–303.
[40] H. Siamian, M. Hassanzadeh, F. Nooshinfard, and N. Hariri, Information seeking behavior in blind people of Iran: A survey based on various experiences faced by them, Health Sciences, 3(4), 2016, 1–5.
[41] A.J. Kolarik, R. Raman, B.C. Moore, S. Cirstea, S. Gopalakrishnan, and S. Pardhan, Partial visual loss affects self-reports of hearing abilities measured using a modified version of the speech, spatial, and qualities of hearing questionnaire, Frontiers in Psychology, 8, 2017, 1–16.
[42] A.J. Kolarik, A.C. Scarfe, B.C.J. Moore, and S. Pardhan, Blindness enhances auditory obstacle circumvention: Assessing echolocation, sensory substitution, and visual-based navigation, PLoS One, 12(4), 2017, e0175750.
[43] A.C. Livingstone, G.J. Christie, R.D. Wright, and J.J. McDonald, Signal enhancement, not active suppression, follows the contingent capture of visual attention, Journal of Experimental Psychology: Human Perception and Performance, 43(2), 2017, 219–224.
[44] P. Voss, Auditory spatial perception without vision, Frontiers in Psychology, 7, 2016, 1–7.
[45] C. Lane, S. Kanjlia, H. Richardson, A. Fulton, A. Omaki, and M. Bedny, Reduced left lateralization of language in congenitally blind individuals, Journal of Cognitive Neuroscience, 29(1), 2016, 1–14.
[46] K.J. Price, M. Lin, J. Feng, R. Goldman, A. Sears, and J. Jacko, Nomadic speech-based text entry: A decision model strategy for improved speech to text processing, International Journal of Human–Computer Interaction, 25(7), 2009, 692–706.
[47] R.J. Lutz, Prototyping and evaluation of landcons, ACM SIGACCESS Accessibility & Computing, 86, 2006, 8–11.
[48] J. Kostiainen, C. Erkut, and F.B. Piella, Design of an audio-based mobile journey planner application, International Academic MindTrek Conference: Envisioning Future Media Environments, 2011, 107–113.
[49] M. Jeon and B.N. Walker, Spindex (speech index) improves auditory menu acceptance and navigation performance, ACM Transactions on Accessible Computing, 3(3), 2011, 1–26.
[50] B.K. Davison, Menu navigation with in-vehicle technologies: Auditory menu cues improve dual task performance, preference, and workload, International Journal of Human–Computer Interaction, 31(1), 2015, 1–16.
[51] Á. Csapó and G. Wersényi, Overview of auditory representations in human–machine interfaces, ACM Computing Surveys, 46(2), 2013, 1–23.
[52] I. Hussain, L. Chen, H.T. Mirza, G. Chen, and S.-U. Hassan, Right mix of speech and non-speech: Hybrid auditory feedback in mobility assistance of the visually impaired, Universal Access in the Information Society, 14(4), 2015, 527–536.
[53] I. Hussain, L. Chen, H.T. Mirza, L. Wang, G. Chen, and I. Memon, Chinese-based spearcons: Improving pedestrian navigation performance in eyes-free environment, International Journal of Human–Computer Interaction, 32(6), 2016, 460–469.
[54] I. Hussain, L. Chen, H.T. Mirza, K. Xing, and G. Chen, A comparative study of sonification methods to represent distance and forward-direction in pedestrian navigation, International Journal of Human–Computer Interaction, 30(9), 2014, 740–751.
[55] E.L. Horton, R. Renganathan, B.N. Toth, et al., A review of principles in design and usability testing of tactile technology for individuals with visual impairments, Assistive Technology, 29(1), 2017, 28–36.
[56] Y. Zeng, D. Li, and G. Zhai, Indoor localization system for individuals with visual impairment, in J. Zhou (ed.), International forum of digital TV & wireless multimedia communication (Shanghai: Springer, 2018), 478–491.
[57] M.A. Heller, M. McCarthy, and A. Clark, Pattern perception and pictures for the blind, Psicológica, 26(1), 2005, 161–171.
[58] V. Occelli, S. Lacey, C. Stephens, T. John, and K. Sathian, Haptic object recognition is view-independent in early blind but not sighted people, Perception, 45(3), 2016, 337–345.
[59] D. Picard, J.-M. Albaret, and A. Mazella, Haptic identification of raised-line drawings by children, adolescents and young adults: An age-related skill, Haptics-e, 5(2), 2013, 24–28.
[60] C. Carpio, M. Amérigo, and M. Durán, Study of an inclusive intervention programme in pictorial perception with blind and sighted students, European Journal of Special Needs Education, 32(4), 2017, 525–542.
[61] I. Puspitawati, A. Jebrane, and A. Vinter, Local and global processing in blind and sighted children in a naming and drawing task, Child Development, 85(3), 2014, 1077–1090.
[62] B. Andò, C. Lombardo, and V. Marletta, Smart homecare technologies for the visually impaired: Recent advances, Smart Homecare Technology and TeleHealth, 3, 2015, 9–16.
[63] P. Bach-y-Rita, C.C. Collins, F.A. Saunders, B. White, and L. Scadden, Vision substitution by tactile image projection, Nature, 221(5184), 1969, 963–964.
[64] D.-R. Chebat, F.C. Schneider, R. Kupers, and M. Ptito, Navigation with a sensory substitution device in congenitally blind individuals, Neuroreport, 22(7), 2011, 342–347.
[65] P.B. Meijer, An experimental system for auditory image representations, IEEE Transactions on Biomedical Engineering, 39(2), 1992, 112–121.
[66] M.J. Proulx, P. Stoerig, E. Ludowig, and I. Knoll, Seeing 'where' through the ears: Effects of learning-by-doing and long-term sensory deprivation on localization based on image-to-sound substitution, PLoS One, 3(3), 2008, e1840.
[67] D.J. Brown and M.J. Proulx, Audio–vision substitution for blind individuals: Addressing human information processing capacity limitations, IEEE Journal of Selected Topics in Signal Processing, 10(5), 2016, 924–931.
[68] A. Amedi, W.M. Stern, J.A. Camprodon, et al., Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex, Nature Neuroscience, 10(6), 2007, 687.
[69] L.F. Cuturi, E. Aggius-Vella, C. Campus, A. Parmiggiani, and M. Gori, From science to technology: Orientation and mobility in blind children and adults, Neuroscience & Biobehavioral Reviews, 71(Supplement C), 2016, 240–251.
[70] D. Bolgiano and E. Meeks, A laser cane for the blind, IEEE Journal of Quantum Electronics, 3(6), 2003, 268–268.
[71] P. Vera, D. Zenteno, and J. Salas, A smartphone-based virtual white cane, Pattern Analysis and Applications, 17(3), 2014, 623–632.
[72] Q.K. Dang, Y. Chee, D.D. Pham, and Y.S. Suh, A virtual blind cane using a line laser-based vision system and an inertial measurement unit, Sensors, 16(1), 2016, 1–18.
[73] A. Majeed and S. Baadel, Facial recognition cane for the visually impaired (Springer International Publishing, 2017), 394–405.
[74] C. Ye, S. Hong, X. Qian, and W. Wu, Co-robotic cane: A new robotic navigation aid for the visually impaired, IEEE Systems, Man, and Cybernetics Magazine, 2(2), 2016, 33–42.
[75] K. Kumar, B. Champaty, K. Uvanesh, and R. Chachan, Development of an ultrasonic cane as a navigation aid for the blind people, International Conference on Control, Instrumentation, Communication and Computational Technologies, 2014, 475–479.
[76] S. Gupta, I. Sharma, A. Tiwari, and G. Chitranshi, Advanced guide cane for the visually impaired people, International Conference on Next Generation Computing Technologies, 2016, 452–455.
[77] H.R. Shah, D.B. Uchil, S.S. Rane, P. Shete, and B.E. Student, Smart stick for blind using Arduino, ultrasonic sensor and Android, International Journal of Engineering Science, 7(4), 2017, 10929–10933.
[78] S. Sharma, M. Gupta, A. Kumar, M. Tripathi, and M.S. Gaur, Multiple distance sensors based smart stick for visually impaired people, Computing and Communication Workshop and Conference, 2017, 1–5.
[79] Bay Advanced Technologies Ltd., 2016. Available from: http://www.ksonar.com/.
[80] G. Buchs, N. Simon, S. Maidenbaum, and A. Amedi, Waist-up protection for blind individuals using the EyeCane as a primary and secondary mobility aid, Restorative Neurology and Neuroscience, 35(2), 2017, 225–235.
[81] A. Krishnan, G. Deepakraj, N. Nishanth, and K.M. Anandkumar, Autonomous walking stick for the blind using echolocation and image processing, International Conference on Contemporary Computing and Informatics, 2017, 13–16.
[82] Y. Niitsu, T. Taniguchi, and K. Kawashima, Detection and notification of dangerous obstacles and places for visually impaired persons using a smart cane, Seventh International Conference on Mobile Computing and Ubiquitous Networking, 2014, 68–69.
[83] A.C. Scherlen, J.C. Dumas, B. Guedj, and A. Vignot, "RecognizeCane": The new concept of a cane which recognizes the most common objects and safety clues, International Conference of the IEEE Engineering in Medicine and Biology Society, 2007, 6356–6359.
[84] L. Kim, S. Park, S. Lee, and S. Ha, An electronic traveler aid for the blind using multiple range sensors, IEICE Electronics Express, 6(11), 2009, 794–799.
[85] I. Shim and J. Yoon, A robotic cane based on interactive technology, IECON, 2002, 2249–2254.
[86] M.Y. Fan, J.T. Bao, and H.R. Tang, A guide cane system for assisting the blind in travelling in outdoor environments, Applied Mechanics & Materials, 631–632, 2014, 568–571.
[87] H. Takizawa, S. Yamaguchi, M. Aoyagi, N. Ezaki, and S. Mizuno, Kinect cane: An assistive system for the visually impaired based on the concept of object recognition aid, Personal and Ubiquitous Computing, 19(5–6), 2015, 955–965.
[88] A.M. Kassim, T. Yasuno, H. Suzuki, H.I. Jaafar, and M.S.M. Aras, Indoor navigation system based on passive RFID transponder with digital compass for visually impaired people, International Journal of Advanced Computer Science & Applications, 7(2), 2016, 604–611.
[89] S. Pisa, E. Pittella, and E. Piuzzi, Serial patch array antenna for an FMCW radar housed in a white cane, International Journal of Antennas and Propagation, 2016, 2016, 1–10.
[90] S.A.D. Silva and D. Dias, A sensor platform for the visually impaired to walk straight avoiding obstacles, International Conference on Sensing Technology, 2016.
[91] R. Satpute, M. Mansuri, D. Kulkarni, and A. Sawant, Smart cane for visually impaired person by using Arduino, Imperial Journal of Interdisciplinary Research, 3(5), 2016, 1104–1108.
[92] J.-R. Rizzo, K. Conti, T. Thomas, T.E. Hudson, R. Wall Emerson, and D.S. Kim, A new primary mobility tool for the visually impaired: A white cane—adaptive mobility device hybrid, Assistive Technology, 2017, 1–7.
[93] T. Sugimoto, S. Nakashima, and Y. Kitazono, Development of guiding walking support device for visually impaired people with the GPS, in R. Lee (ed.), Applied computing and information technology (Cham: Springer International Publishing, 2017), 77–89.
[94] S. Wankhade, M. Bichukale, S. Desai, S. Kamthe, and A. Borate, Smart stick for blind people with live video feed, International Research Journal of Engineering and Technology, 4(3), 2017, 1774–1778.
[95] D. De Alwis and Y.C. Samarawickrama, Low cost ultrasonic based wide detection range smart walking stick for visually impaired, International Journal of Multidisciplinary Studies, 3(2), 2016, 123–130.
[96] M. Pinto, R.D. Stanley, S. Malagi, and M.K. Ajithanjaya Kumar, Smart cane for the visually impaired, American Journal of Intelligent Systems, 7(3), 2017, 73–76.
[97] G.Y. Jeong and K.H. Yu, Multi-section sensing and vibrotactile perception for walking guide of visually impaired person, Sensors, 16(7), 2016, 1–19.
[98] M.S. Sadi, S. Mahmud, Md.M. Kamal, and A.I. Bayazid, Automated walk-in assistant for the blinds, International Conference on Electrical Engineering and Information & Communication Technology, 2014.
[99] A.M. Kassim, T. Yasuno, H. Suzuki, et al., Conceptual design and implementation of electronic spectacle based obstacle detection for visually impaired persons, Journal of Advanced Mechanical Design, Systems & Manufacturing, 10(7), 2016, 1–12.
[100] C. Yi and Y. Tian, Assistive text reading from natural scene for blind persons, in G. Hua and X.-S. Hua (eds.), Mobile cloud visual media computing: From interaction to service (Cham: Springer International Publishing, 2015), 219–241.
[101] E.A. Hassan and T.B. Tang, Smart glasses for the visually impaired people, International Conference on Computers Helping People with Special Needs, 2016, 579–582.
[102] L. Everding, L. Walger, V.S. Ghaderi, and J. Conradt, A mobility device for the blind with improved vertical resolution using dynamic vision sensors, IEEE International Conference on E-Health Networking, Applications and Services, 2016, 1–5.
[103] S. Wang, X. Yang, and Y. Tian, Detecting signage and doors for blind navigation and wayfinding, Network Modeling Analysis in Health Informatics and Bioinformatics, 2(2), 2013, 81–93.
[104] S. Pundlik, H. Yi, R. Liu, E. Peli, and G. Luo, Magnifying smartphone screen using Google Glass for low-vision users, IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(1), 2017, 52–61.
[105] L.B. Neto, F. Grijalva, V.R.M.L. Maike, et al., A Kinect-based wearable face recognition system to aid visually impaired users, IEEE Transactions on Human-Machine Systems, 47(1), 2017, 52–64.
[106] C. Stoll, R. Palluel-Germain, V. Fristot, D. Pellerin, D. Alleysson, and C. Graff, Navigating from a depth image converted into sound, Applied Bionics and Biomechanics, 2015, 2015, 1–9.
[107] S.L. Hicks, I. Wilson, J.J. van Rheede, R.E. MacLaren, S.M. Downes, and C. Kennard, Improved mobility with depth-based residual vision glasses, Investigative Ophthalmology & Visual Science, 55(13), 2014, 2153–2153.
[108] Y. Wu, C.P. Chen, L. Zhou, Y. Li, B. Yu, and H. Jin, Design of see-through near-eye display for presbyopia, Optics Express, 25(8), 2017, 8937–8949.
[109] A. Berger, A. Vokalova, F. Maly, and P. Poulova, Google Glass used as assistive technology: Its utilization for blind and visually impaired people, International Conference on Mobile Web and Information Systems, 2017, 70–82.
[110] R. McNaney, J. Vines, D. Roggen, et al., Exploring the acceptability of Google Glass as an everyday assistive device for people with Parkinson's, Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems, 2014, 2551–2554.
[111] J.-H. Jung, D. Aloni, Y. Yitzhaky, and E. Peli, Active confocal imaging for visual prostheses, Vision Research, 111(Part B), 2015, 182–196.
[112] F. Lan, G. Zhai, and W. Lin, Lightweight smart glass system with audio aid for visually impaired people, TENCON 2015 – 2015 IEEE Region 10 Conference, 2016.
[113] Google Glass Inc., 2012. Available from: https://x.company/glass/.
[114] eSight 3 Co., 2017. Available from: https://www.esighteyewear.com/.
[115] OrCam Technologies Ltd., 2015. Available from: https://www.orcam.com/.
[116] Enchroma Inc., 2013. Available from: http://enchroma.com/shop/.
[117] Hangzhou KR-VISION Technology Co., Ltd., Intoer, 2017. Available from: http://www.krvision.cn/.
[118] Wicab, Inc., BrainPort® V100, 2015. Available from: https://www.wicab.com/.
[119] S. Wang, H. Pan, C. Zhang, and Y. Tian, RGB-D image-based detection of stairs, pedestrian crosswalks and traffic signs, Journal of Visual Communication and Image Representation, 25(2), 2014, 263–272.
[120] T.T. Satue and M.B.A. Miah, An obstacle detection in order to reduce navigation difficulties for visually impaired people, International Journal of Computer Applications, 161(6), 2017, 39–41.
[121] V.C. Sekhar, S. Bora, M. Das, P.K. Manchi, S. Josephine, and R. Paily, Design and implementation of blind assistance system using real time stereo vision algorithms, International Conference on VLSI Design and 2016 International Conference on Embedded Systems, 2016, 421–426.
[122] A.S. Rao, J. Gubbi, M. Palaniswami, and E. Wong, A vision-based system to detect potholes and uneven surfaces for assisting blind people, IEEE International Conference on Communications, 2016, 1–6.
[123] P. Gharani and H.A. Karimi, Context-aware obstacle detection for navigation by visually impaired, Image and Vision Computing, 64(Supplement C), 2017, 103–115.
[124] S. Shoval, I. Ulrich, and J. Borenstein, NavBelt and the Guide-Cane [obstacle-avoidance systems for the blind and visually impaired], IEEE Robotics & Automation Magazine, 10(1), 2003, 9–20.
[125] K.P. Reshma, Ultrasonic spectacles design and waist-belt for blind navigation, International Journal of Engineering Research and Innovative Technology, 1(1), 2014, 19–22.
[126] A. Wattal, A. Ojha, and M. Kumar, Obstacle detection for visually impaired using Raspberry Pi and ultrasonic sensors, National Conference on Product Design, 2016, 1–5.
[127] B. Mocanu, R. Tapu, and T. Zaharia, When ultrasonic sensors and computer vision join forces for efficient obstacle detection and recognition, Sensors, 16(11), 2016, 1–23.
[128] A.C. Pattanshetti, S.I.A. Bhat, and H.G. Choudhari, Advanced bat hat for the visually impaired, Imperial Journal of Interdisciplinary Research, 2(6), 2016, 879–884.
[129] S. Bhatlawande, M. Mahadevappa, and J. Mukhopadhyay, Way-finding electronic bracelet for visually impaired people, Point-of-Care Healthcare Technologies, 2013, 260–263.
[130] R. Rangarajan and M.B. Benslija, Voice recognition robotic dog guides for visually impaired people, IOSR Journal of Electronics and Communication Engineering, 9(2), 2014, 133–139.
[131] B.S. Lin, C.C. Lee, and P.Y. Chiang, Simple smartphone-based guiding system for visually impaired people, Sensors, 17(6), 2017, 1–22.
[132] J.-H. Lee, D. Kim, and B.-S. Shin, A wearable guidance system incorporating multiple sensors for visually impaired persons, in J.J. Park, Y. Pan, C.-S. Kim, and Y. Yang (eds.), Future Information Technology: FutureTech 2014 (Berlin, Heidelberg: Springer, 2014), 541–548.
[133] C.-G. Kim and B.-S. Song, Proposal of a simultaneous ultrasound emission for efficient obstacle searching in autonomous wheelchairs, Biomedical Engineering Letters, 3(1), 2013, 47–50.
[134] W. Fang, G. Zhai, and X. Yang, A flash light system for individuals with visual impairment based on TPVM, 2016 7th International Conference on Cloud Computing and Big Data (CCBD), 2016, 362–366.
[135] S. Bhatlawande, A. Sunkari, M. Mahadevappa, et al., Electronic bracelet and vision-enabled waist-belt for mobility of visually impaired people, Assistive Technology: The Official Journal of RESNA, 26(4), 2014, 186–195.
[136] S. Sivagami, V. Kushmitha, D. Dinesh, T. Amala, and M. Anu Priyam, The Navaid – A navigation system for visually challenged obstacle detection using ultrasonic sensors, International Journal of Advance Research, Ideas and Innovations in Technology, 3(2), 2017, 1207–1211.
[137] I.R. Altaha and J.M. Rhee, Blindness support using a 3D sound system based on a proximity sensor, IEEE International Conference on Consumer Electronics, 2016, 51–54.
[138] T. Froneman, D. van den Heever, and K. Dellimore, Development of a wearable support system to aid the visually impaired in independent mobilization and navigation, 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2017, 783–786.
[139] M.L. Mekhalfi, F. Melgani, Y. Bazi, and N. Alajlan, Fast indoor scene description for blind people with multiresolution random projections, Journal of Visual Communication and Image Representation, 44(Supplement C), 2017, 95–105.
[140] T.F. Wu, P.S. Tsai, N.T. Hu, and J.Y. Chen, Intelligent wheeled mobile robots for blind navigation application, Engineering Computations, 34(2), 2017, 214–238.
[141] A. Spiers and A. Dollar, Design and evaluation of shape-changing haptic interfaces for pedestrian navigation assistance, IEEE Transactions on Haptics, 10(1), 2016, 17–28.
[142] J.-R. Rizzo, Y. Pan, T. Hudson, E.K. Wong, and Y. Fang, Sensor fusion for ecologically valid obstacle identification: Building a comprehensive assistive technology platform for the visually impaired, 7th International Conference on Modeling, Simulation, and Applied Optimization, 2017, 1–5.
[143] M. Bedny, Evidence from blindness for a cognitively pluripotent cortex, Trends in Cognitive Sciences, 21(9), 2017, 637–648.
[144] K. Hötting and B. Röder, Auditory and auditory-tactile processing in congenitally blind humans, Hearing Research, 258(1), 2009, 165–174.
[145] J. Ni, Y. Chen, K. Wang, and S.X. Yang, An improved vision-based SLAM approach inspired from animal spatial cognition, International Journal of Robotics and Automation, 2019. DOI: 10.2316/J.2019.206-0116.