Functional Model to Replicate the Representations of Human Vowels in the Primary Auditory Cortex

Kenji Ozawa, Tetsuya Goda, Ling Qin, and Yu Sato


Biomedical modelling, Auditory functional model, Primary auditory cortex, Brain–machine interface


Previously, we investigated representations of human vowels in the primary auditory cortex (A1) of awake cats [Qin et al., J. Neurophysiol., 99, 2305–2319, 2008]. Herein we develop an auditory model with a multichannel neural pathway to replicate such representations in A1 neurons. The model consists of seven blocks: the basilar membrane (BM), inner hair cell (IHC), primary auditory nerve (AN), ventral cochlear nucleus (VCN), inferior colliculus (IC), medial geniculate body (MGB), and A1. Weighted summations across the channels are introduced into the VCN, IC, MGB, and A1 blocks because lateral inhibition is observed at these connections. We evaluated the constructed model by simulating the responses of A1 neurons to human vowels. The model's outputs for five Japanese vowels successfully represented the vowel formants. Because this model generates pulse trains similar to the neural firings of A1 neurons, it can be used to convert acoustical signals into electrical stimuli as part of an auditory brain–machine interface for individuals with severe hearing impairment.
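The multichannel pathway described above can be sketched as a chain of processing stages, each applying a cross-channel weighted summation. A minimal illustration in Python follows; the difference-of-Gaussians weighting used here is a common way to model lateral inhibition, and all parameter values and function names are assumptions for illustration, not the authors' actual model.

```python
import numpy as np

def lateral_inhibition_weights(n_channels, sigma_e=1.0, sigma_i=3.0, k=0.5):
    # Difference-of-Gaussians kernel across channels: narrow excitation
    # minus broader inhibition (a hypothetical stand-in for the weighted
    # summations in the VCN, IC, MGB, and A1 blocks).
    idx = np.arange(n_channels)
    d = idx[:, None] - idx[None, :]
    excite = np.exp(-d**2 / (2 * sigma_e**2))
    inhibit = k * np.exp(-d**2 / (2 * sigma_i**2))
    return excite - inhibit

def block(x, w):
    # One processing stage: weighted summation across channels,
    # followed by half-wave rectification (firing rates are nonnegative).
    return np.maximum(w @ x, 0.0)

# Stand-in for the rate output of the AN stage: channels x time samples.
n_channels, n_samples = 32, 100
rng = np.random.default_rng(0)
x = np.abs(rng.standard_normal((n_channels, n_samples)))

w = lateral_inhibition_weights(n_channels)
y = x
for _ in range(4):  # VCN -> IC -> MGB -> A1
    y = block(y, w)

print(y.shape)  # (32, 100)
```

In a sketch like this, the lateral inhibition sharpens spectral contrasts across channels, which is what allows downstream stages to emphasize formant peaks.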
