Prediction-Driven Correlation of Audio with Generic Music Templates

F. Seifert (Germany)

Keywords

music audio identification, multimedia database indexing

Abstract

Nowadays, identification of music is accomplished by data-driven approaches. Yet, human music perception works top-down. Consequently, all existing approaches remain inflexible, since they do not simulate the way humans process audio and music. Additionally, besides audio fingerprinting and segment discovery techniques, no essential strategies for content-based identification of music audio exist. Since fingerprinting techniques rely on statistical information and do not consider any semantics of music, they require each piece of music to be pre-recorded and thus pre-processed for identification. In contrast, we attempt to combine bottom-up and top-down strategies by applying the lead sheet model, a generic model for processing tonal music, to content-based audio identification and show how it can be adapted to handle audio. Finally, we will be capable of correlating several "similar" audio instances with only one given template, despite structural music variations or a broad spectral bandwidth.
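For illustration only, the following minimal Python sketch shows one way a frame-level feature sequence could be correlated against a template expanded from a lead-sheet-style chord list. The chord vocabulary, the chroma representation, and the sliding-window cosine scoring are assumptions made for this sketch; they are not the lead sheet model or the correlation method described in the paper.

```python
import numpy as np

# Pitch-class indices for a few triads; this chord vocabulary and the
# synthetic "observed" features below are illustrative assumptions,
# not the paper's lead sheet model.
CHORDS = {
    "C":  [0, 4, 7],
    "F":  [5, 9, 0],
    "G":  [7, 11, 2],
    "Am": [9, 0, 4],
}

def chord_to_chroma(name):
    """Binary 12-dimensional pitch-class profile for one chord symbol."""
    v = np.zeros(12)
    v[CHORDS[name]] = 1.0
    return v / np.linalg.norm(v)

def template_from_lead_sheet(chords, frames_per_chord=8):
    """Expand a chord sequence into a frame-level chroma template."""
    return np.vstack([np.tile(chord_to_chroma(c), (frames_per_chord, 1))
                      for c in chords])

def correlate(template, observed):
    """Mean cosine similarity between template frames and observed frames,
    evaluated at every possible alignment offset (sliding window)."""
    t_len = len(template)
    scores = []
    for start in range(len(observed) - t_len + 1):
        window = observed[start:start + t_len]
        num = np.sum(template * window, axis=1)
        den = (np.linalg.norm(template, axis=1)
               * np.linalg.norm(window, axis=1) + 1e-9)
        scores.append(np.mean(num / den))
    return max(scores) if scores else 0.0

if __name__ == "__main__":
    # Hypothetical lead-sheet excerpt and a noisy rendition of it; in
    # practice the observed chroma frames would be extracted from the
    # recorded audio signal rather than synthesized here.
    sheet = ["C", "G", "Am", "F"]
    template = template_from_lead_sheet(sheet)
    observed = template + 0.2 * np.random.default_rng(0).random(template.shape)
    print(f"template/audio correlation: {correlate(template, observed):.3f}")
```

A rigid frame-wise correlation like this cannot absorb the structural variations (repeats, omissions, tempo changes) that the paper targets; it merely illustrates the general idea of matching many audio renditions against a single symbolic template.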
