Multi-Biometrics Fusion (Heart Sound-Speech Authentication System)

Osama Al-hamdani, Ali Chekima, Jamal Dargham, Sh-Hussain Salleh, Alias Mohd Noor, and Fuad Noman

Keywords

Speaker Recognition, Piecewise-linear, Fusion, Vector Quantization

Abstract

Biometric recognition systems deployed in real-world environments often have to contend with adverse signal acquisition conditions, which can vary greatly. These include acoustic noise that can contaminate speech signals and artifacts that can alter heart sound signals. To overcome the resulting recognition errors, researchers apply various methods such as normalization, feature extraction, and classification. Recently, combining biometric modalities has proven to be an effective strategy for improving the performance of biometric systems. The approach in this paper is based on biometric recognition using the heart sound signal, a trait that cannot easily be copied. The Mel-Frequency Cepstral Coefficient (MFCC) is used as the feature vector and vector quantization (VQ) as the matching algorithm. A simple yet highly reliable method is introduced for biometric applications. Experimental results show that the recognition rate of the heart sound speaker identification (HS-SI) model is 81.9%, while the rate for the speech speaker identification (S-SI) model is 99.3%, on a database of 21 clients and 40 imposters. Heart sound speaker verification (HS-SV) provides an average EER of 17.8%, while the speech speaker verification (S-SV) model provides an average EER of 3.39%. To reach a higher security level, an alternative multimodal approach based on score fusion is implemented in the system. The best performance is obtained with simple-sum score fusion combined with a piecewise-linear normalization technique, which provides an EER of 0.69%.
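
The headline result comes from simple-sum fusion of piecewise-linearly normalized matcher scores. The abstract does not give the normalization breakpoints, so the following is only a minimal Python sketch under assumptions: a two-segment linear map anchored at an assumed per-matcher operating threshold t, with made-up score ranges; all function names and parameter values here are hypothetical, not the authors' implementation.

    # Illustrative sketch only. The two-segment mapping below is one common
    # piecewise-linear normalization scheme; the paper's actual breakpoints
    # are not stated in the abstract, so (lo, hi, t) are assumed values.
    import numpy as np

    def piecewise_linear_norm(scores, lo, hi, t):
        """Map raw matcher scores to [0, 1] with a two-segment linear map.

        Scores below the operating threshold t land in [0, 0.5], scores
        above it in [0.5, 1], so both matchers agree on where the
        genuine/imposter boundary sits after normalization.
        """
        s = np.clip(np.asarray(scores, dtype=float), lo, hi)
        return np.where(
            s <= t,
            0.5 * (s - lo) / (t - lo),        # lower segment -> [0, 0.5]
            0.5 + 0.5 * (s - t) / (hi - t),   # upper segment -> [0.5, 1]
        )

    def simple_sum_fusion(speech_scores, heart_scores,
                          speech_params, heart_params):
        """Simple-sum rule: add the two matchers' normalized scores."""
        s_norm = piecewise_linear_norm(speech_scores, *speech_params)
        h_norm = piecewise_linear_norm(heart_scores, *heart_params)
        return s_norm + h_norm

    # Example with made-up raw scores and made-up (lo, hi, t) parameters:
    fused = simple_sum_fusion(
        speech_scores=[0.2, 0.9, 0.7],
        heart_scores=[12.0, 30.0, 25.0],
        speech_params=(0.0, 1.0, 0.5),    # (min, max, threshold) - assumed
        heart_params=(5.0, 40.0, 20.0),
    )
    print(fused)  # higher fused score -> more likely a genuine client

After normalization, both matchers contribute on a comparable scale, which is why the simple sum can outperform either modality alone; a verification decision would then compare the fused score against a single threshold chosen at the desired EER operating point.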
