Generalising Power of a Learned Hierarchical Hidden Markov Model

O. Lamont and G. Mann (Australia)

Keywords

Hierarchical Hidden Markov Models, spoken dialogue systems, context-free grammar induction

Abstract

In this work we explore how applying graph-modifying algorithms to a Hierarchical Hidden Markov Model can simplify the specification of input-output mapping rules in a spoken language dialogue system. A set of state-chunking learning algorithms, capable of inducing a stochastic context-free grammar from a small amount of question-and-answer training data, has been created for use in the Speech Librarian, our test implementation. We quantitatively estimate the system's power to induce broad but accurate coverage of linguistic queries from a relatively small set of question-and-answer pairs, using subjective judgements of semantic relevance weighted by probability of occurrence.
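The specific chunking algorithms are not detailed in this abstract; as a rough, hypothetical sketch, state chunking can be thought of as finding a frequently recurring subsequence of states in the training data and collapsing it into a single composite state (the seed of a sub-model in the hierarchy). All names below are illustrative, not taken from the paper:

```python
from collections import Counter

def find_chunk(sequences, n=2):
    """Return the most frequent length-n state subsequence and its count."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    chunk, freq = counts.most_common(1)[0]
    return chunk, freq

def apply_chunk(seq, chunk):
    """Replace each occurrence of chunk with one composite state."""
    out, i, n = [], 0, len(chunk)
    while i < len(seq):
        if tuple(seq[i:i + n]) == chunk:
            out.append(chunk)  # composite state standing in for a sub-model
            i += n
        else:
            out.append(seq[i])
            i += 1
    return out

# Toy question-and-answer-style state sequences (illustrative only)
sequences = [
    ["what", "is", "the", "title"],
    ["what", "is", "the", "author"],
    ["who", "is", "the", "author"],
]
chunk, freq = find_chunk(sequences, n=3)
```

Iterating this merge step, with transition probabilities re-estimated after each merge, is one plausible route from a flat state graph toward a hierarchy equivalent to a stochastic context-free grammar.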



IASTED