2005, Vol. 15, No. 2, pp. 287-294
Article title

On naive Bayes in speech recognition

Abstract
The currently dominant speech recognition technology, hidden Markov modeling (HMM), has long been criticized for its simplistic assumptions about speech, and especially for the naive Bayes combination rule inherent in it. Many sophisticated alternative models have been proposed over the last decade. These, however, have demonstrated only modest improvements and brought no paradigm shift in technology. The goal of this paper is to examine why HMM performs so well in spite of the incorrect bias introduced by the naive Bayes assumption. To do this, we create an algorithmic framework that allows us to experiment with alternative combination schemes and helps us understand the factors that influence recognition performance. From the findings we argue that the bias peculiar to the naive Bayes rule is not really detrimental to phoneme classification performance. Furthermore, it ensures consistent behavior in outlier modeling, allowing efficient management of insertion and deletion errors.
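The naive Bayes combination rule mentioned above multiplies per-frame likelihoods under a frame-independence assumption, so a segment score is just the product of frame scores. A minimal sketch of this rule next to one alternative combination scheme (per-frame averaging); the numbers are hypothetical and the function names are illustrative, not from the paper:

```python
import math

def naive_bayes_score(frame_likelihoods):
    """Naive Bayes combination: under the frame-independence
    assumption the segment likelihood is the product of the
    per-frame likelihoods, computed here in log space for
    numerical stability."""
    return sum(math.log(p) for p in frame_likelihoods)

def average_score(frame_likelihoods):
    """An alternative combination scheme: the mean per-frame
    log-likelihood, which removes the product's bias toward
    shorter segments."""
    return naive_bayes_score(frame_likelihoods) / len(frame_likelihoods)

# Hypothetical per-frame likelihoods P(frame | phoneme) for two candidates
frames_a = [0.8, 0.7, 0.9]   # candidate phoneme A
frames_b = [0.6, 0.9, 0.85]  # candidate phoneme B

print(naive_bayes_score(frames_a) > naive_bayes_score(frames_b))  # → True
```

Because the naive Bayes score sums a log term per frame, longer segments accumulate larger penalties, which is the length-dependent behavior the paper connects to insertion and deletion errors.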
Affiliations
  • Research Group on Artificial Intelligence, H-6720 Szeged, Aradi vertanuk tere 1., Hungary