The currently dominant speech recognition technology, hidden Markov modeling, has long been criticized for its simplistic assumptions about speech, and especially for the naive Bayes combination rule inherent in it. Many sophisticated alternative models have been proposed over the last decade. These, however, have yielded only modest improvements and brought no paradigm shift in the technology. The goal of this paper is to examine why the HMM performs so well in spite of the incorrect bias introduced by the naive Bayes assumption. To this end we construct an algorithmic framework that allows us to experiment with alternative combination schemes and helps us understand the factors that influence recognition performance. From the findings we argue that the bias peculiar to the naive Bayes rule is not really detrimental to phoneme classification performance. Furthermore, it ensures consistent behavior in outlier modeling, allowing efficient management of insertion and deletion errors.
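The naive Bayes combination rule referred to above treats the acoustic frames as conditionally independent given the state, so the log-likelihood of a state sequence is simply the sum of per-frame log-likelihoods. A minimal illustrative sketch (the function name and the toy scores are hypothetical, not taken from the paper):

```python
def naive_bayes_log_score(frame_loglikes, state_seq):
    """Naive Bayes frame combination: frames are assumed conditionally
    independent given the state, so per-frame log-likelihoods add up."""
    return sum(frame_loglikes[t][s] for t, s in enumerate(state_seq))

# Hypothetical per-frame log-likelihoods for two states over three frames.
frame_loglikes = [
    {"a": -1.0, "b": -2.0},
    {"a": -0.5, "b": -1.5},
    {"a": -2.0, "b": -0.2},
]
print(naive_bayes_log_score(frame_loglikes, ["a", "a", "b"]))  # -1.7
```

Alternative combination schemes of the kind the paper experiments with would replace the plain sum with some other aggregation of the per-frame scores.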
The paper deals with asymptotics for a class of arithmetic functions which describe the value distribution of the greatest-common-divisor function. Typically, they are generated by a Dirichlet series whose analytic behavior is determined by the factor ζ²(s)ζ(2s − 1). Furthermore, multivariate generalizations are considered.
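As a sketch of what coefficients such a factor generates (a standard Dirichlet-series expansion, not taken from the paper itself): since ζ²(s) generates the divisor function τ(m) and ζ(2s − 1) = Σ_d d·(d²)^{−s}, multiplying the two series gives

```latex
\zeta^{2}(s)\,\zeta(2s-1)
  = \sum_{m\ge 1}\frac{\tau(m)}{m^{s}} \sum_{d\ge 1}\frac{d}{(d^{2})^{s}}
  = \sum_{n\ge 1}\frac{a(n)}{n^{s}},
\qquad
a(n) = \sum_{d^{2}\mid n} d\,\tau\!\left(\frac{n}{d^{2}}\right).
```

The double pole of ζ²(s) at s = 1 and the simple pole of ζ(2s − 1) at s = 1 then govern the main term of the asymptotics for the summatory function of a(n).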