2012 | Vol. 22 | No. 3 | pp. 629-645
Article title

Optimal estimator of hypothesis probability for data mining problems with small samples

Abstract
The paper presents a new (to the best of the authors' knowledge) estimator of probability, called the "Epₕ√2 completeness estimator", along with a theoretical derivation of its optimality. The estimator is especially suitable for a small number of sample items, a feature of many real problems characterized by data insufficiency. The control parameter of the estimator is not assumed a priori in a subjective way, but is determined on the basis of an optimization criterion (the least absolute errors). The estimator was compared, with respect to accuracy, with the universally used frequency estimator of probability and with Cestnik's m-estimator; the comparison was carried out both theoretically and experimentally. The results show the superiority of the Epₕ√2 completeness estimator over the frequency estimator for the probability interval pₕ ∈ (0.1, 0.9). The frequency estimator is better for pₕ ∈ [0, 0.1] and pₕ ∈ [0.9, 1].
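The two baseline estimators named in the abstract are standard and can be sketched as follows; the Epₕ√2 completeness estimator itself is defined only in the full text, so it is not reproduced here, and the function names below are illustrative.

```python
def frequency_estimate(n_h: int, n: int) -> float:
    """Classical frequency estimator: the fraction of the n sample
    items that confirm the hypothesis h."""
    return n_h / n


def m_estimate(n_h: int, n: int, m: float, p_a: float) -> float:
    """Cestnik's m-estimator: pulls the frequency estimate toward a
    prior probability p_a, with the pull strength controlled by m."""
    return (n_h + m * p_a) / (n + m)


# With very few samples the frequency estimator takes extreme values,
# while the m-estimate stays closer to the prior (here p_a = 0.5):
print(frequency_estimate(2, 2))            # 1.0
print(m_estimate(2, 2, m=2.0, p_a=0.5))    # (2 + 1) / (2 + 2) = 0.75
```

This illustrates the small-sample problem the paper addresses: at n = 2 the frequency estimate jumps to a boundary value, whereas smoothed estimators remain moderate.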
Affiliations
  • Faculty of Computer Science, West Pomeranian University of Technology, Żołnierska 49, 71-210 Szczecin, Poland
  • Institute of Quantitative Methods, Maritime University of Szczecin, Wały Chrobrego 1-2, 70-500 Szczecin, Poland
References

  • Ben-Haim, Y. (2006). Info-gap Decision Theory, Elsevier, Oxford/Amsterdam.
  • Burdzy, K. (2009). The Search for Certainty. On the Clash of Science and Philosophy of Probability, World Scientific, Singapore.
  • Burdzy, K. (2011a). Blog on the book The Search for Certainty. On the Clash of Science and Philosophy of Probability,
  • Burdzy, K. (2011b). Philosophy of probability, Website, ~burdzy/philosophy/.
  • Carnap, R. (1952). Logical Foundations of Probability, University of Chicago Press, Chicago, IL.
  • Cestnik, B. (1990). Estimating probabilities: A crucial task in machine learning, in L. Aiello (Ed.), ECAI'90, Pitman, London, pp. 147-149.
  • Cestnik, B. (1991). Estimating Probabilities in Machine Learning, Ph.D. thesis, Faculty of Computer and Information Science, University of Ljubljana, Ljubljana.
  • Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations, Annals of Mathematical Statistics 23(4): 493-507.
  • Cichosz, P. (2000). Learning Systems, Wydawnictwa Naukowo-Techniczne, Warsaw, (in Polish).
  • Cios, K. and Kurgan, L. (2001). SPECT heart data set, UCI Machine Learning Repository,
  • De Finetti, B. (1975). Theory of Probability: A Critical Introductory Treatment, Wiley, London.
  • Dubois, D. and Prade, H. (1988). Possibility Theory, Plenum Press, New York/NY, London.
  • Fürnkranz, J. and Flach, P.A. (2005). ROC 'n' rule learning: Towards a better understanding of covering algorithms, Machine Learning 58(1): 39-77.
  • Hájek, A. (2010). Interpretations of probability, in E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy,
  • Khrennikov, A. (1999). Interpretations of Probability, Brill Academic Pub., Utrecht/Boston, MA.
  • Klir, G.J. and Yuan, B. (1996). Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems. Selected Papers by Lotfi Zadeh, World Scientific, Singapore.
  • Laplace, P.S. (1814, English edition 1951). A Philosophical Essay on Probabilities, Dover Publication, New York/NY.
  • Larose, D.T. (2010). Discovering Statistics, W.H. Freeman and Company, New York, NY.
  • Piegat, A. (2011a). Uncertainty of probability, in K.T. Atanassov, M. Baczyński, J. Drewniak, J. Kacprzyk, M. Krawczak, E. Schmidt, M. Wygralak and S. Zadrożny (Eds.) Recent Advances in Fuzzy Sets, Intuitionistic Fuzzy Sets, Generalized Nets and Related Topics, Vol. I: Foundations, IBS PAN, Warsaw, pp. 159-173.
  • Piegat, A. (2011b). Basic lecture on completeness interpretation of probability, Website, view/47-publikacje.
  • Polkowski, L. (2002). Rough Sets, Physica-Verlag, Heidelberg/New York, NY.
  • Popper, K.R. (1957). The propensity interpretation of the calculus of probability and the quantum theory, in S. Körner (Ed.), Observation and Interpretation: A Symposium of Philosophers and Physicists, Butterworth Scientific Publications, London, pp. 65-70.
  • Rocchi, P. (2003). The Structural Theory of Probability: New Ideas from Computer Science on the Ancient Problem of Probability Interpretation, Kluwer Academic/Plenum Publishers, New York, NY.
  • Rokach, L. and Maimon, O. (2008). Data Mining with Decision Trees: Theory and Applications, Machine Perception and Artificial Intelligence, Vol. 69, World Scientific Publishing, Singapore.
  • Shafer, G. (1976). A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ.
  • Siegler, R.S. (1976). Three aspects of cognitive development, Cognitive Psychology 8(4): 481-520.
  • Siegler, R.S. (1994). Balance scale weight & distance database, UCI Machine Learning Repository,
  • Sulzmann, J.N. and Fürnkranz, J. (2009). An empirical comparison of probability estimation techniques for probabilistic rules, in J. Gama, J. Santos Costa, A.M. Jorge and P. Brazdil (Eds.), Proceedings of the 12th International Conference on Discovery Science (DS-09), Springer-Verlag, Heidelberg/New York, NY, pp. 317-331.
  • Sulzmann, J.N. and Fürnkranz, J. (2010). Probability estimation and aggregation for rule learning, Technical Report TUDKE-201-03, Knowledge Engineering Group, TU Darmstadt, Darmstadt.
  • von Mises, R. (1957). Probability, Statistics and the Truth, Macmillan, Dover/New York, NY.
  • Witten, I.H. and Frank, E. (2005). Data Mining, Elsevier, Amsterdam.
  • Zadeh, L.A. (1965). Fuzzy sets, Information and Control 8(3): 338-353.
  • Ziarko, W. (1999). Decision making with probabilistic decision tables, in N. Zhong (Ed.), New Directions in Rough Sets, Data Mining, and Granular-Soft Computing, Proceedings of the 7th International Workshop, RSFDGrC99, Yamaguchi, Japan, Springer-Verlag, Berlin/Heidelberg, New York, NY, pp. 463-471.