



Article title

Rough sets methods in feature reduction and classification

Full text

Title variants

Languages of publication

EN

Abstracts

EN
The paper presents an application of rough sets and statistical methods to feature reduction and pattern recognition. The description of rough set theory emphasizes the role of rough set reducts in feature selection and data reduction for pattern recognition. The overview of feature selection methods focuses on feature selection criteria, including rough set-based ones. The paper also describes an algorithm for feature selection and reduction that combines the rough set method with Principal Component Analysis. Finally, the paper presents numerical results of face recognition experiments using a learning vector quantization neural network, with feature selection based on the proposed principal component analysis and rough set methods.
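For readers unfamiliar with how the two reduction stages described above fit together, the Python sketch below illustrates the general idea under stated assumptions: a PCA projection performs the statistical data reduction, and a greedy search over discretized principal components approximates a rough set reduct using the standard positive-region dependency measure. The function names, the toy data, and the greedy heuristic are assumptions of this sketch rather than the paper's exact algorithm, and the learning vector quantization classification stage is omitted.

# Illustrative sketch only (assumed names and toy data): PCA-based reduction
# followed by a greedy approximation of a rough set reduct.
import numpy as np

def pca_project(X, n_components):
    # Center the data and project it onto the top principal components via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def dependency(table, decisions, attrs):
    # Rough set dependency (gamma): the fraction of objects whose equivalence
    # class with respect to 'attrs' is consistent in the decision attribute.
    if not attrs:
        return 0.0
    classes = {}
    for i, row in enumerate(table):
        classes.setdefault(tuple(row[attrs]), []).append(i)
    consistent = sum(len(idx) for idx in classes.values()
                     if len({decisions[i] for i in idx}) == 1)
    return consistent / len(table)

def greedy_reduct(table, decisions):
    # Add, one at a time, the attribute giving the largest dependency gain until
    # the dependency of the full attribute set is reached (an approximate reduct).
    all_attrs = list(range(table.shape[1]))
    target = dependency(table, decisions, all_attrs)
    reduct, current = [], 0.0
    while current < target:
        best_attr, best_value = None, current
        for a in all_attrs:
            if a in reduct:
                continue
            value = dependency(table, decisions, reduct + [a])
            if value > best_value:
                best_attr, best_value = a, value
        if best_attr is None:
            break
        reduct.append(best_attr)
        current = best_value
    return reduct

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for face-image feature vectors: 60 samples, 20 features, 3 classes.
    y = rng.integers(0, 3, size=60)
    X = rng.normal(size=(60, 20)) + y[:, None]
    Z = pca_project(X, n_components=8)
    # Discretize each principal component into three bins so that equivalence
    # classes (and thus the rough set dependency) are well defined.
    table = np.column_stack(
        [np.digitize(Z[:, j], np.quantile(Z[:, j], [1/3, 2/3])) for j in range(Z.shape[1])]
    )
    print("Approximate reduct over principal components:", greedy_reduct(table, y))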

Keywords

Year

2001
Volume

11

Number

3

Pages

565-582

Physical description

Dates

published
2001
received
2001-03-01
revised
2001-06-01

Authors

  • Roman W. Świniarski, San Diego State University, Department of Mathematical and Computer Sciences, 5500 Campanile Drive, San Diego, CA 92182, U.S.A.

Bibliography

  • Almuallim H. and Dietterich T.G. (1991): Learning with many irrelevant features. — Proc. 9th Nat. Conf. Artificial Intelligence, Menlo Park, CA, AAAI Press, pp.547–552.
  • Atkeson C.G. (1991): Using locally weighted regression for robot learning. — Proc. IEEE Int. Conf. Robotics and Automation, pp.958–963.
  • Bazan J., Skowron A. and Synak P. (1994a): Market data analysis: A rough set approach. — ICS Res. Rep., No.6, Warsaw University of Technology, Warsaw, Poland.
  • Bazan J., Skowron A. and Synak P. (1994b): Dynamic reducts as a tool for extracting laws from decision tables. — Proc. Symp. Methodologies for Intelligent Systems, Charlotte, NC, pp.16–19.
  • Bishop C.M. (1995): Neural Networks for Pattern Recognition. — Oxford: Oxford University Press.
  • Blumer A., Ehrenfeucht A., Haussler D. and Warmuth M.K. (1987): Occam’s razor. — Inf. Process. Lett., Vol.24, pp.377–380.
  • Diamantaras K.I. and Kung S.Y. (1996): Principal Component Neural Networks. Theory and Applications. — New York: Wiley.
  • Cios K., Pedrycz W. and Świniarski R.W. (1998): Data Mining Methods for Knowledge Discovery. — Boston/Dordrecht/London: Kluwer Academic Publishers.
  • Doak J. (1992): An evaluation of feature selection methods and their application to computer security. — Tech. Rep., No.CSE-92-18, University of California at Davis.
  • Duda R.O. and Hart P.E. (1973): Pattern Classification and Scene Analysis. — New York: Wiley.
  • Fisher R.A. (1936): The use of multiple measurements in taxonomic problems. — Annals of Eugenics, Vol.7, pp.179–188.
  • Fukunaga K. (1990): Introduction to Statistical Pattern Recognition. — New York: Academic Press.
  • Geman S., Bienenstock E. and Doursat R. (1992): Neural networks and the bias/variance dilemma. — Neural Comput., Vol.4, No.1, pp.1–58.
  • Holland J.H. (1992): Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control and Artificial Intelligence. — MIT Press.
  • Hong Z.Q. (1991): Algebraic Feature Extraction of Image for Recognition. — Pattern Recognition, Vol.24, No.3, pp.211–219.
  • Jain A.K. (1989): Fundamentals of Digital Image Processing. — New York: Prentice Hall.
  • John G., Kohavi R. and Pfleger K. (1994): Irrelevant features and the subset selection problem. — Proc. 11th Int. Conf. Machine Learning (ICML-94), pp.121–129.
  • Karhunen K. (1947): Über lineare Methoden in der Wahrscheinlichkeitsrechnung. — Annales Academiae Scientiarum Fennicae, Series AI: Mathematica-Physica, 3rd Ed.: Van Nostrand, pp.373–379.
  • Kira K. and Rendell L.A. (1992): A practical approach to feature selection. — Proc. 9th Int. Workshop Machine Learning, Aberdeen, Scotland, pp.249–256.
  • Kittler J. (1986): Feature selection and extraction, In: Handbook of Pattern Recognition and Image Processing (T.Y. Young and K.S. Fu, Eds.), San Diego: Academic Press, pp.59–83.
  • Kohonen T. (1990): The Self-Organizing Map. — Proc. IEEE, Vol.78, pp.1464–1480.
  • Kononenko I. (1994): Estimating attributes: Analysis and extension of Relief. — Proc. Europ. Conf. Machine Learning.
  • Langley P. and Sage S. (1994): Selection of relevant features in machine learning. — Proc. AAAI Fall Symp. Relevance, pp.140–144.
  • Lawler E.L. and Wood D.E. (1966): Branch and bound methods: A survey. — Oper. Res., Vol.14, No.4, pp.699–719.
  • Liu H. and Setiono R. (1996a): A probabilistic approach to feature selection—A filter solution. — Proc. 13th Int. Conf. Machine Learning (ICML’96), Bari, Italy, pp.319–327.
  • Liu H. and Setiono R. (1996b): Feature selection and classification—A probabilistic wrapper approach. — 9th Int. Conf. Industrial and Engineering Applications of Artificial Intelligence and Expert Systems (IEA-AIE’96), Fukuoka, Japan, pp.419–424.
  • Liu H. and Motoda H. (1999): Feature Selection for Knowledge Discovery and Data Mining. — Dordrecht: Kluwer Academic Publishers.
  • Lobo V., Moura-Pires F. and Świniarski R. (1997): Minimizing the number of neurons for a SOM-based classification, using Boolean function formalization. — Int. Rep., San Diego State University, Department of Mathematical and Computer Sciences.
  • Marill T. and Green D.M. (1963): On the effectiveness of receptors in recognition systems. — IEEE Trans. Inf. Theory, Vol.9, pp.11–17.
  • Modrzejewski M. (1993): Feature selection using rough sets theory. — Proc. European Conf. Machine Learning, pp.213–226.
  • Narendra P.M. and Fukunaga K. (1977): A branch and bound algorithm for feature subset selection. — IEEE Trans. Comput., Vol.C-26, pp.917–922.
  • Nguyen T. et al. (1994): Application of rough sets, neural networks and maximum likelihood for texture classification based on singular value decomposition. — Proc. Int. Workshop RSSC Rough Sets and Soft Computing, San Jose, U.S.A., pp.332–339.
  • Pal S.K. and Skowron A. (1999): Rough-Fuzzy Hybridization: A New Trend in Decision Making. — Singapore: Springer.
  • Pawlak Z. (1982): Rough sets. — Int. J. Comput. Inf. Sci., Vol.11, pp.341–356.
  • Pawlak Z. (1991): Rough Sets. Theoretical Aspects of Reasoning About Data. — Boston: Kluwer Academic Publishers.
  • Pregenzer M. (1997): Distinction sensitive learning vector quantization. — Ph.D. Thesis, Graz University of Technology, Graz, Austria.
  • Quinlan J.R. (1993): C4.5: Programs for Machine Learning. — San Mateo: Morgan Kaufmann.
  • Rissanen J. (1978): Modeling by shortest data description. — Automatica, Vol.14, pp.465–471.
  • Samaria F. and Harter A. (1994): Parametrization of stochastic model for human face identification. — Proc. IEEE Workshop Application of Computer Vision.
  • Siedlecki W. and Sklansky J. (1988): On automatic feature selection. — Int. J. Pattern Recogn. Artif. Intell., Vol.2, No.2, pp.197–220.
  • Skowron A. (1990): The rough sets theory and evidence theory. — Fundamenta Informaticae, Vol.13, pp.245–262.
  • Swets D.L. and Weng J.J. (1996): Using discriminant eigenfeatures for image retrieval. — IEEE Trans. Pattern Anal. Mach. Intell., Vol.18, No.8, pp.831–836.
  • Świniarski R. (1993): Introduction to rough sets, In: Materials of the Int. Short Course Neural Networks. Fuzzy and Rough Systems. Theory and Applications. — San Diego State University, San Diego, California, pp.1–24.
  • Świniarski R. (1995): RoughFuzzyLab. — A software package developed at San Diego State University, San Diego, California.
  • Świniarski R. and Nguyen J. (1996): Rough sets expert system for texture classification based on 2D spectral features. — Proc. 3rd Biennial European Joint Conf. Engineering Systems Design and Analysis ESDA’96, Montpellier, France, pp.3–8.
  • Świniarski R., Hunt F., Chalret D. and Pearson D. (1995): Feature selection using rough sets and hidden layer expansion for rupture prediction in a highly automated production system. — Proc. 12th Int. Conf. Systems Science, Wrocław, Poland.
  • Świniarski R. and Hargis L. (2001): Rough sets as a front end of neural networks texture classifiers. — Neurocomputing, Vol.36, pp.85–102.
  • Swingler K. (1996): Applying Neural Networks. — London: Academic Press.
  • Weiss S. and Indurkhya N. (1998): Predictive Data Mining: A Practical Guide. — New York: Morgan Kaufmann.
  • Yu B. and Yuan B. (1993): A more efficient branch and bound algorithm for feature selection. — Pattern Recognition, Vol.26, No.6, pp.883–889.
  • Xu L., Yan P. and Chang T. (1989): Best first strategy for feature selection. — Proc. 9th Int. Conf. Pattern Recognition, pp.706–708.

Document type

Bibliography

Identifiers

YADDA identifier

bwmeta1.element.bwnjournal-article-amcv11i3p565bwm