Correlation-based feature selection strategy in classification problems
In classification problems, the high dimensionality of data is an important issue, and feature selection methods are commonly employed to reduce it. To select a feature set that spans a representation space as well suited as possible to the classification task, possible interdependencies between the features must be taken into account. As a trade-off between the complexity of the selection process and the quality of the selected feature set, a pairwise selection strategy has recently been suggested. In this paper, a modified pairwise selection strategy is proposed. Our research suggests that computation time can be significantly reduced, while maintaining the quality of the selected feature sets, by mixing univariate and bivariate feature evaluation based on the correlation between the features. The paper compares the performance of our method with that of the unmodified pairwise selection strategy on several well-known benchmark sets. Experimental results show that, in most cases, computation time can be reduced and that, with high statistical significance, the quality of the selected feature sets is not lower than that obtained with the unmodified pairwise selection process.
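The core idea, evaluating strongly correlated features in pairs while scoring the remaining features individually, can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name, the correlation threshold, and the use of absolute class correlation as the relevance criterion are all assumptions made for the example.

```python
import numpy as np

def correlation_guided_selection(X, y, k, corr_threshold=0.6):
    """Sketch of a mixed univariate/bivariate selection strategy.

    Features whose pairwise absolute correlation exceeds
    `corr_threshold` are scored together with their most correlated
    partner (bivariate evaluation); the rest receive the cheaper
    univariate score. The scoring criterion is an illustrative
    stand-in, not the criterion used in the paper.
    """
    n_features = X.shape[1]
    corr = np.corrcoef(X, rowvar=False)  # feature-feature correlation matrix

    def univariate_score(j):
        # |correlation with the class label| as a simple relevance proxy
        return abs(np.corrcoef(X[:, j], y)[0, 1])

    scores = {}
    for i in range(n_features):
        partners = [j for j in range(n_features)
                    if j != i and abs(corr[i, j]) > corr_threshold]
        if partners:
            # bivariate evaluation: score feature i jointly with its
            # most correlated partner and average the joint score
            j = max(partners, key=lambda j: abs(corr[i, j]))
            scores[i] = (univariate_score(i) + univariate_score(j)) / 2.0
        else:
            # uncorrelated features only need the univariate score
            scores[i] = univariate_score(i)

    # keep the k best-scoring features, in index order
    return sorted(sorted(scores, key=scores.get, reverse=True)[:k])
```

Because the expensive bivariate evaluation is restricted to feature pairs that are actually correlated, the number of pair evaluations drops well below the full quadratic count of an exhaustive pairwise strategy, which is the source of the computation-time savings claimed above.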