2011 | 21 | 1 | 57-68

Article title

Application of agent-based simulated annealing and tabu search procedures to solving the data reduction problem

Content

Title variants

Publication languages

EN

Abstracts

EN
The problem considered concerns data reduction for machine learning. Data reduction aims at deciding which features and instances from the training set should be retained for further use during the learning process. Data reduction improves the capabilities and generalization properties of the learning model and shortens the learning process. It can also help in scaling up to large data sources. The paper proposes an agent-based data reduction approach in which the learning process is executed by a team of agents (an A-Team). Several A-Team architectures with agents executing simulated annealing and tabu search procedures are proposed and investigated. The paper includes a detailed description of the proposed approach and discusses the results of a validating experiment.
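To make the wrapper-style reduction concrete, below is a minimal, illustrative Python sketch of a simulated annealing agent searching over binary instance and feature masks, with 1-NN validation accuracy as the fitness function. The function names, the single-bit neighbourhood move, and the geometric cooling schedule are assumptions for illustration only, not the authors' A-Team configuration or the exact procedures evaluated in the paper.

import random
import numpy as np

def one_nn_accuracy(X_tr, y_tr, X_val, y_val):
    # Nearest-prototype classification: each validation point takes the
    # label of its closest retained training instance.
    if len(X_tr) == 0:
        return 0.0
    dists = np.linalg.norm(X_val[:, None, :] - X_tr[None, :, :], axis=2)
    return float((y_tr[dists.argmin(axis=1)] == y_val).mean())

def fitness(inst_mask, feat_mask, X, y, X_val, y_val):
    # Wrapper fitness: accuracy of a 1-NN classifier built on the
    # reduced training set (retained instances and features only).
    if not feat_mask.any():
        return 0.0
    return one_nn_accuracy(X[inst_mask][:, feat_mask], y[inst_mask],
                           X_val[:, feat_mask], y_val)

def anneal_reduction(X, y, X_val, y_val, iters=2000, t0=1.0, alpha=0.999):
    # Hypothetical single-agent simulated annealing over the joint
    # instance/feature selection problem.
    rng = random.Random(0)
    inst = np.ones(len(X), dtype=bool)       # which instances to retain
    feat = np.ones(X.shape[1], dtype=bool)   # which features to retain
    cur = best = fitness(inst, feat, X, y, X_val, y_val)
    best_masks, temp = (inst.copy(), feat.copy()), t0
    for _ in range(iters):
        cand_i, cand_f = inst.copy(), feat.copy()
        # Neighbourhood move: flip one random instance or feature bit.
        if rng.random() < 0.5:
            cand_i[rng.randrange(len(cand_i))] ^= True
        else:
            cand_f[rng.randrange(len(cand_f))] ^= True
        val = fitness(cand_i, cand_f, X, y, X_val, y_val)
        # Accept improvements always; accept worse solutions with a
        # probability that shrinks as the temperature cools.
        if val >= cur or rng.random() < np.exp((val - cur) / max(temp, 1e-9)):
            inst, feat, cur = cand_i, cand_f, val
            if cur > best:
                best, best_masks = cur, (inst.copy(), feat.copy())
        temp *= alpha  # geometric cooling schedule
    return best_masks, best

A tabu search agent could reuse the same masks and fitness function, replacing the probabilistic acceptance rule with a short-term memory of recently flipped bits that are temporarily forbidden; in the A-Team setting described in the abstract, several such agents would read candidate solutions from, and write improved ones back to, a shared population.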

Year

Volume

21

Number

1

Pages

57-68

Physical description

Dates

published
2011
received
2010-04-13
revised
2010-11-15

Authors

  • Ireneusz Czarnowski, Department of Information Systems, Gdynia Maritime University, Morska 83, 81-225 Gdynia, Poland
  • Piotr Jędrzejowicz, Department of Information Systems, Gdynia Maritime University, Morska 83, 81-225 Gdynia, Poland

Bibliography

  • Aarts, E. and van Laarhoven, P. (1987). Simulated annealing: A pedestrian review of the theory and some applications, in J. Kittler and P.A. Devijver (Eds.), Pattern Recognition and Applications, Springer-Verlag, Berlin.
  • Aha, D., Kibler, D. and Albert, M. (1991). Instance-based learning algorithms, Machine Learning 6(1): 37-66.
  • Aksela, M. (2007). Adaptive Combinations of Classifiers with Application to On-line Handwritten Character Recognition, Ph.D. thesis, Helsinki University of Technology, Helsinki.
  • Asuncion, A. and Newman, D. (2007). Website of the UCI Machine Learning Repository, http://archive.ics.uci.edu/ml/.
  • Barbucha, D., Czarnowski, I., Jędrzejowicz, P., Ratajczak-Ropel, E. and Wierzbowska, I. (2009). e-JABAT-An implementation of the Web-based A-Team, in N. Nguyen and L. Jain (Eds.), Intelligent Agents in the Evolution of Web and Applications, Studies in Computational Intelligence, Vol. 167, Springer-Verlag, Berlin/Heidelberg, pp. 57-86.
  • Bellifemine, F., Caire, G., Poggi, A. and Rimassa, G. (2003). Jade. A white paper, Technical Report Exp, 3(3), September 2003, pp. 6-19, http://exp.telecomitalialab.com.
  • Bezdek, J. and Kuncheva, L. (2001). Nearest prototype classifier designs: An experimental study, International Journal of Intelligent Systems 16(2): 1445-1473.
  • Bhanu, B. and Peng, J. (2000). Adaptive integrated image segmentation and object recognition, IEEE Transactions on Systems, Man and Cybernetics 30(4): 427-44.
  • Bull, L. (2004). Learning classifier systems: A brief introduction, in L. Bull (Ed.), Applications of Learning Classifier Systems, Studies in Fuzziness and Soft Computing, Springer, Berlin, pp. 3-14.
  • Czarnowski, I. (2010). Prototype selection algorithms for distributed learning, Pattern Recognition 43(6): 2292-2300.
  • Czarnowski, I. and Jędrzejowicz, P. (2004). An approach to instance reduction in supervised learning, in F. Coenen, A. Preece and A. Macintosh (Eds.), Research and Development in Intelligent Systems XX, Springer, London, pp. 267-282.
  • Czarnowski, I. and Jędrzejowicz, P. (2010). An approach to data reduction and integrated machine classification, New Generation Computing 28(1): 21-40.
  • Dash, M. and Liu, H. (1997). Feature selection for classification, Intelligent Data Analysis 1(3): 131-156.
  • DataSet, C. (2009). Website of the Datasets Used for Classification: Comparison of Results, http://www.is.umk.pl/projects/datasets.html.
  • Friedman, M. (1937). The use of ranks to avoid the assumption of normality implicit in the analysis of variance, Journal of the American Statistical Association 32(200): 675-701.
  • Glover, F. (1989). Tabu search, Part I, ORSA Journal on Computing 1(3): 533-549.
  • Glover, F. (1990). Tabu search, Part II, ORSA Journal on Computing 2(1): 4-32.
  • Glover, F. and Laguna, M. (1993). Tabu search, in C. Reeves (Ed.), Modern Heuristic Techniques for Combinatorial Problems, John Wiley and Sons, New York, NY, pp. 70-150.
  • Hamo, Y. and Markovitch, S. (2005). The COMPSET algorithm for subset selection, Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edinburgh, UK, pp. 728-733.
  • Hart, P. (1968). The condensed nearest neighbour rule, IEEE Transactions on Information Theory 14: 515-516.
  • Jędrzejowicz, P. (1999). Social learning algorithm as a tool for solving some difficult scheduling problems, Foundations of Computing and Decision Sciences 24(2): 51-66.
  • Kim, S.-W. and Oommen, B. (2003). A brief taxonomy and ranking of creative prototype reduction schemes, Pattern Analysis Application 6(3): 232-244.
  • Kirkpatrick, S., Gelatt, C. and Vecchi, M. (1983). Optimization by simulated annealing, Science 220(4598): 671-680.
  • Kohavi, R. and John, G. (1997). Wrappers for feature subset selection, Artificial Intelligence 97(1-2): 273-324.
  • Kuncheva, L. and Bezdek, J. (1998). Nearest prototype classification: Clustering, genetic algorithm or random search?, IEEE Transactions on Systems, Man and Cybernetics 28(1): 160-164.
  • Kuncheva, L. and Jain, L. (1999). Nearest-neighbor classifier: Simultaneous editing and feature selection, Pattern Recognition Letters 20(11-13): 1149-1156.
  • Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs, Springer, Berlin.
  • Nanni, L. and Lumini, A. (2009). Particle swarm optimization for prototype reduction, Neurocomputing 72(4-6): 1092-1097.
  • Quinlan, J. (1993). C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers, San Mateo, CA.
  • Raman, B. (2003). Enhancing learning using feature and example selection, Technical report, Texas A&M University, College Station, TX.
  • Ritter, G., Woodruff, H., Lowry, S. and Isenhour, T. (1975). An algorithm for a selective nearest neighbor decision rule, IEEE Transactions on Information Theory 21(6): 665-669.
  • Rozsypal, A. and Kubat, M. (2003). Selecting representative examples and attributes by a genetic algorithm, Intelligent Data Analysis 7(4): 291-304.
  • Sahel, Z., Bouchachia, A., Gabrys, B. and Rogers, P. (2007). Adaptive mechanisms for classification problems with drifting data, in B. Apolloni, R. Howlett and L. Jain (Eds.), KES 2007, Lecture Notes in Artificial Intelligence, Vol. 4693, Springer-Verlag, Berlin/Heidelberg, pp. 419-426.
  • Skalak, D. (1994). Prototype and feature selection by sampling and random mutation hill climbing algorithms, Proceedings of the International Conference on Machine Learning, New Brunswick, NJ, USA, pp. 293-301.
  • Song, H. and Lee, S. (1996). LVQ combined with simulated annealing for optimal design of large set reference models, Neural Networks 9(2): 329-336.
  • Talukdar, S., Baerentzen, L., Gove, A. and de Souza, P. (1996). Asynchronous teams: Co-operation schemes for autonomous, computer-based agents, Technical Report TR EDRC 18-59-96, Carnegie Mellon University, Pittsburgh, PA.
  • Wilson, D. and Martinez, T. (2000a). An integrated instance-based learning algorithm, Computational Intelligence 16(1): 1-28.
  • Wilson, D. and Martinez, T. (2000b). Reduction techniques for instance-based learning algorithms, Machine Learning 38(3): 257-286.
  • Zongker, D. and Jain, A. (1996). Algorithms for feature selection: An evaluation, Proceedings of the International Conference on Pattern Recognition, ICPR96, Vienna, Austria, pp. 18-22.

Document type

Bibliography

Identifiers

YADDA identifier

bwmeta1.element.bwnjournal-article-amcv21i1p57bwm