
2011 | 21 | 4 | 717-731

Article title

Evolving small-board Go players using coevolutionary temporal difference learning with archives

Content

Title variants

Publication languages

EN

Abstracts

EN
We apply Coevolutionary Temporal Difference Learning (CTDL) to learn small-board Go strategies represented as weighted piece counters. CTDL is a randomized learning technique which interweaves two search processes that operate in the intra-game and inter-game modes. Intra-game learning is driven by gradient-descent Temporal Difference Learning (TDL), a reinforcement learning method that updates the board evaluation function according to differences observed between its values for consecutively visited game states. For the inter-game learning component, we provide a coevolutionary algorithm that maintains a sample of strategies and uses the outcomes of games played between them to iteratively modify the probability distribution according to which new strategies are generated and added to the sample. We analyze CTDL's sensitivity to all important parameters, including the trace decay constant that controls the lookahead horizon of TDL, and the relative intensity of intra-game and inter-game learning. We also investigate how the presence of memory (an archive) affects the search performance, and find that the archive-based approach is superior to the other techniques considered here and produces strategies that outperform a handcrafted weighted piece counter strategy and simple liberty-based heuristics. This encouraging result can potentially be generalized not only to other strategy representations used for small-board Go, but also to various games and a broader class of problems, because CTDL is generic and does not rely on any problem-specific knowledge.
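The intra-game component described in the abstract — a gradient-descent TD update applied to a linear "weighted piece counter" evaluation — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the board size, learning rate, and all function names are assumptions, and the board is simplified to a flat array of stone values.

```python
BOARD_CELLS = 25  # assumed 5x5 small-board Go, flattened to a 1-D array
ALPHA = 0.01      # hypothetical learning rate

def evaluate(weights, board):
    """Weighted piece counter: a linear evaluation function, i.e. the
    dot product of the weight vector and the board features.
    board[i] is +1 (own stone), -1 (opponent stone), or 0 (empty)."""
    return sum(w * x for w, x in zip(weights, board))

def td_update(weights, prev_board, next_board, reward=0.0):
    """One gradient-descent TD(0) step: nudge the value of the previous
    state toward reward + value of the successor state.  For a linear
    evaluator, the gradient w.r.t. each weight is simply the
    corresponding board feature, so the update is delta * feature."""
    delta = (reward + evaluate(weights, next_board)
             - evaluate(weights, prev_board))
    for i, x in enumerate(prev_board):
        weights[i] += ALPHA * delta * x
    return delta
```

Called after every move of a self-played (or coevolution-sampled) game, such updates realize the intra-game learning; the paper's full method additionally uses eligibility traces (the trace decay constant mentioned above), which this sketch omits.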

Year

Volume

21

Issue

4

Pages

717-731

Physical description

Dates

published
2011
received
2010-09-28
revised
2011-01-18

Authors

  • Institute of Computing Science, Poznań University of Technology, ul. Piotrowo 2, 60-965 Poznań, Poland
  • Institute of Computing Science, Poznań University of Technology, ul. Piotrowo 2, 60-965 Poznań, Poland
  • Institute of Computing Science, Poznań University of Technology, ul. Piotrowo 2, 60-965 Poznań, Poland

Bibliography

  • Angeline, P.J. and Pollack, J.B. (1993). Competitive environments evolve better solutions for complex tasks, Proceedings of the 5th International Conference on Genetic Algorithms, Urbana-Champaign, IL, USA, Vol. 270, pp. 264-270.
  • Azaria, Y. and Sipper, M. (2005). GP-Gammon: Genetically programming backgammon players, Genetic Programming and Evolvable Machines 6(3): 283-300.
  • Bouzy, B. and Cazenave, T. (2001). Computer Go: An AI oriented survey, Artificial Intelligence 132(1): 39-103.
  • Bozulich, R. (1992). The Go Player's Almanac, Ishi Press, Tokyo.
  • Bucci, A. (2007). Emergent Geometric Organization and Informative Dimensions in Coevolutionary Algorithms, Ph.D. thesis, Brandeis University, Waltham, MA.
  • Caverlee, J.B. (2000). A genetic algorithm approach to discovering an optimal blackjack strategy, Genetic Algorithms and Genetic Programming at Stanford, Stanford Bookstore, Stanford, CA, pp. 70-79.
  • de Jong, E.D. (2005). The MaxSolve algorithm for coevolution, Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, GECCO 2005, Washington, DC, USA, pp. 483-489.
  • de Jong, E.D. (2007). A monotonic archive for Pareto-coevolution, Evolutionary Computation 15(1): 61-93.
  • Ficici, S.G. (2004). Solution Concepts in Coevolutionary Algorithms, Ph.D. thesis, Brandeis University, Waltham, MA.
  • Ficici, S. and Pollack, J. (2003). A game-theoretic memory mechanism for coevolution, Proceedings of the 2003 International Conference on Genetic and Evolutionary Computation, GECCO'03, Chicago, IL, USA, pp. 286-297.
  • Fogel, D.B. (2002). Blondie24: Playing at the Edge of AI, Morgan Kaufmann Publishers, San Francisco, CA.
  • Hauptman, A. and Sipper, M. (2007). Evolution of an efficient search algorithm for the mate-in-n problem in chess, Proceedings of the 10th European Conference on Genetic Programming, EuroGP'07, Valencia, Spain, pp. 78-89.
  • Jaśkowski, W., Krawiec, K. and Wieloch, B. (2008a). Evolving strategy for a probabilistic game of imperfect information using genetic programming, Genetic Programming and Evolvable Machines 9(4): 281-294.
  • Jaśkowski, W., Krawiec, K. and Wieloch, B. (2008b). Winning Ant Wars: Evolving a human-competitive game strategy using fitnessless selection, 11th European Conference on Genetic Programming, EuroGP 2008, Naples, Italy, pp. 13-24.
  • Johnson, G. (1997). To test a powerful computer, play an ancient game, The New York Times, July 29.
  • Kim, K.-J., Choi, H. and Cho, S.-B. (2007). Hybrid of evolution and reinforcement learning for Othello players, IEEE Symposium on Computational Intelligence and Games, CIG 2007, Honolulu, HI, USA, pp. 203-209.
  • Krawiec, K. and Szubert, M. (2010). Coevolutionary temporal difference learning for small-board Go, IEEE Congress on Evolutionary Computation, Barcelona, Spain, pp. 1-8.
  • Lasker, E. (1960). Go and Go-Moku: The Oriental Board Games, Dover Publications, New York, NY.
  • Lubberts, A. and Miikkulainen, R. (2001). Co-evolving a Go-playing neural network, Coevolution: Turning Adaptive Algorithms Upon Themselves, Birds-of-a-Feather Workshop, Genetic and Evolutionary Computation Conference, GECCO 2001, San Francisco, CA, USA, pp. 14-19.
  • Lucas, S.M. and Runarsson, T.P. (2006). Temporal difference learning versus co-evolution for acquiring Othello position evaluation, IEEE Symposium on Computational Intelligence and Games, CIG 2006, Reno/Lake Tahoe, NV, USA, pp. 52-59.
  • Luke, S. (1998). Genetic programming produced competitive soccer softbot teams for RoboCup97, Genetic Programming 1998: Proceedings of the 3rd Annual Conference, Madison, WI, USA, pp. 214-222.
  • Luke, S. (2010). ECJ 20: A Java-based Evolutionary Computation Research System, http://cs.gmu.edu/~eclab/projects/ecj/.
  • Luke, S. and Wiegand, R. (2002). When coevolutionary algorithms exhibit evolutionary dynamics, Workshop on Understanding Coevolution: Theory and Analysis of Coevolutionary Algorithms (at GECCO 2002), New York, NY, USA, pp. 236-241.
  • Mayer, H.A. (2007). Board representations for neural Go players learning by temporal difference, IEEE Symposium on Computational Intelligence and Games, CIG 2007, Honolulu, HI, USA, pp. 183-188.
  • Mechner, D.A. (1998). All systems Go, The Sciences 38(1): 32-37.
  • Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, London.
  • Miconi, T. (2009). Why coevolution doesn't "work": Superiority and progress in coevolution, Proceedings of the 12th European Conference on Genetic Programming, EuroGP'09, Tübingen, Germany, pp. 49-60.
  • Monroy, G.A., Stanley, K.O. and Miikkulainen, R. (2006). Coevolution of neural networks using a layered Pareto archive, Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, GECCO 2006, Seattle, WA, USA, pp. 329-336.
  • Müller, M. (2009). Fuego at the Computer Olympiad in Pamplona 2009: A tournament report, Technical report, University of Alberta, Alberta.
  • Pollack, J.B. and Blair, A.D. (1998). Co-evolution in the successful learning of backgammon strategy, Machine Learning 32(3): 225-240.
  • Rosin, C.D. and Belew, R.K. (1997). New methods for competitive coevolution, Evolutionary Computation 5(1): 1-29.
  • Runarsson, T.P. and Lucas, S. (2005). Coevolution versus self-play temporal difference learning for acquiring position evaluation in small-board Go, IEEE Transactions on Evolutionary Computation 9(6): 628-640.
  • Samuel, A.L. (1959). Some studies in machine learning using the game of checkers, IBM Journal of Research and Development 3(3): 210-229.
  • Schraudolph, N.N., Dayan, P. and Sejnowski, T.J. (2001). Learning to evaluate Go positions via temporal difference methods, in N. Baba and L.C. Jain (Eds.) Computational Intelligence in Games, Studies in Fuzziness and Soft Computing, Vol. 62, Springer-Verlag, Berlin, Chapter 4, pp. 77-98.
  • Silver, D., Sutton, R. and Müller, M. (2007). Reinforcement learning of local shape in the game of Go, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, pp. 1053-1058.
  • Singer, J.A. (2001). Co-evolving a neural-net evaluation function for Othello by combining genetic algorithms and reinforcement learning, International Conference on Computational Science, San Francisco, CA, USA, pp. 377-389.
  • Stanley, K., Bryant, B. and Miikkulainen, R. (2005). Real-time neuroevolution in the NERO video game, IEEE Transactions on Evolutionary Computation 9(6): 653-668.
  • Sutton, R.S. (1988). Learning to predict by the methods of temporal differences, Machine Learning 3(1): 9-44.
  • Sutton, R.S. and Barto, A.G. (1998). Reinforcement Learning: An Introduction, The MIT Press, Cambridge, MA.
  • Szubert, M. (2010). cECJ: Coevolutionary Computation in Java, http://www.cs.put.poznan.pl/mszubert/projects/cecj.html.
  • Szubert, M., Jaśkowski, W. and Krawiec, K. (2009). Coevolutionary temporal difference learning for Othello, IEEE Symposium on Computational Intelligence and Games, CIG 2009, Milan, Italy, pp. 104-111.
  • Tesauro, G. (1995). Temporal difference learning and TD-Gammon, Communications of the ACM 38(3): 58-68.
  • Watson, R.A. and Pollack, J.B. (2001). Coevolutionary dynamics in a minimal substrate, Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2001, San Francisco, CA, USA, pp. 702-709.

Document type

Bibliography

Identifiers

YADDA identifier

bwmeta1.element.bwnjournal-article-amcv21i4p717bwm