2008 | 18 | 3 | 409-421

Article title

Emotion learning: Solving a shortest path problem in an arbitrary deterministic environment in linear time with an emotional agent

Publication languages

EN

Abstracts

EN
The paper presents an algorithm that uses an emotional agent to solve the shortest path problem in an arbitrary deterministic environment with n states in linear time. The algorithm is derived from one that solves the same problem in exponential time, and the agent architecture used is the NN-CAA (neural network crossbar adaptive array). By implementing emotion learning, the linear-time algorithm is obtained and the agent architecture is modified. Excluding initialization operations, the complexity of the algorithm in general does not depend on the number of states n, but only on the length of the shortest path; depending on the position of the goal state, it is at most O(n). It can be concluded that the choice of the function which evaluates the emotional state of the agent plays a decisive role in solving the problem efficiently: that function should give as detailed information as possible about the consequences of the agent's actions, starting even from the initial state. In this way the function implements properties of human emotions.
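
The algorithm itself is not reproduced in this record. As a rough illustration of the idea the abstract describes (an evaluation function that carries information about the consequences of actions back toward the initial state), the sketch below propagates an "emotional" value backward from the goal and then follows it greedily, finding a shortest path in time linear in the number of states and transitions. This is a minimal Python sketch under assumed names (emotion_values, shortest_path, the example environment env); it is not the paper's NN-CAA architecture, which learns such values rather than computing them by explicit search.

```python
from collections import deque

def emotion_values(transitions, goal):
    """Assign each state an 'emotional' value: the negated distance to the goal.

    `transitions` maps state -> list of successor states (deterministic moves).
    Illustrative backward BFS, not the paper's NN-CAA learning rule; it mimics
    the idea that the evaluation function should reflect the consequences of
    actions all the way back to the initial state.
    """
    # Invert the transition relation so values can be propagated from the goal.
    predecessors = {}
    for s, succs in transitions.items():
        for t in succs:
            predecessors.setdefault(t, []).append(s)

    emotion = {goal: 0}          # the goal is the maximally "pleasant" state
    queue = deque([goal])
    while queue:                 # each state and edge is handled once: O(n + m)
        t = queue.popleft()
        for s in predecessors.get(t, []):
            if s not in emotion:
                emotion[s] = emotion[t] - 1   # one step further from the goal
                queue.append(s)
    return emotion

def shortest_path(transitions, start, goal):
    """Greedily follow increasing emotional value; the walk is a shortest path."""
    emotion = emotion_values(transitions, goal)
    if start not in emotion:
        return None              # the goal is unreachable from the start state
    path = [start]
    while path[-1] != goal:
        # Move to the successor with the highest (least negative) value.
        path.append(max(transitions[path[-1]],
                        key=lambda t: emotion.get(t, float("-inf"))))
    return path

# Hypothetical 5-state deterministic environment (state 4 is the goal).
env = {0: [1, 2], 1: [3], 2: [3, 4], 3: [4], 4: []}
print(shortest_path(env, start=0, goal=4))    # -> [0, 2, 4]
```

The sketch mirrors only the complexity claim and the decisive role of the evaluation function; in the paper these values emerge from emotion learning in the modified NN-CAA agent, not from an explicit graph search.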

Year

2008

Volume

18

Issue

3

Pages

409-421

Dates

published
2008
received
2007-02-26
revised
2008-02-10
revised
2008-07-16

Authors

  • Silvana Petruseva, Mathematics Department, Faculty of Civil Engineering, St. Cyril and Methodius University, Partizanski odredi 24, P.O. Box 560, 1000 Skopje, Macedonia

References

  • Botelho L.M. and Coelho H. (1998). Adaptive agents: Emotion learning, Association for the Advancement of Artificial Intelligence, pp. 19-24.
  • Bozinovski S. (1995). Consequence Driven Systems: Teaching, Learning and Self-Learning Agent, Gocmar Press, Bitola.
  • Bozinovski S. (1982). A self-learning system using secondary reinforcement, in: R. Trappl (Ed.), Cybernetics and Systems Research, North-Holland Publishing Company, pp. 397-402.
  • Bozinovski S. and Schoell P. (1999a). Emotions and hormones in learning. GMD Forschungszentrum Informationstechnik GmbH, Sankt Augustin.
  • Bozinovski S. (1999b). Crossbar Adaptive Array: The first connectionist network that solved the delayed reinforcement learning problem, in: A. Dobnikar, N. Steele, D. Pearson, R. Albrecht (Eds.), Artificial Neural Nets and Genetic Algorithms, Springer, pp. 320-325.
  • Bozinovski S. (2002). Motivation and emotion in anticipatory behaviour of consequence driven systems, Proceedings of the Workshop on Adaptive Behaviour in Anticipatory Learning Systems, Edinburgh, Scotland, pp. 100-119.
  • Bozinovski S. (2003). Anticipation driven artificial personality: Building on Lewin and Loehlin, in: M. Butz, O. Sigaud, P. Gerard (Eds.), Anticipatory Behaviour in Adaptive Learning Systems, LNAI 2684, Springer-Verlag, Berlin/Heidelberg, pp. 133-150.
  • Glaser J. (1963). General Psychopathology, Narodne Novine, Zagreb, (in Croatian).
  • Bang-Jensen J. and Gutin G. (2001). Digraphs: Theory, Algorithms and Applications, Springer-Verlag, London.
  • Koenig S. and Simmons R. (1992). Complexity Analysis of Real-Time Reinforcement Learning Applied to Finding Shortest Paths in Deterministic Domains, Carnegie Mellon University, Pittsburgh.
  • Peng J. and Williams R. (1993). Efficient learning and planning within the Dyna framework, Proceedings of the 2nd International Conference on Simulation of Adaptive Behaviour: From Animals to Animats, Hawaii, pp. 437-454.
  • Petruseva S. and Bozinovski S. (2000). Consequence programming: Algorithm “at subgoal go back”. Mathematics Bulletin, Book 24 (L), pp. 141-152.
  • Petruseva S. (2006a). Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent, Yugoslav Journal of Operations Research, 16(2): 211-226.
  • Petruseva S. (2006b). Consequence programming: Solving a shortest path problem in polynomial time using emotional learning, International Journal of Pure and Applied Mathematics, 29(4): 491-520.
  • Sutton R. and Barto A. (1998). Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA.
  • Whitehead S. (1991). A complexity analysis of cooperative mechanisms in reinforcement learning, Proceedings of AAAI, pp. 607-613.
  • Whitehead S. (1992). Reinforcement learning for the adaptive control of perception and action, Ph.D. thesis, University of Rochester.
  • Wittek G. (1995). Me, Me, Me, the Spider in the Web. The Law of Correspondence, and the Law of Projection, Verlag DAS WORT, GmbH, Marktheidenfeld-Altfeld.

Document type

Bibliography

Identifiers

YADDA identifier

bwmeta1.element.bwnjournal-article-amcv18i3p409bwm