
2006 | Vol. 16 | No. 3 | pp. 387-397

Article title

Modeling shortest path games with Petri nets: a Lyapunov based theory

Authors

Content

Title variants

Publication languages

EN

Abstracts

EN
In this paper we introduce a new modeling paradigm for representing shortest path games with Petri nets. Whereas previous works have restricted attention to tracking the net using Bellman's equation as a utility function, this work uses a Lyapunov-like function. In this sense, we replace the traditional cost function with a trajectory-tracking function which is also an optimal cost-to-target function. This makes a significant difference in the conceptualization of the problem domain, allowing the Nash equilibrium point to be replaced by the Lyapunov equilibrium point in game theory. We show that the Lyapunov equilibrium point coincides with the Nash equilibrium point; as a consequence, all equilibrium and stability properties are preserved. This is the most important contribution of this work. The potential of the approach lies in the simplicity of its formal proof of the existence of an equilibrium point.
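The abstract's key observation — that the optimal cost-to-target function solving the Bellman equation can double as a Lyapunov-like function, strictly decreasing along optimal trajectories toward the target (equilibrium) state — can be sketched on a toy example. The graph below is only an illustrative stand-in for a Petri-net reachability graph, reduced to the single-player shortest path case; the node names, costs, and helper functions are assumptions, not the paper's construction.

```python
import math

# Toy directed graph standing in for a Petri-net reachability graph:
# markings m0..m3, with m3 as the target (equilibrium) marking.
edges = {            # s -> list of (successor, transition cost)
    "m0": [("m1", 2.0), ("m2", 5.0)],
    "m1": [("m2", 1.0), ("m3", 4.0)],
    "m2": [("m3", 2.0)],
    "m3": [],        # target marking: no outgoing transitions
}
target = "m3"

def cost_to_target(edges, target):
    """Value iteration for the Bellman equation J(s) = min_{s'} c(s,s') + J(s')."""
    J = {s: (0.0 if s == target else math.inf) for s in edges}
    for _ in range(len(edges)):          # enough sweeps for an acyclic toy graph
        for s, succ in edges.items():
            if succ:
                J[s] = min(c + J[t] for t, c in succ)
    return J

def optimal_trajectory(edges, J, start, target):
    """Follow Bellman-greedy moves; J* decreases strictly at every step."""
    path = [start]
    while path[-1] != target:
        s = path[-1]
        path.append(min(edges[s], key=lambda tc: tc[1] + J[tc[0]])[0])
    return path

J = cost_to_target(edges, target)
traj = optimal_trajectory(edges, J, "m0", target)

# Lyapunov-like property: the optimal cost-to-target strictly decreases
# along the optimal trajectory until the target marking is reached.
assert all(J[a] > J[b] for a, b in zip(traj, traj[1:]))
```

Here the decrease of `J` along every optimal transition is exactly the Lyapunov-style reading of the cost-to-target function; the paper's contribution is showing that this viewpoint carries over to the game setting, where the resulting Lyapunov equilibrium point coincides with the Nash equilibrium point.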

Year

Volume

16

Issue

3

Pages

387-397

Physical description

Dates

published
2006
received
2005-11-29
revised
2006-04-12
(unknown)
2006-07-07

Contributors

  • Center for Computing Research, National Polytechnic Institute, Av. Juan de Dios Batiz s/n, Edificio CIC, Col. Nueva Industrial Vallejo, 07738, Mexico City, Mexico

Bibliography

  • Axelrod R. (1984): The Evolution of Cooperation. - New York: Basic Books.
  • Bellman R.E. (1957): Dynamic Programming. - Princeton: Princeton University Press.
  • Bertsekas D.P. and Shreve S.E. (1978): Stochastic Optimal Control: The Discrete Time Case. - New York: Academic Press.
  • Bertsekas D.P. (1987): Dynamic Programming: Deterministic and Stochastic Models. - Englewood Cliffs: Prentice-Hall.
  • Bertsekas D.P. and Tsitsiklis J.N. (1989): Parallel and Distributed Computation: Numerical Methods. - Englewood Cliffs: Prentice-Hall.
  • Bertsekas D.P. and Tsitsiklis J.N. (1991): An analysis of stochastic shortest path problems. - Math. Oper. Res., Vol. 16, No. 3, pp. 580-595.
  • Blackwell D. (1967): Positive dynamic programming. - Proc. 5th Berkeley Symp. Math., Statist., and Probability, Berkeley, California, Vol. 1, pp. 415-418.
  • Clempner J. (2005): Colored decision process Petri nets: Modeling, analysis and stability. - Int. J. Appl. Math. Comput. Sci., Vol. 15, No. 3, pp. 405-420.
  • Clempner J. (2006): Towards modeling the shortest path problem and games with Petri nets. - Proc. Doctoral Consortium at 27th Int. Conf. Application and Theory of Petri Nets and Other Models of Concurrency, Turku, Finland, pp. 1-12.
  • Derman C. (1970): Finite State Markovian Decision Processes. - New York: Academic Press.
  • Dynkin E.B. (1963): The optimum choice of the instant for stopping a Markov process. - Soviet Math. Doklady, Vol. 150, pp. 238-240.
  • Eaton J.H. and Zadeh L.A. (1962): Optimal pursuit strategies in discrete state probabilistic systems. - Trans. ASME Ser. D, J. Basic Eng., Vol. 84, pp. 23-29.
  • Grigelionis R.I. and Shiryaev A.N. (1966): On Stefan's problem and optimal stopping rules for Markov processes. - Theory Prob. Applic., Vol. 11, pp. 541-558.
  • Hernandez-Lerma O. and Lasserre J.B. (1996): Discrete-Time Markov Control Processes: Basic Optimality Criteria. - Berlin: Springer.
  • Hernandez-Lerma O., Carrasco G. and Perez-Hernandez R. (1999): Markov control processes with the expected total cost criterion: Optimality, stability and transient models. - Acta Applicandae Mathematicae, Vol. 59, No. 3, pp. 229-269.
  • Hernandez-Lerma O. and Lasserre J.B. (1999): Further Topics on Discrete-Time Markov Control Processes. - Berlin: Springer.
  • Hinderer K. and Waldmann K.H. (2003): The critical discount factor for finite Markovian decision processes with an absorbing set. - Math. Meth. Oper. Res., Vol. 57, pp. 1-19.
  • Hinderer K. and Waldmann K.H. (2005): Algorithms for countable state Markov decision models with an absorbing set. - SIAM J. Contr. Optim., Vol. 43, pp. 2109-2131.
  • Howard R.A. (1960): Dynamic Programming and Markov Processes. - Cambridge, MA: MIT Press.
  • Kalman R.E. and Bertram J.E. (1960): Control system analysis and design via the 'Second Method' of Lyapunov. - J. Basic Eng., ASME, Vol. 82(D), pp. 371-393.
  • Kuhn H.W. and Nasar S. (Eds.) (2002): The Essential John Nash. - Princeton: Princeton University Press.
  • Kumar P.R. and Shiau T.H. (1981): Zero sum dynamic games, In: Control and Dynamic Games, (C.T. Leondes, Ed.) - Academic Press, pp. 1345-1378.
  • Kushner H.J. and Chamberlain S.G. (1969): Finite state stochastic games: Existence theorems and computational procedures. - IEEE Trans. Automat. Contr., Vol. 14, No. 3.
  • Kushner H. (1971): Introduction to Stochastic Control. - New York: Holt, Rinehart and Winston.
  • Lakshmikantham V., Leela S. and Martynyuk A.A. (1990): Practical Stability of Nonlinear Systems. - Singapore: World Scientific.
  • Lakshmikantham V., Matrosov V.M. and Sivasundaram S. (1991): Vector Lyapunov Functions and Stability Analysis of Nonlinear Systems. - Dordrecht: Kluwer.
  • Mandl P. and Seneta E. (1969): The theory of non-negative matrices in a dynamic programming problem. - Austral. J. Stat., Vol. 11, pp. 85-96.
  • Nash J. (1951): Non-cooperative games. - Ann. Math., Vol. 54, pp. 287-295.
  • Nash J. (1996): Essays on Game Theory. - Cheltenham: Elgar.
  • Pallu de la Barriere R. (1967): Optimal Control Theory. - Philadelphia: Saunders.
  • Patek S.D. (1997): Stochastic Shortest Path Games: Theory and Algorithms. - Ph.D. thesis, Dept. Electr. Eng. Comput. Sci., MIT, Cambridge, MA.
  • Patek S.D. and Bertsekas D.P. (1999): Stochastic shortest path games. - SIAM J. Contr. Optim., Vol. 37, No. 3, pp. 804-824.
  • Patek S.D. (2001): On terminating Markov decision processes with a risk averse objective function. - Automatica, Vol. 37, No. 9, pp. 1379-1386.
  • Pliska S.R. (1978): On the transient case for Markov decision chains with general state space, In: Dynamic Programming and Its Applications, (M.L. Puterman, Ed.). - New York: Springer, pp. 335-349.
  • Puterman M.L. (1994): Markov Decision Processes: Discrete Stochastic Dynamic Programming. - New York: Wiley.
  • Rieder U. (1975): Bayesian dynamic programming. - Adv. Appl. Prob., Vol. 7, pp. 330-348.
  • Shapley L.S. (1953): Stochastic Games. - Proc. National. Acad. Sci., Mathematics, Vol. 39, pp. 1095-1100.
  • Shiryaev A.N. (1978): Optimal Stopping Problems. - New York: Springer.
  • Strauch R. (1966): Negative dynamic programming. - Ann. Math. Stat., Vol. 37, pp. 871-890.
  • Van der Wal J. (1981): Stochastic dynamic programming. - Math. Centre Tracts 139, Mathematisch Centrum, Amsterdam.
  • Veinott A.F., Jr. (1969): Discrete dynamic programming with sensitive discount optimality criteria. - Ann. Math. Stat., Vol. 40, No. 5, pp. 1635-1660.
  • Whittle P. (1983): Optimization over Time. - New York: Wiley, Vol. 2.

Document type

Bibliography

Identifiers

YADDA identifier

bwmeta1.element.bwnjournal-article-amcv16i3p387bwm