



Article title

Sample path average optimality of Markov control processes with strictly unbounded cost

Content / Full text

Title variants

Publication languages

EN

Abstracts

EN
We study the existence of sample path average cost (SPAC-) optimal policies for Markov control processes on Borel spaces with strictly unbounded costs, i.e., costs that grow without bound on the complement of compact subsets. Assuming only that the cost function is lower semicontinuous and that the transition law is weakly continuous, we show the existence of a relaxed policy with 'minimal' expected average cost, and that the optimal average cost is the limit of discounted programs. Moreover, we show that if such a policy induces a positive Harris recurrent Markov chain, then it is also SPAC-optimal. We apply our results to inventory systems and, in a particular case, we explicitly compute a deterministic stationary SPAC-optimal policy.
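
Stated precisely (a standard formulation in this literature, in our notation rather than necessarily the paper's): with c(x_t, a_t) the stage cost along a trajectory, the sample path average cost of a policy \pi from initial state x is

    J(\pi, x) := \limsup_{n \to \infty} \frac{1}{n} \sum_{t=0}^{n-1} c(x_t, a_t),

and \pi^* is SPAC-optimal when J(\pi^*, x) equals the optimal average value \rho^* almost surely, while J(\pi, x) \ge \rho^* almost surely under every policy \pi and every initial state. "The limit of discounted programs" refers to the vanishing-discount relation \rho^* = \lim_{\alpha \uparrow 1} (1 - \alpha) V_\alpha(x), where V_\alpha denotes the \alpha-discounted value function.

For the inventory application, the sketch below illustrates what the SPAC criterion measures: it simulates a single-product model with backlogging under a base-stock policy and estimates the pathwise average cost along one trajectory. The dynamics, cost parameters (c, h, p) and base-stock level S are hypothetical stand-ins chosen for illustration; they are not the paper's model or its explicitly computed optimal policy.

    import numpy as np

    def sample_path_average_cost(S=10.0, horizon=200_000, h=1.0, p=4.0, c=2.0, seed=0):
        # Hypothetical model: inventory x evolves as x <- x + a - D with
        # i.i.d. demand D ~ Exp(1); the stage cost is ordering (c*a) plus
        # holding (h*max(x, 0)) plus backlog (p*max(-x, 0)) cost.
        rng = np.random.default_rng(seed)
        x, total = 0.0, 0.0
        for _ in range(horizon):
            a = max(S - x, 0.0)          # base-stock (order-up-to-S) policy
            total += c * a + h * max(x, 0.0) + p * max(-x, 0.0)
            x = x + a - rng.exponential(1.0)
        return total / horizon           # Cesaro average along one sample path

    if __name__ == "__main__":
        # When the induced chain is positive Harris recurrent, this pathwise
        # average converges almost surely; here it is simply estimated by
        # simulating one long trajectory.
        print(sample_path_average_cost())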

Year

1999
Volume

26

Issue

4

Pages

363-381

Physical description

Dates

published
1999
received
1997-04-07
revised
1998-12-15

Authors

  • Departamento de Matemáticas, Universidad de Sonora, Blvd. Transversal y Rosales s/n, C.P. 83000, Hermosillo, Sonora, México

Bibliography

  • A. Arapostathis, V. S. Borkar, E. Fernández-Gaucherand, M. K. Ghosh and S. I. Marcus (1993), Discrete-time controlled Markov processes with an average cost criterion: a survey, SIAM J. Control Optim. 31, 282-344.
  • D. P. Bertsekas (1987), Dynamic Programming: Deterministic and Stochastic Models, Prentice-Hall, Englewood Cliffs, NJ.
  • D. P. Bertsekas and S. E. Shreve (1978), Stochastic Optimal Control: The Discrete Time Case, Academic Press, New York.
  • P. Billingsley (1968), Convergence of Probability Measures, Wiley.
  • V. S. Borkar (1991), Topics in Controlled Markov Chains, Pitman Res. Notes Math. Ser. 240, Longman Sci. Tech.
  • R. Cavazos-Cadena and E. Fernández-Gaucherand (1995), Denumerable controlled Markov chains with average reward criterion: sample path optimality, Z. Oper. Res. 41, 89-108.
  • R. M. Dudley (1989), Real Analysis and Probability, Wadsworth & Brooks/Cole, Pacific Grove, CA.
  • P. Hall and C. C. Heyde (1980), Martingale Limit Theory and Its Application, Academic Press.
  • O. Hernández-Lerma (1993), Existence of average optimal policies in Markov control processes with strictly unbounded costs, Kybernetika 29, 1-17.
  • O. Hernández-Lerma and J. B. Lasserre (1995), Invariant probabilities for Feller-Markov chains, J. Appl. Math. Stochastic Anal. 8, 341-345.
  • O. Hernández-Lerma and J. B. Lasserre (1996), Discrete-Time Markov Control Processes: Basic Optimality Criteria, Springer, New York.
  • O. Hernández-Lerma and J. B. Lasserre (1997), Policy iteration for average cost Markov control processes on Borel spaces, Acta Appl. Math., to appear.
  • O. Hernández-Lerma and M. Muñoz-de-Osak (1992), Discrete-time Markov control processes with discounted unbounded cost: optimality criteria, Kybernetika 28, 191-212.
  • O. Hernández-Lerma, O. Vega-Amaya and G. Carrasco (1998), Sample-path optimality and variance-minimization of average cost Markov control processes, Reporte Interno #236, Departamento de Matemáticas, CINVESTAV-IPN, Mexico City.
  • K. Hinderer (1970), Foundations of Non-Stationary Dynamic Programming with Discrete Time Parameter, Lecture Notes in Oper. Res. and Math. Systems 33, Springer, Berlin.
  • J. B. Lasserre (1997), Sample-path average optimality for Markov control processes, Report No. 97102, LAAS-CNRS, Toulouse.
  • H. L. Lee and S. Nahmias (1993), Single-product, single-location models, in: Logistics of Production and Inventory, S. C. Graves, A. H. G. Rinnooy Kan and P. H. Zipkin (eds.), Handbooks in Operations Research and Management Science, Vol. 4, North-Holland, 3-51.
  • P. Mandl and M. Lausmanová (1991), Two extensions of asymptotic methods in controlled Markov chains, Ann. Oper. Res. 28, 67-80.
  • S. P. Meyn (1989), Ergodic theorems for discrete time stochastic systems using a stochastic Lyapunov function, SIAM J. Control Optim. 27, 1409-1439.
  • S. P. Meyn (1995), The policy iteration algorithm for average reward Markov decision processes with general state space, preprint, Coordinated Science Laboratory, University of Illinois, Urbana, IL.
  • S. P. Meyn and R. L. Tweedie (1993), Markov Chains and Stochastic Stability, Springer, London.
  • M. Parlar and R. Rempała (1992), Stochastic inventory problem with piecewise quadratic holding cost function containing a cost-free interval, J. Optim. Theory Appl. 75, 133-153.
  • O. Vega-Amaya and R. Montes-de-Oca (1998), Application of average dynamic programming to inventory systems, Math. Methods Oper. Res. 47, 451-471.

Document type

Bibliography

Identifiers

YADDA identifier

bwmeta1.element.bwnjournal-article-zmv26i4p363bwm