Article · Original scientific text
Title
On nearly selfoptimizing strategies for multiarmed bandit problems with controlled arms
Authors
Affiliations
- Institute of Computer Science, Białystok Technical University, Wiejska 45a, 15-351 Białystok, Poland
Abstract
Two kinds of strategies for a multiarmed Markov bandit problem with controlled arms are considered: a strategy with forcing and a strategy with randomization. The choice of arm and control function in both cases is based on the current value of the average cost per unit time functional. Some simulation results are also presented.
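The abstract's first kind of strategy, forcing, can be illustrated with a minimal simulation. The sketch below is not the paper's construction: it is a generic certainty-equivalence scheme with forcing for an i.i.d.-cost bandit, in which a sparse deterministic set of forcing times guarantees that every arm keeps being explored, while at all other times the arm with the smallest empirical average cost is played. The function name, the square-number forcing set, and the Gaussian cost model are all illustrative assumptions.

```python
import random

def forcing_strategy(arm_costs, horizon, forcing_times=None, seed=0):
    """Simulate a certainty-equivalence strategy with forcing (illustrative).

    At each forcing time the arms are pulled in round-robin order, so every
    arm is sampled infinitely often; otherwise the arm with the smallest
    empirical average cost so far is pulled.  `arm_costs` is a list of
    functions, each drawing one random cost for its arm.
    Returns (average cost per unit time, pull counts per arm).
    """
    rng = random.Random(seed)
    n_arms = len(arm_costs)
    if forcing_times is None:
        # Sparse forcing set: the squares, so forced pulls grow like sqrt(t)
        # and the forcing cost vanishes in the average-cost functional.
        forcing_times = {k * k for k in range(1, int(horizon ** 0.5) + 1)}
    totals = [0.0] * n_arms   # cumulative cost per arm
    pulls = [0] * n_arms      # pull count per arm
    total_cost = 0.0
    forced = 0
    for t in range(1, horizon + 1):
        if t in forcing_times:
            arm = forced % n_arms      # forced exploration, round-robin
            forced += 1
        else:
            # Exploit: smallest empirical average cost; never-pulled arms first.
            arm = min(range(n_arms),
                      key=lambda a: totals[a] / pulls[a] if pulls[a]
                                    else float("-inf"))
        c = arm_costs[arm](rng)
        totals[arm] += c
        pulls[arm] += 1
        total_cost += c
    return total_cost / horizon, pulls
```

With two Gaussian-cost arms of means 1.0 and 0.5, almost all non-forced pulls concentrate on the cheaper arm, and the average cost per unit time approaches the better arm's mean, which is the self-optimizing behaviour the abstract refers to.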
Keywords
selfoptimizing strategies, adaptive control, invariant measure, multiarmed bandit, stochastic control
Bibliography
- R. Agrawal, Minimizing the learning loss in adaptive control of Markov chains under the weak accessibility condition, J. Appl. Probab. 28 (1991), 779-790.
- R. Agrawal and D. Teneketzis, Certainty equivalence control with forcing: revisited, Systems Control Lett. 13 (1989), 405-412.
- V. Anantharam, P. Varaiya and J. Walrand, Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays - Part I: i.i.d. rewards, IEEE Trans. Automat. Control AC-32 (11) (1987), 969-977.
- V. Anantharam, P. Varaiya and J. Walrand, Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays - Part II: Markovian rewards, ibid., 977-983.
- W. Feller, An Introduction to Probability Theory and its Applications, Vol. II, Wiley, New York, 1966.
- J. C. Gittins, Multi-armed Bandit Allocation Indices, Wiley, 1989.
- K. D. Glazebrook, On a sufficient condition for superprocesses due to Whittle, J. Appl. Probab. 19 (1982), 99-110.
- O. Hernández-Lerma, Adaptive Markov Control Processes, Springer, 1989.
- Ł. Stettner, On nearly self-optimizing strategies for a discrete-time uniformly ergodic adaptive model, Appl. Math. Optim. 27 (1993), 161-177.