Full-text resources of PLDML and other domain databases are now available in the new Biblioteka Nauki.
Visit https://bibliotekanauki.pl

Results found: 7


Search results

Search query:
in keywords: Markov chains
1
100%
EN
Our paper considers open populations with arrivals and departures whose elements are subject to periodic reclassifications. These populations will be divided into a finite number of sub-populations. Assuming that: a) entries, reclassifications and departures occur at the beginning of the time units; b) elements are reallocated at equally spaced times; c) numbers of new elements entering at the beginning of the time units are realizations of independent Poisson distributed random variables; we use Markov chains to obtain limit results for the relative sizes of the sub-populations corresponding to the states of the chain. Namely we will obtain conditions for stability of the relative sizes for transient and recurrent states as well as for all states. The existence of such stability corresponds to the existence of a stochastic structure based either on the transient or on the recurrent states or even on all states. We call these structures stochastic vortices because the structure is maintained despite entrances, departures and reallocations.
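The limiting behaviour described above can be sketched numerically. The sub-stochastic reallocation matrix and Poisson arrival rates below are invented for illustration; in expectation the sub-population sizes follow the recursion n_{t+1} = n_t P + λ and settle to a fixed point whose relative sizes are the stable structure the abstract calls a stochastic vortex:

```python
import numpy as np

# Hypothetical sub-stochastic reallocation matrix: row sums < 1,
# and the missing mass is the per-step departure probability.
P = np.array([[0.6, 0.2, 0.1],
              [0.1, 0.7, 0.1],
              [0.0, 0.2, 0.7]])
lam = np.array([5.0, 2.0, 1.0])  # mean Poisson arrivals per state per step

# Expected sub-population sizes evolve as n_{t+1} = n_t P + lam.
n = np.zeros(3)
for _ in range(500):
    n = n @ P + lam

# The limit is the fixed point n* = lam (I - P)^{-1}; the relative
# sizes n* / sum(n*) are the stable stochastic structure.
n_star = lam @ np.linalg.inv(np.eye(3) - P)
print(np.round(n / n.sum(), 4))
```

Because the spectral radius of the sub-stochastic matrix is below one, the recursion converges geometrically to the fixed point regardless of the initial sizes.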
2
Content available remote

Why the Kemeny Time is a constant

80%
EN
We present a new fundamental intuition for why the Kemeny time of a Markov chain is a constant. This new perspective has interesting further implications.
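The constancy referred to here can be checked numerically: for an ergodic chain, the mean time to reach a target state drawn from the stationary distribution is the same from every start state. A sketch via the fundamental matrix, with an assumed 3-state transition matrix:

```python
import numpy as np

# An assumed small ergodic transition matrix.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
n = P.shape[0]

# Stationary distribution: solve pi (P - I) = 0 with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Fundamental matrix Z = (I - P + 1 pi)^{-1}.
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))

# Mean first-passage times m_ij = (Z_jj - Z_ij) / pi_j (with m_ii = 0).
M = (np.diag(Z)[None, :] - Z) / pi[None, :]

# Kemeny time K_i = sum_j pi_j m_ij is the same for every start state i.
K = M @ pi
print(np.round(K, 6))  # all entries equal
```

Summing the passage-time formula over j shows why: K_i = trace(Z) − Σ_j Z_ij, and the rows of Z all sum to one, so every K_i equals trace(Z) − 1.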
3
80%
EN
Our research is centred on the stochastic structure of matched open populations subject to periodic reclassifications. These populations are divided into sub-populations. In our application we considered two populations of customers of a bank: with and without an account manager. Two or more such populations are matched when there is a 1-1 correspondence between their sub-populations and elements can move between sub-populations of one population if and only if the same moves are possible between the corresponding sub-populations of the other. So we have inflows and outflows of elements, along with several sub-populations in which the elements can be placed. It is thus natural to use Markov chains to model these populations. Besides this Markov-chain study, we show how to carry out an analysis-of-variance-like analysis of entries into and departures from the populations of customers. Our purpose is to study the flows of customers in and out of the classes of the two populations and to investigate the influence of the factors year, class and region. We used likelihood ratio tests for the hypotheses formulated on the basis of these factors. In our work we found that the major hypotheses were all rejected. This raises the question of which effects and interactions are truly relevant. Looking for an answer to this problem, we present a first partition of the change in the log-likelihood. This partition is very similar to the analysis of variance for crossed factors, which allowed us to use established algebraic results, see Fonseca et al. (2003, 2006), for models with balanced crossing.
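As a generic illustration of the likelihood ratio testing mentioned above (not the paper's data or model), a G-test of independence on a small made-up two-way table of counts, such as customer entries cross-classified by two factors:

```python
import numpy as np

# Made-up counts: rows and columns stand for two classifying factors.
O = np.array([[30.0, 10.0],
              [20.0, 40.0]])

# Expected counts under the null hypothesis of independence.
E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()

# Likelihood ratio statistic G^2 = 2 sum O log(O / E); asymptotically
# chi-square with (r - 1)(c - 1) degrees of freedom under the null.
G2 = 2.0 * np.sum(O * np.log(O / E))
df = (O.shape[0] - 1) * (O.shape[1] - 1)
print(G2, df)
```

A large G² relative to the chi-square critical value for `df` degrees of freedom leads to rejection, as happened for the major hypotheses in the abstract.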
4
Content available remote

A generalization of Ueno's inequality for n-step transition probabilities

80%
EN
We provide a generalization of Ueno's inequality for n-step transition probabilities of Markov chains in a general state space. Our result is relevant to the study of adaptive control problems and approximation problems in the theory of discrete-time Markov decision processes and stochastic games.
5
EN
In this paper we present the extraproximal method for computing the Stackelberg/Nash equilibria in a class of ergodic controlled finite Markov chain games. We state the original game formulation in terms of coupled nonlinear programming problems implementing the Lagrange principle. In addition, Tikhonov's regularization method is employed to ensure the convergence of the cost functions to a Stackelberg/Nash equilibrium point. Then, we transform the problem into a system of equations in the proximal format. We present a two-step iterated procedure for solving the extraproximal method: (a) the first step (the extra-proximal step) consists of a “prediction” which calculates the preliminary position approximation to the equilibrium point, and (b) the second step is designed to find a “basic adjustment” of the previous prediction. The procedure is called the “extraproximal method” because of the use of an extrapolation. Each equation in this system is an optimization problem for which the necessary and sufficient condition for a minimum is solved using a quadratic programming method. This solution approach provides a drastically quicker rate of convergence to the equilibrium point. We present the analysis of the convergence as well as the rate of convergence of the method, which is one of the main results of this paper. Additionally, the extraproximal method is developed in terms of Markov chains for Stackelberg games. Our goal is to analyze completely a three-player Stackelberg game consisting of a leader and two followers. We provide all the details needed to implement the extraproximal method in an efficient and numerically stable way. For instance, a numerical technique is presented for computing the first-step parameter (λ) of the extraproximal method. The usefulness of the approach is successfully demonstrated by a numerical example related to a pricing oligopoly model for airline companies.
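The two-step predict-then-adjust pattern described above is shared with the classical extragradient method; a minimal sketch on a bilinear saddle point (not the paper's Markov-game formulation), where the extrapolation is exactly what makes the iteration converge:

```python
# Two-step predict/correct (extragradient) iteration on the bilinear
# saddle point min_x max_y x*y. The plain one-step gradient method
# spirals away from the solution (0, 0); the extrapolated method
# converges. This only illustrates the predict/adjust pattern, not
# the paper's extraproximal formulation for Markov-chain games.
step = 0.5
x, y = 1.0, 1.0
for _ in range(200):
    # (a) prediction: a preliminary approximation of the next point
    xp = x - step * y
    yp = y + step * x
    # (b) basic adjustment: re-evaluate the gradients at the prediction
    x = x - step * yp
    y = y + step * xp
print(x, y)  # both near 0
```

With step γ and the bilinear operator here, one iteration multiplies the error by a matrix of spectral radius √((1 − γ²)² + γ²) < 1, so the iterates contract toward the equilibrium.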
6
Content available remote

Influence of preconditioning and blocking on accuracy in solving Markovian models

70%
EN
The article considers the effectiveness of various methods used to solve the systems of linear equations which emerge while modelling computer networks and systems with Markov chains, and the practical influence of the applied methods on accuracy. The paper considers some hybrids of both direct and iterative methods. Two varieties of Gaussian elimination are considered as examples of direct methods: the LU factorization method and the WZ factorization method. The Gauss-Seidel iterative method is discussed as well. The paper also shows preconditioning (with the use of incomplete Gaussian elimination) and dividing the matrix into blocks, where the blocks are solved with direct methods. The motivation for such hybrids is the very high condition number of the coefficient matrices occurring in Markov chains and, thus, the slow convergence of traditional iterative methods. The blocking, the preconditioning and their combination are also analysed. The paper presents the impact of the linked methods on both the time and the accuracy of finding the probability vector. The results of an experiment are given for two groups of matrices: those derived from some very abstract Markovian models, and those from a general 2D Markov chain.
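A minimal contrast between a direct and an iterative solve of the stationary equations, on an assumed small chain (the models in the paper are far larger and much worse conditioned, which is precisely its point):

```python
import numpy as np

# Assumed small transition matrix; solve pi P = pi with sum(pi) = 1.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
n = P.shape[0]

# Direct method: replace one equation of (P^T - I) pi = 0 with the
# normalization constraint and solve by Gaussian elimination (LU).
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi_direct = np.linalg.solve(A, b)

# Iterative method: power iteration; Gauss-Seidel would additionally
# reuse freshly updated components within each sweep.
pi_iter = np.full(n, 1.0 / n)
for _ in range(2000):
    pi_iter = pi_iter @ P

print(np.round(pi_direct, 6), np.round(pi_iter, 6))
```

On a well-conditioned toy matrix the two answers agree to machine precision; the hybrids studied in the paper exist because that agreement breaks down for ill-conditioned Markovian coefficient matrices.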
7
Content available remote

Directed forests with application to algorithms related to Markov chains

41%
EN
This paper is devoted to computational problems related to Markov chains (MC) on a finite state space. We present formulas and bounds for characteristics of MCs using directed forest expansions given by the Matrix Tree Theorem. These results are applied to analysis of direct methods for solving systems of linear equations, aggregation algorithms for nearly completely decomposable MCs and the Markov chain Monte Carlo procedures.
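The directed-forest connection can be illustrated directly: by the Markov chain tree theorem, for an irreducible chain π_i is proportional to the i-th principal cofactor of I − P, i.e. to the total weight of directed spanning trees rooted at i. A sketch on an assumed 3-state chain, cross-checked against a direct solve:

```python
import numpy as np

# Assumed irreducible transition matrix.
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2],
              [0.1, 0.6, 0.3]])
n = P.shape[0]
A = np.eye(n) - P

def principal_minor(A, i):
    """Determinant of A with row i and column i removed."""
    idx = [j for j in range(A.shape[0]) if j != i]
    return np.linalg.det(A[np.ix_(idx, idx)])

# Tree weights: i-th principal cofactor of I - P (Matrix Tree Theorem).
w = np.array([principal_minor(A, i) for i in range(n)])
pi_trees = w / w.sum()

# Cross-check against a direct linear solve of pi P = pi, sum(pi) = 1.
B = P.T - np.eye(n)
B[-1, :] = 1.0
rhs = np.zeros(n)
rhs[-1] = 1.0
pi_solve = np.linalg.solve(B, rhs)
print(np.round(pi_trees, 6), np.round(pi_solve, 6))
```

The forest expansion is exact, not approximate, which is what makes it usable for the bounds and aggregation analyses the abstract mentions.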