Search results: 2 found
1
Content available remote

Quasi-maximum likelihood estimator of Laplace (1, 1) for GARCH models

100%
EN
This paper studies the quasi-maximum likelihood estimator (QMLE) for the generalized autoregressive conditional heteroscedastic (GARCH) model based on Laplace (1,1) residuals. First, the QMLE is proposed for the parameter vector of the GARCH model with Laplace (1,1) residuals. Under certain conditions, the strong consistency and asymptotic normality of the QMLE are then established. Next, a real example with the Laplace and normal distributions is analyzed to evaluate the performance of the QMLE, and some comparison results are given. Finally, the proofs of the theorems are presented.
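
Below is a minimal sketch, in Python, of how a Laplace-based QMLE for a GARCH(1,1) model can be computed. It is not the paper's estimator: the unit-scale normalization of the Laplace quasi-likelihood, the parameter names, the initialization of the volatility recursion, and the optimizer are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def laplace_qmle_garch11(eps, theta0=(0.05, 0.1, 0.8)):
    """Quasi-maximum likelihood estimation of a GARCH(1,1) model with a
    Laplace quasi-likelihood (assumed normalization: unit-scale Laplace
    density for the standardized residuals; this may differ from the
    paper's Laplace (1,1) specification).

    eps : array of mean-zero residuals e_t = sigma_t * eta_t.
    """
    eps = np.asarray(eps, dtype=float)

    def neg_qll(theta):
        omega, alpha, beta = theta
        # reject parameter values violating positivity / rough stationarity
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return np.inf
        sigma2 = np.empty_like(eps)
        sigma2[0] = eps.var()              # common initialization choice
        for t in range(1, len(eps)):
            sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        sigma = np.sqrt(sigma2)
        # Laplace quasi-log-likelihood with constants dropped:
        # -log f(e_t) = log sigma_t + |e_t| / sigma_t  (up to scaling)
        return np.sum(np.log(sigma) + np.abs(eps) / sigma)

    res = minimize(neg_qll, x0=np.asarray(theta0), method="Nelder-Mead")
    return res.x, res

# Usage: simulate a GARCH(1,1) series with Laplace innovations and
# recover the parameters (omega, alpha, beta) = (0.1, 0.1, 0.8).
rng = np.random.default_rng(0)
omega, alpha, beta, n = 0.1, 0.1, 0.8, 5000
eps = np.empty(n)
sigma2 = omega / (1 - alpha - beta)
for t in range(n):
    eta = rng.laplace(scale=1 / np.sqrt(2))   # unit-variance Laplace innovations
    eps[t] = np.sqrt(sigma2) * eta
    sigma2 = omega + alpha * eps[t] ** 2 + beta * sigma2

theta_hat, _ = laplace_qmle_garch11(eps)
print("estimated (omega, alpha, beta):", theta_hat)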
2
Content available remote

Prediction of time series by statistical learning: general losses and fast rates

52%
EN
We establish rates of convergence in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence √(d/n) for the Gibbs estimator under the absolute loss were given in a previous work [7], where n is the sample size and d the dimension of the set of predictors. Under the same weak dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the classical penalized ERM procedure. We apply this method to quantile forecasting of the French GDP. Under additional conditions on the loss functions (satisfied by the quadratic loss function) and for uniformly mixing processes, we prove that the Gibbs estimator actually achieves fast rates of convergence d/n. We discuss the optimality of these different rates, pointing out references to lower bounds when they are available. In particular, these results generalize the results of [29] on sparse regression estimation to some autoregression settings.
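
The following Python sketch illustrates a penalized ERM procedure for quantile forecasting with the pinball loss, which is one example of a convex Lipschitz loss covered by the result above. The linear autoregressive predictor class, the L1 penalty, the function names, and all parameter values are illustrative assumptions, not the construction used in the paper.

import numpy as np
from scipy.optimize import minimize

def pinball_loss(u, tau):
    """Quantile ('pinball') loss, a convex Lipschitz loss function."""
    return np.maximum(tau * u, (tau - 1) * u)

def penalized_erm_quantile_ar(x, d, tau=0.5, lam=0.01):
    """Minimal sketch of penalized ERM for quantile forecasting with a
    linear autoregressive predictor of order d: minimize the empirical
    pinball loss plus an L1 penalty (an illustrative penalty choice).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # lagged design matrix: row i contains (1, x_{t-1}, ..., x_{t-d}) for t = d + i
    X = np.column_stack(
        [np.ones(n - d)] + [x[d - j - 1:n - j - 1] for j in range(d)]
    )
    y = x[d:]

    def objective(theta):
        residuals = y - X @ theta
        return pinball_loss(residuals, tau).mean() + lam * np.abs(theta[1:]).sum()

    res = minimize(objective, x0=np.zeros(d + 1), method="Nelder-Mead")
    return res.x

# Usage on a toy AR(1) series: forecast the conditional median (tau = 0.5).
rng = np.random.default_rng(1)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
theta = penalized_erm_quantile_ar(x, d=2, tau=0.5)
print("fitted coefficients (intercept, lag 1, lag 2):", theta)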