Results found: 2


Search results

Searched for:
in keywords: weak dependence

EN
Let \(\mathbf{Y}=\left( \mathbf{Y}_{i}\right)\), where \(\mathbf{Y}_{i}=\left( Y_{i,1},\dots ,Y_{i,d}\right)\), \(i=1,2,\dots\), be a \(d\)-dimensional, identically distributed, stationary, centered process with uniform marginals and joint cdf \(F\), and let \(F_{n}\left( \mathbf{x}\right) :=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}\left( Y_{i,1}\leq x_{1},\dots ,Y_{i,d}\leq x_{d}\right)\) denote the corresponding empirical cdf. In our work, we prove the almost sure central limit theorem for the empirical process \(B_{n}=\sqrt{n}\left( F_{n}-F\right)\) under weak dependence conditions due to Doukhan and Louhichi. An application of the established result to copula processes is also presented.
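The abstract above defines the multivariate empirical cdf \(F_{n}\) and the empirical process \(B_{n}=\sqrt{n}\left( F_{n}-F\right)\). A minimal numerical sketch of these two quantities follows; the simulated data, the evaluation point, and the use of the independence (product) copula as the known cdf \(F\) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def empirical_cdf(Y, x):
    """F_n(x) = (1/n) * #{ i : Y_{i,1} <= x_1, ..., Y_{i,d} <= x_d }."""
    Y = np.asarray(Y)                        # sample, shape (n, d)
    return np.mean(np.all(Y <= x, axis=1))

def empirical_process(Y, x, F):
    """B_n(x) = sqrt(n) * (F_n(x) - F(x)) for a known joint cdf F."""
    n = len(Y)
    return np.sqrt(n) * (empirical_cdf(Y, x) - F(x))

# Illustrative data: independent U(0,1) coordinates, so the joint cdf
# is the product (independence) copula F(x) = x_1 * ... * x_d.
rng = np.random.default_rng(0)
Y = rng.uniform(size=(1000, 2))
x = np.array([0.3, 0.7])
print(empirical_process(Y, x, F=lambda x: float(np.prod(x))))
```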

Prediction of time series by statistical learning: general losses and fast rates

EN
We establish rates of convergence in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence \(\sqrt{d/n}\) for the Gibbs estimator under the absolute loss were given in a previous work [7], where \(n\) is the sample size and \(d\) the dimension of the set of predictors. Under the same weak dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the classical penalized ERM procedure. We apply this method to quantile forecasting of the French GDP. Under additional conditions on the loss functions (satisfied by the quadratic loss function) and for uniformly mixing processes, we prove that the Gibbs estimator actually achieves fast rates of convergence \(d/n\). We discuss the optimality of these different rates, pointing out references to lower bounds when they are available. In particular, these results generalize the results of [29] on sparse regression estimation to an autoregression setting.
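The Gibbs estimator mentioned in this abstract is an exponentially weighted aggregate over a set of predictors, with weights driven by the empirical risk. Below is a minimal sketch of such an aggregate over a finite grid of AR(1)-type predictors under the absolute loss; the predictor grid, the temperature parameter lam, and the simulated series are illustrative assumptions, and this is not the authors' implementation.

```python
import numpy as np

def gibbs_weights(cum_risks, lam):
    """Gibbs weights: proportional to exp(-lam * empirical risk)."""
    w = np.exp(-lam * (cum_risks - cum_risks.min()))   # shift for numerical stability
    return w / w.sum()

def one_step_forecast(series, thetas, lam=1.0):
    """Aggregate the AR(1)-type predictors y_hat_t = theta * y_{t-1}."""
    y = np.asarray(series, dtype=float)
    preds = np.outer(thetas, y[:-1])                    # shape (K, n-1): one row per predictor
    risks = np.abs(preds - y[1:]).sum(axis=1)           # cumulative absolute loss per predictor
    w = gibbs_weights(risks, lam)
    return float(w @ (thetas * y[-1]))                  # weighted forecast of the next value

# Illustrative usage on a simulated series.
rng = np.random.default_rng(1)
y = 0.1 * np.cumsum(rng.normal(size=200))
thetas = np.linspace(-1.0, 1.0, 41)                     # finite grid of predictors
print(one_step_forecast(y, thetas, lam=0.5))
```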