The full-text resources of PLDML and other subject databases are now available in the new Biblioteka Nauki.
Visit https://bibliotekanauki.pl

Results found: 6


Search results

Searched for:
in keywords: regression
1
Content available (full text)

On the estimation of the autocorrelation function

100%
EN
The autocorrelation function plays a very important role in several application areas involving stochastic processes. In fact, it forms the theoretical basis for spectral analysis, ARMA (and generalized) modeling, detection, etc. However, as is well known, the results obtained with the most common estimates of the autocorrelation function (biased or not) are frequently poor, even when a large number of points is available. On the other hand, some applications require fast correlations, and the usual estimators do not allow fast computation, even with the FFT. These facts motivated the search for alternative ways of computing the autocorrelation function. Nine estimators are presented and compared against the exact theoretical autocorrelation. As we will see, the best is the modified Burg AR estimate.
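The classical estimators the abstract calls "biased or not" can be sketched as follows; this is a minimal Python illustration on made-up white-noise data, dividing lag k by n (biased) or by n − k (unbiased). The AR/Burg-based estimators compared in the paper are not reproduced here.

```python
import numpy as np

def autocorr(x, max_lag, biased=True):
    """Classical sample autocorrelation estimates.

    biased=True divides every lag by n (guarantees a positive
    semi-definite sequence); biased=False divides lag k by n - k
    (unbiased for the autocovariance, but noisier at large lags).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    r = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        acc = np.dot(xc[:n - k], xc[k:])
        r[k] = acc / n if biased else acc / (n - k)
    return r / r[0]  # normalize so that r[0] == 1

# White noise: the true autocorrelation is 1 at lag 0 and 0 elsewhere.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
rho = autocorr(x, 10)
```

Even with 5000 points the nonzero lags only hover near zero rather than vanish, which is the kind of estimation error that motivates the alternative estimators discussed in the paper.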
2
Content available remote

On least squares estimation of Fourier coefficients and of the regression function

80%
EN
The problem of nonparametric function fitting with the observation model $y_i = f(x_i) + η_i$, i=1,...,n, is considered, where $η_i$ are independent random variables with zero mean value and finite variance, and $x_i \in [a,b] \subset \R^1$, i=1,...,n, form a random sample from a distribution with density $ϱ \in L^1[a,b]$ and are independent of the errors $η_i$, i=1,...,n. The asymptotic properties of the estimator $\widehat{f}_{N(n)}(x) = \sum_{k=1}^{N(n)} \widehat{c}_ke_k(x)$ for $f \in L^2[a,b]$ and $\widehat{c}^{N(n)}=(\widehat{c}_1,..., \widehat{c}_{N(n)})^T$ obtained by the least squares method, as well as the limits in probability of the estimators $\widehat{c}_k$, k=1,...,N, for fixed N, are studied in the case when the functions $e_k$, k=1,2,..., forming a complete orthonormal system in $L^2[a,b]$, are analytic.
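The setting can be sketched numerically. The choices below, $[a,b]=[0,1]$, uniform design density $ϱ \equiv 1$, a trigonometric orthonormal system, basis size $N(n)=7$, and the target $f$, are all illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.0, 1.0
n, N = 400, 7                       # sample size and number of basis functions

def e(k, x):
    """Complete orthonormal trigonometric system on [0, 1]."""
    if k == 0:
        return np.ones_like(x)
    if k % 2 == 1:
        return np.sqrt(2.0) * np.cos(2 * np.pi * ((k + 1) // 2) * x)
    return np.sqrt(2.0) * np.sin(2 * np.pi * (k // 2) * x)

f = lambda x: np.sin(2 * np.pi * x) + 0.5       # illustrative target in L^2[0, 1]
x = rng.uniform(a, b, n)                        # random design points
y = f(x) + 0.1 * rng.standard_normal(n)         # y_i = f(x_i) + eta_i

# Least-squares Fourier coefficients \hat c_1, ..., \hat c_{N(n)}.
E = np.column_stack([e(k, x) for k in range(N)])
c_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
f_hat = lambda t: sum(c_hat[k] * e(k, t) for k in range(N))
```

Since this $f$ lies in the span of the system, the least-squares coefficients recover the true Fourier coefficients ($c_0 = 0.5$, $c_2 = 1/\sqrt{2}$) up to noise of order $n^{-1/2}$.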
3
Content available remote

Consistency of trigonometric and polynomial regression estimators

80%
EN
The problem of nonparametric regression function estimation is considered using the complete orthonormal system of trigonometric functions or Legendre polynomials $e_k$, k=0,1,..., for the observation model $y_i = f(x_i) + η_i$, i=1,...,n, where the $η_i$ are independent random variables with zero mean value and finite variance, and the observation points $x_i\in[a,b]$, i=1,...,n, form a random sample from a distribution with density $ϱ\in L^1[a,b]$. Sufficient and necessary conditions are obtained for consistency in the sense of the errors $\Vert f-\widehat{f}_N\Vert$, $\vert f(x)-\widehat{f}_N(x)\vert$ for $x\in[a,b]$, and $E\Vert f-\widehat{f}_N\Vert^2$ of the projection estimator $\widehat{f}_N(x) = \sum_{k=0}^N\widehat{c}_ke_k(x)$ for $\widehat{c}_0,\widehat{c}_1,\ldots,\widehat{c}_N$ determined by the least squares method and $f\in L^2[a,b]$.
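The Legendre variant of the projection estimator can be sketched like this; $[a,b]=[-1,1]$, uniform design, $N=4$, and the target $f(x)=x^2$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(2)
n, N = 500, 4                              # sample size, highest degree

def e(k, x):
    """Orthonormal Legendre polynomial of degree k on [-1, 1]."""
    coef = np.zeros(k + 1)
    coef[k] = 1.0
    return np.sqrt((2 * k + 1) / 2.0) * legendre.legval(x, coef)

f = lambda x: x**2                          # illustrative target in L^2[-1, 1]
x = rng.uniform(-1.0, 1.0, n)               # random design points
y = f(x) + 0.1 * rng.standard_normal(n)     # y_i = f(x_i) + eta_i

# Projection estimator \hat f_N with least-squares coefficients.
E = np.column_stack([e(k, x) for k in range(N + 1)])
c_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
f_hat = lambda t: sum(c_hat[k] * e(k, t) for k in range(N + 1))
```

Because $x^2 = \tfrac13 P_0 + \tfrac23 P_2$, only the degree-0 and degree-2 coefficients are substantially nonzero, and the fitted curve tracks $f$ uniformly on $[-1,1]$ up to noise.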
4
Content available (full text)

Adaptive trimmed likelihood estimation in regression

80%
EN
In this paper we derive an asymptotic normality result for an adaptive trimmed likelihood estimator of regression, starting from initial high-breakdown-point robust regression estimates. The approach leads to quickly and easily computed robust and efficient estimates for regression. A highlight of the method is that, in a single algorithm, it tends automatically to expose the outliers and give least squares estimates with the outliers removed. The idea is to begin with a rapidly computed consistent robust estimator such as the least median of squares (LMS), least trimmed squares (LTS), or, for example, the more recent MM estimators of Yohai. Such estimators are now standard in statistical computing packages, for example S-PLUS or R. In addition to the asymptotics, we provide data analyses supporting the new adaptive approach. This approach appears to work well on a number of data sets and is quicker than the related brute-force adaptive regression approach described in Clarke (2000). The current approach builds on the work of Bednarski and Clarke (2002), which considered the asymptotics for the location estimator only.
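The "expose the outliers, then least squares on the rest" behaviour can be illustrated with a toy LTS-type procedure: keep the h observations with the smallest squared residuals and refit, iterating to convergence. This is a minimal sketch of the general idea on made-up data, not the paper's adaptive trimmed likelihood estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
x = rng.uniform(0, 10, n)
y = 2.0 + 3.0 * x + rng.standard_normal(n)   # true line: intercept 2, slope 3
y[:10] += 30.0                               # gross outliers in the first 10 points

X = np.column_stack([np.ones(n), x])
h = int(0.75 * n)                            # coverage: keep 75% of the data

beta = np.linalg.lstsq(X, y, rcond=None)[0]  # start from plain OLS
for _ in range(50):                          # concentration steps
    r2 = (y - X @ beta) ** 2
    keep = np.argsort(r2)[:h]                # h smallest squared residuals
    new_beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    if np.allclose(new_beta, beta):
        break
    beta = new_beta
```

After a few concentration steps the ten shifted points fall outside the kept subset, and `beta` is essentially the least-squares fit on the clean data.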
5
70%
EN
A long queue of vehicles at the gate of a marine terminal is a common traffic phenomenon in a port city, and it sometimes causes problems in urban traffic. To address this issue, we first need accurate models for estimating such a vehicle queue length. In this paper, we compare the existing methods in a case study and evaluate their advantages and disadvantages. In particular, we develop a simulation-based regression model using the micro traffic simulation software PARAMICS. In simulation, it is found that the queue transient process follows a natural logarithm curve. Based on these curves, we develop a queue length estimation model. In the numerical experiment, the proposed model exhibits better estimation accuracy than the other existing methods.
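The reported natural-logarithm shape of the queue transient suggests a simple regression of the form Q(t) = a·ln(t) + b. The sketch below fits that curve by least squares to synthetic queue-length observations; the data and coefficients are made up for illustration and are not PARAMICS output or the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(1, 61)                         # minutes since the gate queue started
# Synthetic observed queue lengths following a noisy log curve.
q = 12.0 * np.log(t) + 5.0 + rng.normal(0.0, 2.0, t.size)

# Least-squares fit of Q(t) = a * ln(t) + b.
A = np.column_stack([np.log(t), np.ones_like(t, dtype=float)])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, q, rcond=None)
predict = lambda t_new: a_hat * np.log(t_new) + b_hat
```

Once `a_hat` and `b_hat` are estimated from a simulated transient, `predict` gives the queue length at any elapsed time, which is the role the regression model plays in the paper's estimation scheme.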
6
EN
We review the properties and applications of M-estimators with redescending score functions. For regression analysis, some of these redescending M-estimators can attain the maximum breakdown point possible in this setup. Moreover, some of them solve the problem of maximizing efficiency under a bounded influence function when the regression coefficients and the scale parameter are estimated simultaneously. Hence redescending M-estimators satisfy several outlier robustness properties. However, there is a problem in calculating redescending M-estimators in regression. While in the location-scale case, for example, the Cauchy estimator has only one local extremum, this is not the case in regression, where there are several local minima reflecting several substructures in the data. This is why redescending M-estimators can be used to detect substructures in data, i.e. they can be used in cluster analysis. If the starting point of the iteration used to calculate the estimator comes from a substructure, the closest minimum corresponds to that substructure. This property can be used to construct an edge- and corner-preserving smoother for noisy images, so there are applications in image analysis as well.
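The redescending idea is easiest to see in the location case the review mentions. The sketch below uses Tukey's biweight, a redescending score (the review's example is the Cauchy estimator; the biweight is substituted here because its weights reach exactly zero), solved by iterative reweighting on made-up data with gross outliers.

```python
import numpy as np

def biweight_location(x, c=4.685, tol=1e-8, max_iter=100):
    """Location M-estimate with Tukey's biweight redescending score.

    Points further than c scale units from the current estimate get
    weight exactly 0, so gross outliers are ignored entirely.
    c = 4.685 is the usual tuning constant for this score.
    """
    x = np.asarray(x, dtype=float)
    mu = np.median(x)                          # robust starting point
    s = np.median(np.abs(x - mu)) / 0.6745     # MAD-based scale estimate
    for _ in range(max_iter):
        u = (x - mu) / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # redescending weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

data = np.concatenate([np.random.default_rng(5).normal(0.0, 1.0, 200),
                       [50.0, 60.0, 70.0]])    # three gross outliers
mu_hat = biweight_location(data)
```

The three outliers pull the sample mean visibly away from zero, while the redescending estimate stays near the bulk of the data; the dependence on the starting point is exactly what the review exploits for cluster detection.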