Full-text resources of PLDML and other domain databases are now available in the new Library of Science.
Visit https://bibliotekanauki.pl

Results found: 2


Search results

Searched for:
in keywords: kernel estimators

1
Content available: remote

Bayes sharpening of imprecise information

EN
A complete algorithm is presented for the sharpening of imprecise information, based on the methodology of kernel estimators and the Bayes decision rule, including conditioning factors. The use of the Bayes rule with a nonsymmetrical loss function makes it possible to account for the different consequences of underestimating and overestimating a sharp value (a real number), while minimizing potential losses. A conditional approach yields a more precise result by using information entered as the assumed (e.g. current) values of conditioning factors of continuous and/or binary types. The nonparametric methodology of statistical kernel estimators frees the investigated procedure from arbitrary assumptions concerning the forms of the distributions characterizing both the imprecise information and the conditioning random variables. The concept presented here is universal and can be applied to a wide range of tasks in contemporary engineering, economics, and medicine.
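To illustrate the general idea described in this abstract, the sketch below estimates the distribution of imprecise observations with a Gaussian kernel estimator and selects the sharp value minimizing expected asymmetric linear loss; under that loss the Bayes-optimal point is the quantile determined by the loss weights. The function names, the Gaussian kernel, Silverman's bandwidth rule, and the linear loss are illustrative assumptions, not the paper's exact procedure, which additionally handles conditioning factors.

import numpy as np

def gaussian_kde(samples, bandwidth):
    """Kernel density estimator with a Gaussian kernel (illustrative choice)."""
    samples = np.asarray(samples, dtype=float)

    def density(x):
        u = (np.atleast_1d(x)[:, None] - samples[None, :]) / bandwidth
        return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

    return density

def bayes_sharpen(samples, under_loss=1.0, over_loss=1.0, bandwidth=None, grid_size=2000):
    """Sharp value minimizing expected asymmetric linear loss under the kernel estimate.

    With losses proportional to under_loss for underestimating and over_loss for
    overestimating, the Bayes-optimal point is the under_loss / (under_loss + over_loss)
    quantile of the estimated distribution.
    """
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:
        # Silverman's rule of thumb as a simple default bandwidth
        bandwidth = 1.06 * samples.std(ddof=1) * len(samples) ** (-1 / 5)
    density = gaussian_kde(samples, bandwidth)
    grid = np.linspace(samples.min() - 3 * bandwidth, samples.max() + 3 * bandwidth, grid_size)
    cdf = np.cumsum(density(grid))
    cdf /= cdf[-1]
    target = under_loss / (under_loss + over_loss)
    return grid[np.searchsorted(cdf, target)]

# Usage: imprecise measurements where underestimation is twice as costly as overestimation
values = np.random.default_rng(0).normal(10.0, 1.5, size=200)
print(bayes_sharpen(values, under_loss=2.0, over_loss=1.0))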
2
Content available: remote

A complete gradient clustering algorithm formed with kernel estimators

EN
The aim of this paper is to provide a gradient clustering algorithm in its complete form, suitable for direct use without requiring deeper statistical knowledge. The values of all parameters are calculated effectively using optimizing procedures. Moreover, an illustrative analysis of the meaning of the particular parameters is given, followed by the effects of possible modifications with respect to their primarily assigned optimal values. The proposed algorithm does not demand strict assumptions regarding the desired number of clusters, which allows the obtained number to be better suited to the real data structure. Moreover, a feature specific to it is the possibility of influencing the proportion between the number of clusters in areas where data elements are dense and in sparse regions. Finally, by detecting one-element clusters, the algorithm allows atypical elements to be identified, enabling their elimination or possible assignment to bigger clusters, thus increasing the homogeneity of the data set.
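The sketch below illustrates the gradient (mean-shift-style) clustering idea on a Gaussian kernel density estimate: each point climbs the estimated density, points converging to the same mode form one cluster, and one-element clusters mark atypical elements. The kernel choice, fixed bandwidth, step count, and merge radius are illustrative assumptions; the complete algorithm from the paper additionally computes all parameters via optimizing procedures, which is omitted here.

import numpy as np

def gradient_cluster(data, bandwidth, n_steps=100, merge_radius=None):
    """Mean-shift-style gradient clustering on a Gaussian kernel density estimate.

    Each point ascends the estimated density; points converging to the same mode
    form one cluster. One-element clusters indicate atypical (outlying) elements.
    """
    data = np.asarray(data, dtype=float)
    points = data.copy()
    if merge_radius is None:
        merge_radius = bandwidth / 2.0  # heuristic merge radius, not taken from the paper

    for _ in range(n_steps):
        # Gaussian kernel weights between the moving points and the fixed data set
        diff = points[:, None, :] - data[None, :, :]
        w = np.exp(-0.5 * (diff ** 2).sum(axis=2) / bandwidth ** 2)
        # Mean-shift update: weighted average of the data points
        points = (w[:, :, None] * data[None, :, :]).sum(axis=1) / w.sum(axis=1, keepdims=True)

    # Group points whose modes ended up within merge_radius of one another
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    for i in range(len(points)):
        if labels[i] >= 0:
            continue
        close = np.linalg.norm(points - points[i], axis=1) < merge_radius
        labels[close & (labels < 0)] = next_label
        next_label += 1
    return labels

# Usage: two dense groups plus one outlier, which ends up in a one-element cluster
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2)), [[10.0, 10.0]]])
print(gradient_cluster(X, bandwidth=0.5))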