The full-text resources of PLDML and other domain databases are now available in the new Biblioteka Nauki.
Visit https://bibliotekanauki.pl

Results found: 3


Search results

Searched for:
in keywords: generalization control
1. A fuzzy if-then rule-based nonlinear classifier
This paper introduces a new classifier design method that is based on a modification of the classical Ho-Kashyap procedure. The proposed method uses the absolute error, rather than the squared error, to design a linear classifier. Additionally, easy control of the generalization ability and robustness to outliers are obtained. Next, an extension to a nonlinear classifier by the mixture-of-experts technique is presented. Each expert is represented by a fuzzy if-then rule in the Takagi-Sugeno-Kang form. Finally, examples are given to demonstrate the validity of the introduced method.
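The abstract builds on the classical Ho-Kashyap procedure. As background, a minimal sketch of that standard squared-error algorithm is given below; it is not the paper's absolute-error variant, and the toy data, step size, and iteration count are invented for illustration:

```python
import numpy as np

def ho_kashyap(Y, rho=0.5, n_iter=100, b0=1.0):
    """Classical Ho-Kashyap: minimise ||Y w - b||^2 jointly over the
    weight vector w and the margin vector b, keeping b > 0."""
    n, d = Y.shape
    b = np.full(n, b0)
    Y_pinv = np.linalg.pinv(Y)
    for _ in range(n_iter):
        w = Y_pinv @ b          # least-squares weights for current margins
        e = Y @ w - b           # error vector
        b = b + rho * (e + np.abs(e))  # only positive errors enlarge b
    return w, b

# toy linearly separable problem: each row is [x, 1], class-2 rows negated
X1 = np.array([[2.0, 2.0], [3.0, 3.0]])      # class +1
X2 = np.array([[-2.0, -2.0], [-3.0, -3.0]])  # class -1
Y = np.vstack([np.hstack([X1, np.ones((2, 1))]),
               -np.hstack([X2, np.ones((2, 1))])])
w, b = ho_kashyap(Y)
print(bool(np.all(Y @ w > 0)))  # every normalised pattern on the correct side
```

Replacing the squared error in this scheme with the absolute error, as the abstract describes, is what yields the outlier robustness of the proposed method.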
2.
A new learning method tolerant of imprecision is introduced and used in neuro-fuzzy modelling. The proposed method makes it possible to eliminate an intrinsic inconsistency of neuro-fuzzy modelling, where zero-tolerance learning is used to obtain a fuzzy model tolerant of imprecision. This new method can be called ε-insensitive learning, where, in order to fit the fuzzy model to real data, the ε-insensitive loss function is used. ε-insensitive learning leads to a model with minimal Vapnik-Chervonenkis dimension, which results in an improved generalization ability of the system. Another advantage of the proposed method is its robustness against outliers. This paper introduces two approaches to solving the ε-insensitive learning problem. The first approach leads to a quadratic programming problem with bound constraints and one linear equality constraint. The second approach leads to the problem of solving a system of linear inequalities. Two computationally efficient numerical methods for ε-insensitive learning are proposed. Finally, examples are given to demonstrate the validity of the introduced methods.
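The ε-insensitive loss function referred to here is the standard one from Vapnik's support vector regression: residuals inside a tolerance tube of width ε cost nothing, and residuals outside it are penalised linearly. A minimal sketch (the sample residuals and ε value are invented for the example):

```python
import numpy as np

def eps_insensitive_loss(residuals, eps=0.1):
    """Vapnik's eps-insensitive loss: zero inside the tolerance tube
    |r| <= eps, linear (absolute-error) penalty outside it."""
    return np.maximum(np.abs(residuals) - eps, 0.0)

r = np.array([-0.3, -0.05, 0.0, 0.08, 0.5])
print(eps_insensitive_loss(r))  # zero inside the tube, |r| - eps outside
```

The linear (rather than quadratic) growth outside the tube is what gives the robustness to outliers mentioned in the abstract, and the dead zone is what tolerates imprecision in the data.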
3. Kernel Ho-Kashyap classifier with generalization control
This paper introduces a new classifier design method based on a kernel extension of the classical Ho-Kashyap procedure. The proposed method uses an approximation of the absolute error rather than the squared error to design a classifier, which leads to robustness against outliers and a better approximation of the misclassification error. Additionally, easy control of the generalization ability is obtained using the structural risk minimization induction principle from statistical learning theory. Finally, examples are given to demonstrate the validity of the introduced method.
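A kernel extension of a linear procedure replaces inner products between patterns with kernel evaluations, so the classifier becomes nonlinear in the input space while the training problem stays linear in the expansion coefficients. The sketch below uses a Gaussian kernel with a simple ridge regulariser standing in for the generalization control described in the abstract; it is an illustrative simplification on invented XOR-style data, not the paper's algorithm:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-gamma * ||a_i - b_j||^2)."""
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def fit_kernel_classifier(X, y, gamma=1.0, reg=1e-3):
    """Regularised kernel least squares; the reg term is a crude stand-in
    for the capacity control the abstract attributes to SRM."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha)

# XOR-like data that no linear classifier can separate
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = fit_kernel_classifier(X, y)
print(predict(X, alpha, X))  # training points classified correctly
```

Increasing `reg` shrinks the coefficient vector and smooths the decision boundary, which is the intuition behind trading training error for generalization ability.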