A new learning method tolerant of imprecision is introduced and used in neuro-fuzzy modelling. The proposed method resolves an intrinsic inconsistency of neuro-fuzzy modelling, where zero-tolerance learning is used to obtain a fuzzy model that is tolerant of imprecision. The new method can be called ε-insensitive learning: to fit the fuzzy model to real data, the ε-insensitive loss function is used. ε-insensitive learning leads to a model with a minimal Vapnik-Chervonenkis dimension, which improves the generalization ability of the system. Another advantage of the proposed method is its robustness against outliers. This paper introduces two approaches to solving the ε-insensitive learning problem. The first leads to a quadratic programming problem with bound constraints and a single linear equality constraint; the second leads to solving a system of linear inequalities. Two computationally efficient numerical methods for ε-insensitive learning are proposed. Finally, examples are given to demonstrate the validity of the introduced methods.
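The ε-insensitive loss at the heart of this abstract can be sketched in a few lines (a minimal illustration, not the paper's full learning procedure; the function name `eps_insensitive_loss` is ours): residuals inside the ±ε tube cost nothing, and larger residuals grow only linearly, which is what limits the influence of outliers compared with the squared loss.

```python
import numpy as np

def eps_insensitive_loss(residuals, eps=0.1):
    """Vapnik's eps-insensitive loss: zero inside the +/-eps tube,
    linear growth outside it (unlike the quadratic growth of squared error)."""
    return np.maximum(np.abs(residuals) - eps, 0.0)

r = np.array([0.05, -0.08, 0.5, 3.0])      # the last residual is an outlier
loss = eps_insensitive_loss(r, eps=0.1)    # zero loss inside the tube
sq = r ** 2                                # squared loss amplifies the outlier
```

On this toy vector the two small residuals incur zero loss, while the outlier contributes 2.9 under the ε-insensitive loss versus 9.0 under the squared loss.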
2
This paper introduces a new classifier design method that is based on a modification of the classical Ho-Kashyap procedure. The proposed method uses the absolute error, rather than the squared error, to design a linear classifier. Additionally, easy control of the generalization ability and robustness to outliers are obtained. Next, an extension to a nonlinear classifier by the mixture-of-experts technique is presented. Each expert is represented by a fuzzy if-then rule in the Takagi-Sugeno-Kang form. Finally, examples are given to demonstrate the validity of the introduced method.
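For context, the classical Ho-Kashyap procedure that the abstract modifies can be sketched as follows (a minimal squared-error version under our own naming; the paper's contribution is precisely to replace the squared error with the absolute error, which this sketch does not do):

```python
import numpy as np

def ho_kashyap(X, y, n_iter=100, lr=0.5):
    """Minimal sketch of the classical Ho-Kashyap procedure.
    X: (n, d) samples, y: labels in {-1, +1}."""
    # Augment with a bias term and flip negative-class rows so that
    # correct classification corresponds to A @ w > 0 for every row.
    A = np.hstack([X, np.ones((len(X), 1))]) * y[:, None]
    b = np.ones(len(X))                  # target margins, kept positive
    A_pinv = np.linalg.pinv(A)
    w = A_pinv @ b                       # least-squares fit of A @ w to b
    for _ in range(n_iter):
        e = A @ w - b                    # error vector
        b = b + lr * (e + np.abs(e))     # only ever increase margins (b stays > 0)
        w = A_pinv @ b
    return w

# Toy linearly separable data
X = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, 2.0]])
y = np.array([-1, -1, 1, 1])
w = ho_kashyap(X, y)
pred = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```

On separable data such as this toy set, the procedure converges to a separating hyperplane; the mixture-of-experts extension in the paper then combines several such linear experts, each gated by a Takagi-Sugeno-Kang fuzzy rule.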
3
This paper introduces a new classifier design method based on a kernel extension of the classical Ho-Kashyap procedure. The proposed method uses an approximation of the absolute error rather than the squared error to design a classifier, which leads to robustness against outliers and a better approximation of the misclassification error. Additionally, easy control of the generalization ability is obtained using the structural risk minimization induction principle from statistical learning theory. Finally, examples are given to demonstrate the validity of the introduced method.
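Two of the ingredients named in the abstract can be illustrated briefly (assumptions of ours, not the paper's exact formulas): a smooth approximation of the absolute error, here taken as sqrt(e² + δ²), and the kernel substitution that replaces inner products with a kernel function, shown with the Gaussian (RBF) kernel.

```python
import numpy as np

def smooth_abs(e, delta=1e-3):
    """A common smooth approximation of |e| (an assumption here; the
    paper's exact approximation may differ): sqrt(e^2 + delta^2)."""
    return np.sqrt(e ** 2 + delta ** 2)

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix: the usual route to 'kernelizing' a
    linear procedure is replacing inner products x_i . x_j by K(x_i, x_j)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = np.array([[0.0, 0.0], [1.0, 1.0]])
K = rbf_kernel(X, X)   # 2x2 kernel matrix, ones on the diagonal
```

The smooth approximation is differentiable at zero, so gradient-based or pseudoinverse-based updates remain well defined while keeping the linear, outlier-resistant growth of the absolute error for large residuals.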
4
Fuzzy clustering can be helpful in finding natural, vague boundaries in data. The fuzzy c-means method is one of the most popular clustering methods based on the minimization of a criterion function. However, one of its greatest disadvantages is its sensitivity to noise and outliers in the data. The present paper introduces a new ε-insensitive Fuzzy C-Means (εFCM) clustering algorithm. As a special case, this algorithm includes the well-known Fuzzy C-Medians method (FCMED). The performance of the new clustering algorithm is experimentally compared with the Fuzzy C-Means (FCM) method on synthetic data with outliers and heavy-tailed, overlapping groups of data.
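The baseline that εFCM modifies can be sketched in a few lines (a minimal standard fuzzy c-means under our own naming; the paper's ε-insensitive variant replaces the squared Euclidean distance in the criterion, which this sketch does not do):

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal sketch of standard fuzzy c-means (FCM).
    Alternates prototype and membership updates for the usual criterion
    sum_ik u_ik^m ||x_k - v_i||^2."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                   # fuzzy memberships; columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)       # cluster prototypes
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1) + 1e-12
        U = 1.0 / (d2 ** (1.0 / (m - 1)))                  # membership update
        U /= U.sum(axis=0)
    return U, V

# Two well-separated synthetic groups
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(3.0, 0.1, (20, 2))])
U, V = fcm(X)
```

Because the criterion uses squared distances, a single distant outlier can drag a prototype far from its group; the εFCM idea is to blunt exactly this effect by making the distance term ε-insensitive.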