Results found: 6

Search results

Search query:
in keywords: dimension reduction
1
100%
EN
Dimension reduction and feature selection are fundamental tools for machine learning and data mining. Most existing methods, however, assume that objects are represented by a single vectorial descriptor. In reality, some description methods assign unordered sets or graphs of vectors to a single object, where each vector is assumed to have the same number of dimensions, but is drawn from a different probability distribution. Moreover, some applications (such as pose estimation) may require the recognition of individual vectors (nodes) of an object. In such cases it is essential that the nodes within a single object remain distinguishable after dimension reduction. In this paper we propose new discriminant analysis methods that are able to satisfy two criteria at the same time: separation between classes and separation between the nodes of an object instance. We analyze and evaluate our methods on several different synthetic and real-world datasets.
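To make the two-criterion idea concrete: a Fisher-style formulation might blend a between-class scatter matrix with a between-node scatter matrix in the numerator of the usual LDA objective. The sketch below is an assumed illustration of that idea, not the paper's actual method; the alpha blend, the total-scatter denominator and all function names are inventions for illustration.

import numpy as np
from scipy.linalg import eigh

def between_scatter(X, labels):
    """Between-group scatter matrix for the given grouping labels."""
    mu = X.mean(axis=0)
    S = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(labels):
        Xg = X[labels == g]
        d = (Xg.mean(axis=0) - mu)[:, None]
        S += len(Xg) * (d @ d.T)
    return S

def two_criterion_projection(X, class_labels, node_labels, k, alpha=0.5):
    # Blend class separation and node separation into one numerator;
    # alpha weights the two criteria (an illustrative choice).
    Sb = ((1 - alpha) * between_scatter(X, class_labels)
          + alpha * between_scatter(X, node_labels))
    # Total scatter, slightly regularized, plays the LDA denominator role.
    St = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    # Generalized eigenproblem, as in classical LDA; keep the top-k directions.
    vals, vecs = eigh(Sb, St)
    return vecs[:, np.argsort(vals)[::-1][:k]]

# Usage with random stand-in data: 6-D vectors, 3 classes, 4 node types.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
W = two_criterion_projection(X, rng.integers(0, 3, 120), rng.integers(0, 4, 120), k=2)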
2
Content available remote

Research Article. On memory, dimension, and atmospheric teleconnections

100%
EN
Using reanalysed atmospheric data and applying a data-driven multiscale approximation to non-stationary dynamical processes, we undertake a systematic examination of the role of memory and dimensionality in defining the quasi-stationary states of the troposphere over the recent decades. We focus on the role of teleconnections characterised by either zonally-oriented wave trains or meridional dipolar structures. We consider the impact of various strategies for dimension reduction based on principal component analysis, diagonalisation and truncation. We include the impact of memory by consideration of Bernoulli, Markovian and non-Markovian processes. We explicitly separate barotropic and baroclinic processes a priori and then implement a comprehensive sensitivity analysis to the number and type of retained modes. Our results show the importance of explicitly mitigating the deleterious impacts of signal degradation through ill-conditioning and undersampling, in preference to simple strategies based on thresholds in terms of explained variance. In both hemispheres, the results obtained for the dominant tropospheric modes depend critically on the extent to which the higher-order modes are retained, the number of free model parameters to be fitted, and whether memory effects are taken into account. Our study identifies the primary role of the circumglobal teleconnection pattern in both hemispheres for Bernoulli and Markov processes, and the transient nature and zonal structure of the Southern Hemisphere patterns in relation to their Northern Hemisphere counterparts. For both hemispheres, overfitted models yield structures consistent with the major teleconnection modes (NAO, PNA and SAM), which give way to zonally oriented wave trains when either memory effects are ignored or the dimension is reduced via diagonalisation. Where baroclinic processes are emphasised, circumpolar wave trains are manifest.
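For reference, the PCA-plus-truncation step that such analyses start from can be sketched as follows. This is a generic anomaly-field PCA, a minimal sketch rather than the paper's pipeline, with all names assumed; the number of retained modes is chosen explicitly rather than by an explained-variance threshold, in the spirit of the abstract's caution about threshold-based truncation.

import numpy as np

def pca_truncate(X, n_modes):
    """Project an anomaly field onto its leading principal components.

    X is a (time, space) matrix; n_modes is the number of modes
    retained after truncation.
    """
    Xc = X - X.mean(axis=0)                      # remove the time mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]           # PC time series
    eofs = Vt[:n_modes]                          # spatial patterns (EOFs)
    explained = (s**2 / np.sum(s**2))[:n_modes]  # variance fraction per mode
    return pcs, eofs, explained

# Usage with a random stand-in field: 400 time steps over 1000 grid points.
pcs, eofs, frac = pca_truncate(np.random.default_rng(0).normal(size=(400, 1000)), n_modes=10)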
3
Content available remote

Random projection RBF nets for multidimensional density estimation

88%
EN
The dimensionality and the amount of data that need to be processed when intensive data streams are observed grow rapidly together with the development of sensor arrays, CCD and CMOS cameras and other devices. The aim of this paper is to propose an approach to dimensionality reduction as a first stage of training RBF nets. As a vehicle for presenting the ideas, the problem of estimating multivariate probability densities is chosen. The linear projection method is briefly surveyed. Using random projections as the first (additional) layer, we are able to reduce the dimensionality of the input data. Bounds on the accuracy of RBF nets equipped with a random projection layer, in comparison to RBF nets without dimensionality reduction, are established. Finally, the results of simulations concerning multidimensional density estimation are briefly reported.
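A minimal sketch of the two-stage idea, assuming a Gaussian random projection and standing in for the RBF net with its simplest special case, a fixed-bandwidth Gaussian kernel density estimator (an RBF net with equal output weights reduces to this form); all names and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def random_projection(X, k):
    """Gaussian random projection of d-dimensional rows down to k dimensions."""
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ R

def gaussian_kde(train, query, h):
    """Fixed-bandwidth Gaussian-kernel density estimate in the reduced space."""
    k = train.shape[1]
    d2 = ((query[:, None, :] - train[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * h**2)).mean(axis=1) / (2 * np.pi * h**2) ** (k / 2)

# Usage: project first, then estimate the density in the reduced space.
X = rng.normal(size=(500, 100))    # high-dimensional sample
Z = random_projection(X, 10)       # random projection "layer"
p = gaussian_kde(Z, Z[:5], h=0.8)  # density at a few query points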
4
EN
The paper deals with the issue of reducing the dimension and size of a data set (random sample) for exploratory data analysis procedures. The concept of the algorithm investigated here is based on a linear transformation to a space of smaller dimension that preserves, as far as possible, the distances between particular elements. Elements of the transformation matrix are computed using the parallel fast simulated annealing metaheuristic. Moreover, data set elements which undergo a significant change in location relative to the others are eliminated or down-weighted. The presented method can have universal application in a wide range of data exploration problems, offering flexible customization, applicability in dynamic data environments, and performance comparable to or better than that of principal component analysis. Its positive features were verified in detail for the fundamental exploratory tasks of clustering, classification and detection of atypical elements (outliers).
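A minimal sketch of the underlying idea, assuming a plain serial simulated-annealing loop that minimizes the squared distance distortion of a linear projection; the paper's parallel fast simulated annealing and its down-weighting of displaced elements are not modelled, and all names and schedule constants are illustrative.

import numpy as np
from scipy.spatial.distance import pdist

def anneal_projection(X, k, iters=2000, t0=1.0, seed=0):
    """Search for a d x k matrix A whose projection X @ A approximately
    preserves pairwise distances, via basic simulated annealing."""
    rng = np.random.default_rng(seed)
    d_orig = pdist(X)  # pairwise distances in the original space

    def stress(A):
        return np.sum((pdist(X @ A) - d_orig) ** 2)

    A = rng.normal(size=(X.shape[1], k))
    e = stress(A)
    for i in range(iters):
        T = t0 * (1.0 - i / iters) + 1e-9  # linear cooling schedule
        cand = A + rng.normal(scale=0.05, size=A.shape)
        e_cand = stress(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / T).
        if e_cand < e or rng.random() < np.exp(-(e_cand - e) / T):
            A, e = cand, e_cand
    return A

# Usage with random stand-in data: 30-D points projected down to 3-D.
A = anneal_projection(np.random.default_rng(1).normal(size=(80, 30)), k=3)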
5
Content available remote

Analysis of correlation based dimension reduction methods

88%
EN
Dimension reduction is an important topic in data mining and machine learning. In particular, dimension reduction combined with feature fusion is an effective preprocessing step when the data are described by multiple feature sets. Canonical Correlation Analysis (CCA) and Discriminative Canonical Correlation Analysis (DCCA) are feature fusion methods based on correlation. They differ in that DCCA is a supervised method utilizing class label information, while CCA is an unsupervised method. It has been shown that the classification performance of DCCA is superior to that of CCA owing to the discriminative power gained from class label information. On the other hand, Linear Discriminant Analysis (LDA) is a supervised dimension reduction method, and it is known to be a special case of CCA. In this paper, we analyze the relationship between DCCA and LDA, showing that the projection directions obtained by DCCA are equal, up to an orthogonal transformation, to the ones obtained from LDA. Using the relation with LDA, we propose a new method that can enhance the performance of DCCA. The experimental results show that the proposed method exhibits better classification performance than the original DCCA.
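The LDA-as-special-case-of-CCA relation mentioned here is easy to check empirically: running CCA between the features and a one-hot class-indicator matrix should yield an embedding that agrees with LDA's up to sign and scale (approximately, given sklearn's iterative CCA solver). A small sketch on the Iris data, with all specifics assumed for illustration.

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
Y = np.eye(len(np.unique(y)))[y]  # one-hot class-indicator matrix

# CCA between the features and the class indicators ...
Zc = CCA(n_components=2).fit(X, Y).transform(X)
# ... versus classical LDA on the same data.
Zl = LinearDiscriminantAnalysis(n_components=2).fit(X, y).transform(X)

# Cross-correlations between the two sets of components make the
# agreement visible (values near 1 on one axis each).
corr = np.corrcoef(Zc.T, Zl.T)[:2, 2:]
print(np.round(np.abs(corr), 2))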
6
Content available remote

Directional representation of data in Linear Discriminant Analysis

75%
EN
Sometimes feature representations of measured individuals are better described by spherical coordinates than by Cartesian ones. The author proposes to introduce a preprocessing step in LDA based on the arctangent transformation of spherical coordinates. This nonlinear transformation does not change the dimension of the data, but in combination with LDA it leads to a dimension reduction if the raw data are not linearly separable. The method is presented using various examples of real and artificial data.
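A minimal sketch of the angles-then-LDA pipeline, assuming a generic arctangent construction (not necessarily the paper's exact transformation) in which each angle relates one coordinate to the norm of the preceding ones; all names and the toy data are illustrative.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def to_angles(X):
    """Map Cartesian rows to spherical-style angular features via arctangent.

    d Cartesian dimensions yield d - 1 angles.
    """
    return np.column_stack([
        np.arctan2(X[:, i + 1], np.linalg.norm(X[:, :i + 1], axis=1))
        for i in range(X.shape[1] - 1)
    ])

# Usage: two classes that differ in direction but vary widely in radius.
rng = np.random.default_rng(0)
ang = np.concatenate([rng.normal(0.3, 0.1, 100), rng.normal(1.2, 0.1, 100)])
r = rng.uniform(0.1, 10.0, 200)
X = np.column_stack([r * np.cos(ang), r * np.sin(ang)])
y = np.repeat([0, 1], 100)

Z = to_angles(X)  # 2-D Cartesian -> a single angular feature
print(LinearDiscriminantAnalysis().fit(Z, y).score(Z, y))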