Full-text PLDML resources and other subject databases are now available in the new Biblioteka Nauki: https://bibliotekanauki.pl
Results found: 4


Search results

1

Expert knowledge and data analysis for detecting advanced persistent threats

Critical infrastructures in public administration may be compromised by Advanced Persistent Threats (APTs), which today constitute one of the most sophisticated ways of stealing information. This paper presents an effective, learning-based tool that uses inductive techniques to analyze the information provided by firewall log files in an IT infrastructure and detect suspicious activity in order to mark it as a potential APT. The experiments were carried out by mixing real and synthetic data traffic to represent different proportions of normal and anomalous activity.
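The paper's tool is not described in detail here; as a rough illustration of the inductive idea — learning a profile of normal traffic from firewall-log features and flagging deviations as potential APT activity — here is a minimal sketch. The feature names, the Gaussian profile, and the synthetic data are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Toy "firewall log" features per traffic window:
# [connections_per_min, distinct_ports, denied_ratio]
rng = np.random.default_rng(1)
normal = rng.normal([20, 3, 0.05], [5, 1, 0.02], size=(500, 3))
apt    = rng.normal([25, 40, 0.30], [5, 5, 0.05], size=(20, 3))  # port-scanning pattern

# Inductive step: fit a simple per-feature Gaussian model of normal traffic
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def suspicion_score(x):
    """Distance from the normal-traffic profile (diagonal covariance);
    large scores are flagged as potential APT activity."""
    return np.sqrt((((x - mu) / sigma) ** 2).sum(axis=-1))

# Threshold at the 99th percentile of scores seen on normal traffic
threshold = np.quantile(suspicion_score(normal), 0.99)
flags = suspicion_score(apt) > threshold
print(f"flagged {flags.sum()} of {len(apt)} anomalous windows")
```

In practice the paper mixes real and synthetic traffic in varying proportions; the same evaluation can be reproduced here by changing the sizes of the two synthetic samples.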
2

Bias-variance decomposition in Genetic Programming

We study properties of Linear Genetic Programming (LGP) through several regression and classification benchmarks. In each problem, we decompose the results into bias and variance components, and explore the effect of varying certain key parameters on the overall error and its decomposed contributions. These parameters are the maximum program size, the initial population, and the function set used. We confirm and quantify several insights into the practical usage of GP, most notably that (a) the variance between runs is primarily due to initialization rather than the selection of training samples, (b) parameters can be reasonably optimized to obtain gains in efficacy, and (c) functions detrimental to evolvability are easily eliminated, while functions well suited to the problem can greatly improve performance; therefore, larger and more diverse function sets are always preferable.
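The decomposition the abstract refers to can be estimated by retraining a learner on many independent training sets. The sketch below uses a polynomial fit as a stand-in learner (the paper does this for LGP programs; the target function and noise level here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
f = lambda x: np.sin(x)                     # true target function
x_test = np.linspace(0, 2 * np.pi, 50)

preds = []
for _ in range(200):                        # 200 independent training sets
    x_tr = rng.uniform(0, 2 * np.pi, 30)
    y_tr = f(x_tr) + rng.normal(0, 0.3, 30)
    coeffs = np.polyfit(x_tr, y_tr, deg=3)  # the "learner"
    preds.append(np.polyval(coeffs, x_test))
preds = np.array(preds)

# Decomposition: E[(pred - f)^2] = bias^2 + variance (noise-free target)
mean_pred = preds.mean(axis=0)
bias2    = ((mean_pred - f(x_test)) ** 2).mean()
variance = preds.var(axis=0).mean()
total    = ((preds - f(x_test)) ** 2).mean()
print(bias2, variance, total)
```

With the empirical mean used in both terms, the identity `total = bias2 + variance` holds exactly; varying the learner's capacity (here, `deg`) shows the usual trade-off the paper measures for program size and function sets.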
3
Open Mathematics | 2016 | vol. 14 | no. 1 | pp. 705-722
A major drawback of orthogonal frequency division multiplexing (OFDM) signals is their high peak-to-average power ratio (PAPR). Partial transmit sequences (PTS) is a popular PAPR reduction method with good performance, but its search complexity is high. In this paper, to reduce the PTS search complexity, we propose a new technique based on biogeography-based optimization (BBO). More specifically, we present a new Generalized Oppositional Biogeography-Based Optimization (GOBBO) algorithm enhanced with Oppositional-Based Learning (OBL) techniques. We apply both the original BBO and the new GOBBO to the PTS problem. The GOBBO-PTS method is compared with other PTS schemes for PAPR reduction found in the literature. The simulation results show that GOBBO and BBO are in general highly efficient at producing significant PAPR reduction while reducing the PTS search complexity.
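To make the problem concrete, the sketch below computes the PAPR of an OFDM symbol and runs an exhaustive PTS search with binary phase factors. This is the baseline whose exponential cost (`len(phases)**V` candidates) BBO/GOBBO aim to cut down; the subcarrier count, partitioning, and phase set are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
N = 64  # hypothetical number of subcarriers
# Random QPSK symbols, one per subcarrier
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)

def papr_db(x):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def pts_min_papr(X, V=4, phases=(1, -1)):
    """Exhaustive PTS: partition the subcarriers into V sub-blocks,
    weight each sub-block's time-domain signal by a phase factor, and
    keep the combination with the lowest PAPR."""
    blocks = [np.zeros(len(X), dtype=complex) for _ in range(V)]
    for k, sym in enumerate(X):
        blocks[k % V][k] = sym              # interleaved partition
    time_blocks = [np.fft.ifft(b) for b in blocks]
    best_papr, best_b = np.inf, None
    for b in product(phases, repeat=V):     # len(phases)**V candidates
        candidate = sum(p * tb for p, tb in zip(b, time_blocks))
        val = papr_db(candidate)
        if val < best_papr:
            best_papr, best_b = val, b
    return best_papr, best_b

print("original PAPR:", round(papr_db(np.fft.ifft(X)), 2), "dB")
print("PTS PAPR:     ", round(pts_min_papr(X)[0], 2), "dB")
```

Because the all-ones phase vector is among the candidates, the PTS result can never be worse than the unmodified symbol; a metaheuristic like GOBBO searches the same space with far fewer PAPR evaluations.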
4

Prediction of time series by statistical learning: general losses and fast rates

We establish rates of convergence in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence √(d/n) for the Gibbs estimator under the absolute loss were given in a previous work [7], where n is the sample size and d the dimension of the set of predictors. Under the same weak dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the classical penalized ERM procedure. We apply this method to quantile forecasting of the French GDP. Under additional conditions on the loss functions (satisfied by the quadratic loss function) and for uniformly mixing processes, we prove that the Gibbs estimator actually achieves fast rates of convergence d/n. We discuss the optimality of these different rates, pointing out references to lower bounds when they are available. In particular, these results generalize the results of [29] on sparse regression estimation to some autoregression settings.
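Schematically, the two regimes in the abstract are excess-risk bounds of the form below (a paraphrase with a generic constant C depending on the loss and the dependence coefficients, not the paper's exact statement):

\[
\mathbb{E}\,R(\hat f) - \inf_{f \in \Theta} R(f) \le C\,\sqrt{\frac{d}{n}} \quad \text{(slow rate: convex Lipschitz loss, weak dependence)},
\]
\[
\mathbb{E}\,R(\hat f) - \inf_{f \in \Theta} R(f) \le C\,\frac{d}{n} \quad \text{(fast rate: quadratic-type loss, uniform mixing)},
\]

where \(\hat f\) is the Gibbs estimator and \(\Theta\) the d-dimensional set of predictors.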