Results found: 4


Search results

Searched for:
keywords: Tikhonov regularization

1. [EN]
In this work, the problem of coil design is studied. It is assumed that the structure of the coil is known (i.e., the positions of simple circular coils are fixed) and the problem is to find the current distribution that produces the required magnetic field in a given region. The unconstrained version of the problem (arbitrary currents are allowed) can be formulated as a Least-Squares (LSQ) problem. However, the results obtained by solving the LSQ problem are usually useless from the application point of view. Moreover, for higher dimensions the problem is ill-conditioned. To overcome these difficulties, a regularization term is sometimes added to the cost function in order to make the solution smoother. The regularization technique, however, produces suboptimal solutions. In this work, we propose to solve the problem under study using the constrained Quadratic Programming (QP) method. The methods are compared in terms of the quality of the magnetic field obtained and the power of the designed coil. Several 1D and 2D examples are considered. It is shown that, for the same value of the maximum current, the QP method provides solutions with a higher-quality magnetic field than the regularization method.
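A minimal illustrative sketch of the two approaches contrasted in this abstract (not the paper's code or data): a toy matrix A standing in for the map from coil currents to the sampled field, with the Tikhonov-regularized least-squares solution computed from the normal equations and the constrained alternative expressed as least squares under a box constraint on the currents. The matrix, sizes, noise level, and the choice of the current bound i_max are all assumptions made only for the illustration.

# Hypothetical sketch: Tikhonov-regularized LSQ vs. bound-constrained LSQ
# (a simple quadratic program) for a toy current-distribution problem.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))              # toy currents-to-field matrix
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(40)

# Tikhonov regularization: minimize ||Ax - b||^2 + alpha * ||x||^2,
# solved via the normal equations (A^T A + alpha I) x = A^T b.
alpha = 1e-2
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)

# Constrained formulation: minimize ||Ax - b||^2 subject to |x_i| <= i_max,
# a quadratic program with box constraints on the currents.
i_max = np.max(np.abs(x_tik))                  # same maximum current for a fair comparison
x_qp = lsq_linear(A, b, bounds=(-i_max, i_max)).x

print("field error, Tikhonov   :", np.linalg.norm(A @ x_tik - b))
print("field error, constrained:", np.linalg.norm(A @ x_qp - b))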
2. [EN] (100%)
Supervised learning methods are powerful techniques to learn a function from a given set of labeled data, the so-called training data. In this paper the support vector machines approach is applied to an image classification task. Starting with the corresponding Tikhonov regularization problem, reformulated as a convex optimization problem, we introduce a conjugate dual problem to it and prove that, whenever strong duality holds, the function to be learned can be expressed via the dual optimal solutions. Corresponding dual problems are then derived for different loss functions. The theoretical results are applied by numerically solving a classification task using high dimensional real-world data in order to obtain optimal classifiers. The results demonstrate the excellent performance of support vector classification for this particular problem.
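A hedged sketch of the kind of classifier discussed here (not the paper's experiment or data): a support vector classifier, i.e. a Tikhonov-regularized hinge-loss problem solved in dual form, where the learned function is a kernel expansion over the dual optimal coefficients. The synthetic features, labels, and parameter values below are assumptions made purely for illustration.

# Hypothetical sketch: support vector classification on synthetic
# image-like feature vectors; scikit-learn solves the dual problem.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 64))             # stand-in for 8x8 image features
y = (X[:, :8].sum(axis=1) > 0).astype(int)     # synthetic binary labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=1.0)                 # C acts as the inverse regularization weight
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
# clf.dual_coef_ holds the nonzero dual coefficients (support vectors),
# i.e. the dual optimal solution through which the classifier is expressed.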
3. [EN] (88%)
To obtain smooth solutions to ill-posed problems, the standard Tikhonov regularization method is most often used. For the practical choice of the regularization parameter α we can then employ the well-known L-curve criterion, based on the L-curve, which is a plot of the norm of the regularized solution versus the norm of the corresponding residual for all valid regularization parameters. This paper proposes a new criterion for choosing the regularization parameter α, based on the so-called U-curve. A comparison of the two methods on numerical examples is also included.
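A small sketch of the two norms that both criteria in this abstract are built from (not the paper's examples): for a toy ill-conditioned system, trace the residual norm ||Ax_α - b|| and the solution norm ||x_α|| over a grid of regularization parameters. Plotting them on log-log axes gives the L-curve, whose corner is the L-curve choice of α; the U-curve criterion proposed in the paper is constructed from the same two quantities. The test matrix, noise level, and α grid are assumptions for illustration only.

# Hypothetical sketch: residual and solution norms of Tikhonov solutions
# over a grid of alpha values, i.e. the raw material of the L-curve.
import numpy as np

rng = np.random.default_rng(0)
n = 30
U, s, Vt = np.linalg.svd(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -8, n)) @ Vt    # severely ill-conditioned matrix
b = A @ np.ones(n) + 1e-6 * rng.standard_normal(n)

alphas = np.logspace(-12, 0, 50)
residual_norms, solution_norms = [], []
for alpha in alphas:
    x_alpha = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
    residual_norms.append(np.linalg.norm(A @ x_alpha - b))
    solution_norms.append(np.linalg.norm(x_alpha))
# log(residual_norms) vs. log(solution_norms) is the L-curve; its corner
# marks the L-curve choice of the regularization parameter.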
4. [EN] (75%)
Many discrepancy principles are known for choosing the parameter $\alpha$ in the regularized operator equation $(T^*T + \alpha I)x_\alpha^\delta = T^*y^\delta$, $\|y - y^\delta\| \le \delta$, in order to approximate the minimal-norm least-squares solution of the operator equation $Tx = y$. We consider a class of discrepancy principles for choosing the regularization parameter when $T^*T$ and $T^*y^\delta$ are approximated by $A_n$ and $z_n^\delta$ respectively, with $A_n$ not necessarily self-adjoint. This procedure generalizes the work of Engl and Neubauer (1985), and particular cases of the results are applicable to the regularized projection method as well as to a degenerate kernel method considered by Groetsch (1990).
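For orientation, a sketch of the simplest classical instance of such a rule, the Morozov discrepancy principle for a finite-dimensional $T$ (not the generalized class treated in the paper): choose $\alpha$ so that the residual $\|T x_\alpha^\delta - y^\delta\|$ matches $\tau\delta$ for some $\tau \ge 1$. The matrix, noise model, and the value of $\tau$ below are assumptions for illustration only; the residual is monotone in $\alpha$, so a simple bisection on $\log\alpha$ suffices.

# Hypothetical sketch: Morozov's discrepancy principle for Tikhonov
# regularization of a toy ill-conditioned linear system.
import numpy as np

rng = np.random.default_rng(0)
n = 30
T = rng.standard_normal((n, n)) @ np.diag(np.logspace(0, -6, n))
x_true = np.ones(n)
delta = 1e-3
y_delta = T @ x_true + delta * rng.standard_normal(n) / np.sqrt(n)

def residual(alpha):
    # Solve (T^T T + alpha I) x_alpha = T^T y_delta and return the residual norm.
    x_alpha = np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y_delta)
    return np.linalg.norm(T @ x_alpha - y_delta)

# The residual grows with alpha, so bisect on log(alpha) until it matches tau*delta.
tau, lo, hi = 1.5, 1e-16, 1e2
for _ in range(100):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if residual(mid) < tau * delta else (lo, mid)
alpha_star = np.sqrt(lo * hi)
print("chosen alpha:", alpha_star, "residual:", residual(alpha_star))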