An algorithm for univariate optimization based on a linear lower bounding function is extended to the nonsmooth case by replacing the derivative with the generalized gradient. A convergence theorem is proved under a semismoothness condition. The resulting algorithm, a generalized Newton-type method, achieves global superlinear convergence.
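The abstract states no formulas, and the paper's linear lower-bounding construction is not reproduced here; purely as a minimal scalar sketch of the key idea, a Newton-type step can substitute one element of the Clarke generalized gradient for the derivative. The function g and the selection dg below are hypothetical illustrations, not taken from the paper.

# Sketch: generalized Newton step x_{k+1} = x_k - g(x_k)/v_k,
# where v_k is one element of the Clarke generalized gradient of
# a semismooth function g.
def semismooth_newton(g, subgrad, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            break
        x = x - gx / subgrad(x)  # divide by a generalized-gradient element
    return x

# Hypothetical test: g(x) = 2x + |x| - 1 is semismooth with unique root 1/3.
g = lambda x: 2.0 * x + abs(x) - 1.0
dg = lambda x: 3.0 if x >= 0 else 1.0  # a selection from the Clarke gradient

print(semismooth_newton(g, dg, x=-2.0))  # converges to 0.3333...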
We provide sufficient convergence conditions for the Secant method for approximating a locally unique solution of an operator equation in a Banach space. The main hypothesis is the gamma condition, first introduced in [10] for the study of Newton's method. Our sufficient convergence condition reduces to the one obtained in [10] for Newton's method. A numerical example is also provided.
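In the scalar case the operator iteration x_{n+1} = x_n - [x_{n-1}, x_n; F]^{-1} F(x_n) reduces to the classical secant recursion; the sketch below illustrates this reduction only, and the test function F is an assumption, not the paper's numerical example.

# Scalar secant iteration: the divided difference of order one
# [x_{n-1}, x_n; F] replaces F'(x_n) in Newton's method.
def secant(F, x_prev, x_curr, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f_prev, f_curr = F(x_prev), F(x_curr)
        dd = (f_curr - f_prev) / (x_curr - x_prev)  # [x_{n-1}, x_n; F]
        x_prev, x_curr = x_curr, x_curr - f_curr / dd
        if abs(x_curr - x_prev) < tol:
            break
    return x_curr

# Hypothetical test: F(x) = x^3 - 2 has the locally unique root 2^(1/3).
print(secant(lambda x: x**3 - 2.0, 1.0, 2.0))  # converges to 1.2599...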
We re-examine a quadratically convergent method that uses divided differences of order one to approximate a locally unique solution of an equation in a Banach space setting [4, 5, 7]. Recently, in [4, 5, 7], using Lipschitz conditions and a Newton-Kantorovich type approach, we provided a local as well as a semilocal convergence analysis for this method, which compares favorably with other methods that use two function evaluations, such as Steffensen's method [1, 3, 13]. Here, we analyze the method under the gamma condition [6, 7, 19, 20]. In particular, we also establish its quadratic convergence. Numerical examples further validating the theoretical results are provided.
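The exact divided-difference scheme of [4, 5, 7] is not given in the abstract; as a stand-in, the sketch below shows the comparison method the abstract names, Steffensen's iteration, which likewise uses two function evaluations per step and converges quadratically. The test function is a hypothetical choice.

import math

# Steffensen's method: x_{n+1} = x_n - F(x_n) / [x_n, x_n + F(x_n); F],
# a derivative-free, quadratically convergent iteration.
def steffensen(F, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        dd = (F(x + fx) - fx) / fx  # divided difference [x, x + F(x); F]
        x = x - fx / dd
    return x

# Hypothetical test: F(x) = exp(x) - 2 has the root ln 2 ≈ 0.6931.
print(steffensen(lambda x: math.exp(x) - 2.0, x=1.0))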