Nonlinear Least Squares Problems Using Approximate Greatest Descent Method


Full Description

Bibliographic Details
Published in: Proceedings - 2022 International Conference on Computer and Drone Applications, IConDA 2022
Main Author: 2-s2.0-85146657825
格式: Conference paper
Language: English
Published: Institute of Electrical and Electronics Engineers Inc., 2022
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85146657825&doi=10.1109%2fICONDA56696.2022.10000382&partnerID=40&md5=54db9497ee051201347ecb910204daab
Physical Description
Summary: Many numerical methods have been developed and modified to solve nonlinear least squares (NLS) problems as unconstrained optimization problems. One of the challenges with the existing numerical methods for NLS problems is the expensive computation of the Hessian matrix at every iteration. Hence, most numerical methods employ a truncated Hessian, which may not be a good approximation of the Hessian matrix when the residuals are large. In this paper, the approximate greatest descent (AGD) method is proposed to solve NLS problems. The algorithm of the AGD method, which uses the full Hessian matrix, is constructed in a logical, rational, and geometrical way. Numerical differentiation is employed to compute the derivatives of the function. The convergence analysis of the AGD method follows the Lyapunov function theorem, whereby the monotonically decreasing property and properly nested level sets of the objective function ensure convergence. Coincidentally, the iterative equation of the AGD method resembles that of the Levenberg-Marquardt (LM) method, even though the two methods are derived differently. Numerical experiments have shown that both the AGD and LM methods produce stable trajectories. The AGD method, on the other hand, is more efficient, reliable, and robust because convergence can be achieved with fewer iterations in less time. © 2022 IEEE.
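For orientation, the comparison in the abstract can be made concrete. In NLS the objective is f(x) = (1/2)||r(x)||^2, whose full Hessian is J^T J + sum_i r_i(x) ∇^2 r_i(x); truncated-Hessian (Gauss-Newton-type) methods drop the second term, which is exactly what degrades when the residuals are large. Below is a minimal Python sketch of a Levenberg-Marquardt-style damped Gauss-Newton step using a forward-difference Jacobian, in the spirit of the numerical differentiation mentioned above. This is not the paper's AGD algorithm, only the generic LM-type iteration the abstract says AGD resembles; the function names, the fixed damping parameter mu, and the exponential-fit test problem are illustrative assumptions.

import numpy as np

def numerical_jacobian(r, x, eps=1e-6):
    # Forward-difference Jacobian of the residual vector r at x
    # (a simple stand-in for the numerical differentiation mentioned above).
    r0 = r(x)
    J = np.empty((r0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (r(xp) - r0) / eps
    return J

def lm_step(r, x, mu):
    # One LM-style damped Gauss-Newton step:
    # solve (J^T J + mu*I) dx = -J^T r(x) and return x + dx.
    # The mu*I term regularizes the truncated Hessian J^T J.
    J = numerical_jacobian(r, x)
    res = r(x)
    A = J.T @ J + mu * np.eye(x.size)
    return x + np.linalg.solve(A, -J.T @ res)

# Hypothetical usage: fit y = a*exp(b*t) to synthetic data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.3 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
x = np.array([1.0, 0.0])
for _ in range(50):
    x = lm_step(residual, x, mu=1e-3)
print(x)  # approaches [2.0, -1.3]

A production LM implementation would adapt mu per iteration based on how well each step reduces the objective; a fixed mu is used here only to keep the sketch short.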
DOI:10.1109/ICONDA56696.2022.10000382