A survey of first order stochastic optimization methods and algorithms based adaptive learning rate from a machine learning perspective

Stochastic optimization in machine learning adjusts model parameters to reduce a cost function, which measures the difference between the actual values and the values predicted by the machine learning model. The learning rate regulates how much the model is altered in response to the estimated error. Tuning the learning rate is a challenging task: a learning rate that is too large degrades the performance of the model, while one that is too small may never converge to an optimal or near-optimal solution. An adaptive learning rate adjusts the learning rate based on the performance of the model. This paper presents a survey of first-order stochastic optimization algorithms, which are the main choice for machine learning because of their simplicity and their speed on large datasets. The stochastic gradient descent method, its variants, and mini-batch algorithms are elaborated. The adaptive learning rates embedded in stochastic optimization algorithms can be further improved; the paper discusses learning rate adaptation schemes and how they stabilize the learning rate, which helps stochastic gradient descent achieve fast convergence and a high success rate. The paper aims to offer useful insights for the development of future stochastic optimization algorithms in machine learning. © 2023 Author(s).
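
For context, the adaptive-rate behavior described in the abstract can be illustrated with a minimal sketch. The Python/NumPy example below runs mini-batch stochastic gradient descent with an AdaGrad-style per-parameter adaptive learning rate on a synthetic least-squares problem. This is a generic example of the class of schemes the survey covers, not the paper's own algorithm; the data, base rate eta, stability constant eps, and batch size are assumptions chosen for demonstration.

    # Illustrative sketch only: mini-batch SGD with an AdaGrad-style
    # per-parameter adaptive learning rate on synthetic linear regression.
    # The setup (data, eta, eps, batch) is assumed, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                 # synthetic features
    true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
    y = X @ true_w + 0.1 * rng.normal(size=1000)   # noisy targets

    w = np.zeros(5)        # parameters to learn
    g_sq = np.zeros(5)     # running sum of squared gradients (AdaGrad)
    eta, eps = 0.5, 1e-8   # base learning rate and stability constant
    batch = 32

    for epoch in range(20):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            b = idx[start:start + batch]
            # gradient of 0.5 * mean squared error on the mini-batch
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            g_sq += grad ** 2
            # effective step shrinks per coordinate as gradients accumulate
            w -= eta / np.sqrt(g_sq + eps) * grad

    print("learned:", np.round(w, 3))  # close to true_w

The per-coordinate division by the accumulated gradient magnitude is one form of the stabilization the abstract refers to: coordinates that consistently see large gradients receive smaller effective steps, so a single hand-tuned global rate is not required.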

Bibliographic Details
Published in: AIP Conference Proceedings, Vol. 2896, No. 1
Main Authors: Weijuan S.; Shuib A.; Alwadood Z.
Format: Conference paper
Language: English
Published: American Institute of Physics Inc., 2023
DOI: 10.1063/5.0177172
ISSN: 0094-243X
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85179818310&doi=10.1063%2f5.0177172&partnerID=40&md5=49e5b9ade56c278e53086eec423de955