EETS: An energy-efficient task scheduler in cloud computing based on improved DQN algorithm

The enormous energy consumption of cloud data centers drives up operating costs and carbon emissions. Deep Reinforcement Learning (DRL) combines deep learning with reinforcement learning, giving it a clear advantage on complex task-scheduling problems, and Deep Q-Network (DQN)-based task scheduling has been employed for objective optimization. However, training the DQN algorithm can suffer from value overestimation, which degrades learning effectiveness; moreover, while the replay buffer increases sample utilization, it does not distinguish samples by importance, so valuable samples are underused. This study proposes an enhanced task-scheduling algorithm based on the DQN framework that uses a dueling network architecture together with a Double DQN strategy to alleviate overestimation bias, and incorporates prioritized experience replay to perform importance sampling over experience data, overcoming the low utilization caused by uniform sampling from replay memory. Building on these techniques, we developed an energy-efficient task-scheduling algorithm called EETS (Energy-Efficient Task Scheduling), which automatically learns the optimal scheduling policy from historical data while interacting with the environment. Experimental results demonstrate that EETS converges faster and attains higher rewards than both DQN and DDQN, and outperforms other baseline algorithms on key metrics, including energy consumption, average task response time, and average machine working time, with a particularly large advantage when handling large batches of tasks. © 2024 The Author(s)
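The abstract names three DQN refinements: a dueling network head, Double DQN targets, and prioritized experience replay. The following is a minimal NumPy sketch of those three ideas only, not the authors' EETS implementation; all network sizes, weights, and function names here are hypothetical.

```python
import numpy as np

def dueling_q(s, w_feat, w_val, w_adv):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    h = np.tanh(s @ w_feat)        # shared feature layer
    v = h @ w_val                  # state-value stream, shape (1,)
    adv = h @ w_adv                # advantage stream, shape (n_actions,)
    return v + adv - adv.mean()    # subtract mean advantage for identifiability

def double_dqn_target(r, s_next, done, gamma, q_online, q_target):
    """Double DQN: the online net selects the action, the target net evaluates it."""
    if done:
        return r
    a_star = int(np.argmax(q_online(s_next)))   # selection: online network
    return r + gamma * q_target(s_next)[a_star] # evaluation: target network

def replay_priorities(td_errors, alpha=0.6, eps=1e-6):
    """Proportional prioritized replay: P(i) proportional to (|delta_i| + eps)^alpha."""
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()

# Tiny demo with fixed random weights (3-dim state, 4 hidden units, 2 actions).
rng = np.random.default_rng(0)
w_feat = rng.normal(size=(3, 4))
w_val = rng.normal(size=(4, 1))
w_adv = rng.normal(size=(4, 2))
q_online = lambda s: dueling_q(s, w_feat, w_val, w_adv)
q_target = lambda s: dueling_q(s, w_feat * 0.9, w_val, w_adv)  # stand-in for a stale target copy

s_next = np.array([0.1, -0.2, 0.3])
y = double_dqn_target(r=1.0, s_next=s_next, done=False, gamma=0.99,
                      q_online=q_online, q_target=q_target)
probs = replay_priorities(np.array([0.5, -2.0, 0.1]))
```

Decoupling action selection from evaluation is what counters the overestimation bias the abstract refers to, and the priority weights are what replace uniform sampling from the replay memory.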


Bibliographic Details
Published in: Journal of King Saud University - Computer and Information Sciences
Main Authors: Hou H.; Ismail A.
Format: Article
Language: English
Published: King Saud bin Abdulaziz University, 2024
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85203169651&doi=10.1016%2fj.jksuci.2024.102177&partnerID=40&md5=a911cc63c5c74c0a8db689c38312aa6f
Scopus ID: 2-s2.0-85203169651
Volume: 36
Issue: 8
DOI: 10.1016/j.jksuci.2024.102177
ISSN: 1319-1578
Access: All Open Access; Gold Open Access