Vanilla Policy Gradient And Deep Reinforcement Static Task Scheduling In Cloud Computing

Authors

  • P. Ranjani, Research Scholar, Cauvery College for Women (Autonomous), Affiliated to Bharathidasan University, Trichy.
  • M. Parveen, Professor, Cauvery College for Women (Autonomous), Affiliated to Bharathidasan University, Trichy.

Keywords

Cloud Computing, Task Scheduling, Markov Discrete-time Stochastic, Deep Reinforcement Learning

Abstract

As a computing paradigm with centralized data processing, cloud computing provides cloud users with on-demand services, thereby permitting numerous devices with constrained capabilities to deliver more complex applications. Efficient resource allocation and task scheduling therefore remain paramount research problems in Cloud Computing (CC). The diversity of CC resources and the physical distribution of processors pose new challenges for allocating resources and scheduling tasks effectively. In this paper, task scheduling is considered in the CC scenario, where multiple tasks are scheduled onto virtual machines (VMs) configured at the cloud server so as to maximize Cloud User Request Task Satisfaction. The proposed method is called Markov Discrete-time Stochastic Deep Reinforcement Learning-based (MDS-DRL) static task scheduling in Cloud Computing. The problem is formulated as a Markov Discrete-time Stochastic model for which the state space, action space, state transition, and reward are designed. We leverage a Vanilla Policy Gradient update in Deep Reinforcement Learning (DRL) to optimize both makespan and resource utilization, taking into consideration the heterogeneity of the tasks and of the accessible resources. Several experiments have been carried out on an open-source simulator (CloudSim) using a personal cloud data set from the NEC personal cloud trace. The experimental results demonstrate the efficiency of the proposed method: MDS-DRL static task scheduling is compared with other existing task scheduling methods and performs especially well in environments with a large number of independent tasks. The MDS-DRL method shows a clear advantage in terms of makespan, resource utilization, and waiting time.
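The abstract outlines an MDP-style formulation (state space, action space, state transition, reward) solved with a vanilla policy gradient update. The sketch below illustrates that general recipe only; it is not the authors' implementation. The toy state features (per-VM ready times plus the current task length), the linear-softmax policy, the reward shaped as the negative increase in makespan, and all constants are hypothetical choices made for this illustration.

    # Illustrative sketch (not the paper's code): REINFORCE / vanilla policy gradient
    # for a toy static task-scheduling MDP with heterogeneous tasks and VMs.
    import numpy as np

    rng = np.random.default_rng(0)

    N_VMS, N_TASKS = 4, 50
    VM_SPEEDS = rng.uniform(1.0, 3.0, N_VMS)          # hypothetical heterogeneous VM capacities
    TASK_LENGTHS = rng.uniform(10.0, 100.0, N_TASKS)  # hypothetical heterogeneous task sizes

    N_FEATURES = N_VMS + 1                            # per-VM ready times + current task length
    theta = np.zeros((N_FEATURES, N_VMS))             # linear-softmax policy parameters

    def features(ready_times, task_len):
        # State: normalized VM ready times plus the normalized length of the task to place.
        return np.concatenate([ready_times / 100.0, [task_len / 100.0]])

    def policy(x):
        # Softmax distribution over which VM receives the current task.
        logits = x @ theta
        logits -= logits.max()
        p = np.exp(logits)
        return p / p.sum()

    def run_episode():
        # Schedule all tasks once; each step is rewarded with the negative makespan increase.
        ready = np.zeros(N_VMS)                       # time at which each VM becomes free
        trajectory = []
        for length in TASK_LENGTHS:
            x = features(ready, length)
            probs = policy(x)
            a = rng.choice(N_VMS, p=probs)            # sample a VM for this task
            old_makespan = ready.max()
            ready[a] += length / VM_SPEEDS[a]         # execution time on the chosen VM
            r = -(ready.max() - old_makespan)
            trajectory.append((x, a, r))
        return trajectory, ready.max()

    ALPHA, GAMMA = 0.01, 1.0
    for episode in range(300):
        traj, makespan = run_episode()
        # Monte-Carlo returns for the vanilla policy gradient update.
        G, returns = 0.0, []
        for _, _, r in reversed(traj):
            G = r + GAMMA * G
            returns.append(G)
        returns.reverse()
        baseline = np.mean(returns)                   # simple baseline to reduce variance
        for (x, a, _), G in zip(traj, returns):
            probs = policy(x)
            grad_log = -np.outer(x, probs)            # d/dtheta of log pi(a|x) for a softmax policy
            grad_log[:, a] += x
            theta += ALPHA * (G - baseline) * grad_log
        if episode % 100 == 0:
            print(f"episode {episode:3d}  makespan {makespan:.1f}")

The design choice here is the simplest possible policy gradient: undiscounted Monte-Carlo returns, a mean-return baseline, and a linear policy, which is enough to show how a scheduling decision per task maps onto the state/action/reward interface described in the abstract.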

Published

2023-10-25

How to Cite

P. Ranjani, & M. Parveen. (2023). Vanilla Policy Gradient And Deep Reinforcement Static Task Scheduling In Cloud Computing. Chinese Journal of Computational Mechanics, (5), 534–545. Retrieved from http://jslxxb.cn/index.php/jslxxb/article/view/4398

Section

Articles