Matching workloads to systems with deep reinforcement learning

Authors
Hu, Bing
Mason, Nicholas
Issue Date
2024-06
Type
Article
Keywords
Deep reinforcement learning, Computer workload, System configuration
Citation
Hu, B., & Mason, N. (2024). Matching workloads to systems with deep reinforcement learning. Journal of Management & Engineering Integration, 17(1), 55-63. https://doi.org/10.62704/10057/28084
Abstract

Along with the evolution of computer microarchitecture over the years, the number of dies, cores, and embedded multi-die interconnect bridges has grown. Optimizing the workload running on a central processing unit (CPU) to improve overall computer performance has become a challenge. Matching workloads to systems with optimal system configurations to achieve the desired system performance is an open challenge in both academic and industrial research. In this paper, we propose two reinforcement learning (RL) approaches, a deep reinforcement learning (DRL) approach and an evolutionary deep reinforcement learning (EDRL) approach, to find an optimal system configuration for a given computer workload with a system performance objective. The experimental results demonstrate that both approaches can determine the optimal system configuration for the desired performance objective. The comparison studies show that the DRL approach outperforms standard RL approaches. In the future, these DRL approaches can be leveraged in system performance auto-tuning studies.
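
The following is a minimal, illustrative sketch (not the authors' implementation) of how the configuration-matching problem can be framed as reinforcement learning: the state is a single hypothetical configuration knob, actions nudge it up or down, and the reward improves as a simulated performance metric approaches the objective. Tabular Q-learning is used here purely to keep the example self-contained; the paper's DRL and EDRL agents would instead replace the Q-table with a neural network over a richer configuration and workload state.

import numpy as np

# Illustrative assumptions: one discrete configuration knob with N_CONFIGS
# levels, a hidden level TARGET that meets the performance objective, and a
# reward that grows as the configuration approaches that level.
rng = np.random.default_rng(0)

N_CONFIGS = 8                      # hypothetical number of configuration levels
TARGET = 5                         # hypothetical level that meets the objective
ACTIONS = (-1, 0, +1)              # decrease, keep, or increase the knob

def reward(config: int) -> float:
    # Simulated performance objective: best (zero) when the configuration
    # matches the optimal level, increasingly negative as it drifts away.
    return -abs(config - TARGET)

# Tabular Q-learning stands in for a deep Q-network in this sketch.
Q = np.zeros((N_CONFIGS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.2

for episode in range(500):
    s = rng.integers(N_CONFIGS)    # start from a random configuration
    for _ in range(20):
        # Epsilon-greedy action selection over the three knob adjustments.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s_next = int(np.clip(s + ACTIONS[a], 0, N_CONFIGS - 1))
        r = reward(s_next)
        # One-step Q-learning update toward the observed reward.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

best = int(np.argmax([Q[c].max() for c in range(N_CONFIGS)]))
print(f"configuration level judged best by the learned policy: {best}")

Running the sketch converges on the target level; in the paper's setting the same loop structure applies, but the state, action space, and reward come from real workload and system-performance measurements rather than this toy simulation.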

Description
Published in SOAR: Shocker Open Access Repository by the Wichita State University Libraries Technical Services, July 2024.
Publisher
Association for Industry, Engineering and Management Systems (AIEMS)
Series
Journal of Management & Engineering Integration
v.17 no.1
ISSN
1939-7984