Learning-Based Joint User Association and Cache Replacement for Cache-Enabled Cloud RAN (open access)
- Authors
- Jeon, Sang-Eun; Jung, Jae-Wook; Lee, Kisong; Hong, Jun-Pyo
- Issue Date
- May-2024
- Publisher
- IEEE
- Keywords
- Wireless communication; Optimization; Baseband; Training; Simulation; Markov decision processes; Deep reinforcement learning; Cache replacement; C-RAN; DRL; mobile edge caching; user association
- Citation
- IEEE Open Journal of the Communications Society, v.5, pp. 3038-3049
- Pages
- 12
- Indexed
- SCOPUS
ESCI
- Journal Title
- IEEE Open Journal of the Communications Society
- Volume
- 5
- Start Page
- 3038
- End Page
- 3049
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/21983
- DOI
- 10.1109/OJCOMS.2024.3397054
- ISSN
- 2644-125X
- Abstract
- Mobile edge caching is regarded as a promising technology for reducing network latency and alleviating network congestion by efficiently offloading data traffic and computation to cache-enabled edge nodes. To fully leverage the benefits of edge caching, it is essential to jointly optimize caching and communication strategies, accounting for dynamic content request patterns and the unstable nature of wireless mobile networks. Motivated by this, we study a joint cache replacement and user association strategy for minimizing content delivery latency in a cache-enabled cloud radio access network (C-RAN), where remote radio heads (RRHs) cache some contents so that content requests can be served without downloading the requested content from the centralized baseband unit (BBU) via the fronthaul. Unlike traditional cache placement strategies, our cache replacement facilitates gradual and timely updates while serving user content requests, without imposing additional network overhead. Specifically, whenever a user requests a content, the BBU decides which RRH serves the request and whether to replace the cached data of the selected RRH, taking into account the user location, the cache status of the RRHs, and the impact on subsequent content deliveries. We optimize the selection of the serving RRH and the replacement of cached data by formulating a latency minimization problem as a Markov decision process (MDP). This formulation captures the tradeoff between cache hit ratio and communication reliability. To develop an effective strategy for solving the MDP, we employ a deep reinforcement learning (DRL) algorithm and design a novel neural network structure and input feature map specifically tailored to our problem domain. Simulation results show that the proposed approach learns an effective strategy appropriate to the given environment, thereby outperforming not only traditional rule-based strategies but also a typical DRL algorithm in terms of average latency. The proposed approach is also shown to be relatively robust to time-varying content popularity, quickly adapting to new popularity distributions.
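The joint decision described in the abstract — on each content request, the BBU selects a serving RRH and decides whether that RRH should replace its cached content — can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's model: it uses a tiny two-RRH, three-content environment with fixed hit/miss latencies and tabular Q-learning in place of the paper's deep reinforcement learning agent and tailored network structure.

```python
import random
from collections import defaultdict

# Toy sketch (assumed parameters, not from the paper): on each request,
# the agent picks an action (rrh, replace) — which RRH serves the user
# and whether that RRH swaps its cached item for the requested one.
# Reward is negative latency: a cache hit avoids the fronthaul fetch.

N_RRH = 2          # number of remote radio heads
N_CONTENT = 3      # content catalogue size
HIT_LATENCY = 1.0  # serve directly from the RRH cache
MISS_LATENCY = 5.0 # extra fronthaul download from the BBU

def step(cache, request, rrh, replace):
    """Serve `request` via `rrh`; optionally replace its cached item."""
    latency = HIT_LATENCY if cache[rrh] == request else MISS_LATENCY
    if replace:
        cache = tuple(request if i == rrh else c for i, c in enumerate(cache))
    return cache, -latency  # reward = negative delivery latency

def train(episodes=3000, alpha=0.3, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    popularity = [0.6, 0.3, 0.1]  # skewed request distribution (assumed)
    actions = [(r, rep) for r in range(N_RRH) for rep in (False, True)]
    Q = defaultdict(float)
    cache = (0, 1)  # initial cache placement: RRH 0 holds 0, RRH 1 holds 1
    for _ in range(episodes):
        request = rng.choices(range(N_CONTENT), weights=popularity)[0]
        state = (cache, request)
        if rng.random() < eps:                       # epsilon-greedy
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(state, x)])
        next_cache, reward = step(cache, request, *a)
        # Expected next-state value over the request distribution.
        future = sum(p * max(Q[((next_cache, r), x)] for x in actions)
                     for r, p in enumerate(popularity))
        Q[(state, a)] += alpha * (reward + gamma * future - Q[(state, a)])
        cache = next_cache
    return Q, actions

Q, actions = train()
# When the requested content is cached at RRH 0, the learned policy
# should serve it from RRH 0 rather than incur a fronthaul miss.
state = ((0, 1), 0)
best = max(actions, key=lambda a: Q[(state, a)])
print(best)
```

The sketch captures the hit-ratio/latency tradeoff only in miniature; the paper's formulation additionally accounts for user location, wireless reliability, and the effect of each replacement on subsequent deliveries, which is what motivates a deep RL agent over a lookup table.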
- Files in This Item
- There are no files associated with this item.
- Appears in
Collections - College of Engineering > Department of Information and Communication Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.