Deep Reinforcement Learning Based Dynamic Resource Allocation in 5G Networks

Authors

  • Zahraa Faris Hamdan Al-Aani, Kocaeli University, Kocaeli, Türkiye
  • Arif Dolma, Kocaeli University, Kocaeli, Türkiye

DOI:

https://doi.org/10.5281/zenodo.8416084

Keywords:

Deep learning (DL), base station (BS), deep Q-learning (DQL) algorithm

Abstract

The rapid proliferation of 5G technologies necessitates new strategies for managing network resources. Resources are traditionally allocated using heuristic approaches such as exhaustive search and genetic algorithms, as well as combinatorial techniques such as branch and bound. Large-scale heterogeneous cellular networks, with ultra-dense base station deployments, massive numbers of connections, and varying QoS requirements across distinct classes of users, cannot benefit from these solutions because of their computational cost. As a result, fifth-generation wireless networks call for a paradigm shift away from traditional resource allocation algorithms. Data-driven machine learning (ML) has produced methods that optimize performance at low computational cost. Deep learning (DL) is a useful technique for training a multi-layer neural network to approximate a resource management policy from network data, thereby avoiding the heavy online computation that would otherwise be required to solve resource allocation problems. In this research, we develop a deep-learning-based resource allocation framework for multi-cell wireless networks with the goal of increasing total network throughput. We train and test multiple reinforcement learning agents using the deep Q-network (DQN) algorithm and the Rainbow extensions of DQN. The performance of each agent is evaluated on 5G Urban Macro simulation scenarios and benchmarked against a fixed power allocation approach.
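To make the training setup described in the abstract concrete, below is a minimal sketch of how a DQN agent for discrete downlink power control at a base station could look. This is an illustrative sketch, not the authors' implementation: the state layout, network sizes, action space (N_POWER_LEVELS discrete transmit power levels), and all hyperparameters are assumptions invented for the example.

# Minimal DQN sketch for discrete downlink power control (illustrative only;
# not the paper's implementation). Assumptions: the state is a small vector of
# local channel/interference measurements, each action selects one of
# N_POWER_LEVELS discrete transmit power levels, and the reward is the
# resulting network throughput.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 8        # assumed length of the per-BS measurement vector
N_POWER_LEVELS = 10  # assumed number of discrete power levels (actions)

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per discrete power level."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_POWER_LEVELS),
        )

    def forward(self, s):
        return self.net(s)

q_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(q_net.state_dict())  # frozen copy for TD targets
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=100_000)  # (state, action, reward, next_state) tuples
gamma, eps = 0.99, 0.1

def act(state):
    """Epsilon-greedy selection of a power level for the current state."""
    if random.random() < eps:
        return random.randrange(N_POWER_LEVELS)
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())

def train_step(batch_size=64):
    """One DQN update on a uniformly sampled minibatch of transitions."""
    if len(replay) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(replay, batch_size))
    s = torch.as_tensor(s, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64)
    r = torch.as_tensor(r, dtype=torch.float32)
    s2 = torch.as_tensor(s2, dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a)
    with torch.no_grad():                              # bootstrapped TD target
        target = r + gamma * target_net(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

A full training loop would step the simulated network, store each (state, action, reward, next_state) transition in replay, call train_step, and periodically refresh target_net from q_net. The Rainbow extensions mentioned in the abstract modify individual pieces of this recipe, e.g. prioritized rather than uniform replay sampling and double-Q targets.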

Published

2023-09-25

How to Cite

Al-Aani, Z. F. H., & Dolma, A. (2023). Deep Reinforcement Learning Based Dynamic Resource Allocation in 5G Networks. Euroasia Journal of Mathematics, Engineering, Natural & Medical Sciences, 10(29), 67–84. https://doi.org/10.5281/zenodo.8416084

Issue

Vol. 10 No. 29 (2023)

Section

Articles