Toward Developing Fog Decision Making on the Transmission Rate of Various IoT Devices Based on Reinforcement Learning
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Mobasheri, Motahareh | - |
| dc.contributor.author | Kim, Yangwoo | - |
| dc.contributor.author | Kim, Woongsup | - |
| dc.date.accessioned | 2025-01-13T07:00:09Z | - |
| dc.date.available | 2025-01-13T07:00:09Z | - |
| dc.date.issued | 2020-03 | - |
| dc.identifier.issn | 2576-3180 | - |
| dc.identifier.issn | 2576-3199 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/56669 | - |
| dc.description.abstract | In recent years, the focus on reducing the delay and cost of transferring data to the cloud has led to data processing near end devices. Fog computing has therefore emerged as a powerful complement to the cloud for handling the large data volumes generated by the Internet of Things (IoT) and the associated communication requirements. As the number of IoT devices grows, managing them at a fog node becomes increasingly complicated. The problem addressed in this study is setting the transmission rates of various IoT devices to a fog node so as to prevent delays in emergency cases. We formulate the decision-making problem of a fog node using a reinforcement learning approach, taking a smart city as an example of a smart environment, and then develop a Q-learning algorithm to make efficient decisions on IoT transmission rates to the fog node. To the best of our knowledge, there has been no prior research with this objective; we therefore simulate two additional approaches, random-based and greedy-based, and show that our method performs considerably better (over 99.8 percent) than these algorithms. © 2018 IEEE. | - |
| dc.format.extent | 5 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.title | Toward Developing Fog Decision Making on the Transmission Rate of Various IoT Devices Based on Reinforcement Learning | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/IOTM.0001.1900070 | - |
| dc.identifier.scopusid | 2-s2.0-85093086666 | - |
| dc.identifier.bibliographicCitation | IEEE Internet of Things Magazine, v.3, no.1, pp 38 - 42 | - |
| dc.citation.title | IEEE Internet of Things Magazine | - |
| dc.citation.volume | 3 | - |
| dc.citation.number | 1 | - |
| dc.citation.startPage | 38 | - |
| dc.citation.endPage | 42 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
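The abstract describes formulating the fog node's rate-control decision as a reinforcement learning problem solved with Q-learning. The following is a minimal, hypothetical sketch of tabular Q-learning for such a setting; the states (`normal`/`emergency`), the three rate levels, and the toy reward function are illustrative assumptions, not the state space, action space, or reward used in the paper.

```python
import random

# Hypothetical setup (NOT the paper's model): a fog node picks a
# transmission-rate level for IoT devices. The toy reward favors the
# highest rate in an emergency and the lowest rate in normal operation.
STATES = ["normal", "emergency"]
ACTIONS = [0, 1, 2]  # rate levels: low, medium, high

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def reward(state, action):
    # Emergencies reward high rates; normal traffic rewards low rates.
    return action if state == "emergency" else 2 - action

def train(episodes=5000, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    state = rng.choice(STATES)
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < EPSILON:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = rng.choice(STATES)  # toy dynamics: state changes at random
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update rule
        q[(state, action)] += ALPHA * (
            reward(state, action) + GAMMA * best_next - q[(state, action)]
        )
        state = next_state
    return q

q = train()
# Greedy policy extracted from the learned Q-table
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

Under this toy reward, the greedy policy converges to the lowest rate in the `normal` state and the highest rate in the `emergency` state, matching the paper's stated goal of prioritizing emergency traffic.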
