Enhanced reinforcement learning by recursive updating of Q-values for reward propagation
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Sung, Y. | - |
| dc.contributor.author | Ahn, E. | - |
| dc.contributor.author | Cho, K. | - |
| dc.date.accessioned | 2024-08-08T04:01:31Z | - |
| dc.date.available | 2024-08-08T04:01:31Z | - |
| dc.date.issued | 2013 | - |
| dc.identifier.issn | 1876-1100 | - |
| dc.identifier.issn | 1876-1119 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/17654 | - |
| dc.description.abstract | In this paper, we propose a method to reduce the learning time of Q-learning by combining updates to the Q-values of unexecuted actions with the addition of a terminal reward to unvisited Q-values. To verify the method, its performance was compared to that of conventional Q-learning. The proposed approach matched the performance of conventional Q-learning while requiring only 27% of the learning episodes. Accordingly, we verified that the proposed method reduces learning time by updating more Q-values in the early stage of learning and distributing the terminal reward to more Q-values. © 2013 Springer Science+Business Media. | - |
| dc.format.extent | 6 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.title | Enhanced reinforcement learning by recursive updating of Q-values for reward propagation | - |
| dc.type | Article | - |
| dc.publisher.location | Germany | - |
| dc.identifier.doi | 10.1007/978-94-007-5860-5_121 | - |
| dc.identifier.scopusid | 2-s2.0-84874175850 | - |
| dc.identifier.bibliographicCitation | Lecture Notes in Electrical Engineering, v.215 LNEE, pp 1003 - 1008 | - |
| dc.citation.title | Lecture Notes in Electrical Engineering | - |
| dc.citation.volume | 215 LNEE | - |
| dc.citation.startPage | 1003 | - |
| dc.citation.endPage | 1008 | - |
| dc.type.docType | Conference Paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.subject.keywordAuthor | Propagation | - |
| dc.subject.keywordAuthor | Q-learning | - |
| dc.subject.keywordAuthor | Q-value | - |
| dc.subject.keywordAuthor | Terminal reward | - |
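The abstract describes accelerating Q-learning by propagating the terminal reward back through visited Q-values at the end of an episode. The paper's exact update equations are not given in this record, so the following is only a minimal illustrative sketch: a tabular Q-learner on a hypothetical 1-D chain environment, where after each episode the discounted terminal reward is pushed backward along the visited state-action trajectory. All hyperparameters and the propagation rule are assumptions, not the authors' method.

```python
import random
from collections import defaultdict

# Illustrative assumptions (not from the paper): a 1-D chain of 6 states,
# reward 1.0 on reaching the rightmost state, epsilon-greedy exploration.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
GOAL = 5                # states 0..5; state 5 is terminal
ACTIONS = (-1, +1)      # step left / step right (clamped at the edges)

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def run_episode(Q, rng, propagate=True):
    state, trajectory = 0, []
    while True:
        if rng.random() < EPSILON:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        # Conventional one-step Q-learning update.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        trajectory.append((state, action))
        state = nxt
        if done:
            if propagate:
                # Sketch of the terminal-reward propagation idea: walk the
                # visited trajectory backward, discounting the terminal
                # reward, so early-episode Q-values are informed at once.
                g = reward
                for s, a in reversed(trajectory):
                    g *= GAMMA
                    Q[(s, a)] = max(Q[(s, a)], g)
            return

rng = random.Random(0)
Q = defaultdict(float)
for _ in range(20):
    run_episode(Q, rng)
# After a few episodes the greedy action at the start state points
# toward the goal.
```

With propagation enabled, every visited state-action pair receives a discounted share of the terminal reward after a single successful episode, rather than waiting for the reward to trickle back one bootstrap step per episode; this is the intuition behind the reported reduction in learning episodes.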
