Deep Q-network-based multi-criteria decision-making framework for virtual simulation environment
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Jang, Hyeonjun | - |
| dc.contributor.author | Hao, Shujia | - |
| dc.contributor.author | Chu, Phuong Minh | - |
| dc.contributor.author | Sharma, Pradip Kumar | - |
| dc.contributor.author | Sung, Yunsick | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2023-04-27T16:40:20Z | - |
| dc.date.available | 2023-04-27T16:40:20Z | - |
| dc.date.issued | 2021-09 | - |
| dc.identifier.issn | 0941-0643 | - |
| dc.identifier.issn | 1433-3058 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/4502 | - |
| dc.description.abstract | Deep learning improves the realistic expression of virtual simulations, specifically for solving multi-criteria decision-making problems, which generally rely on high-performance artificial intelligence. This study was inspired by motivation theory and observations of natural life. Recently, motivation-based control has been actively studied for realistic expression, but it presents various problems. For instance, it is hard to define the relations among multiple motivations and to select goals based on multiple motivations. Behaviors should generally be planned taking motivations and goals into account. This paper proposes a deep Q-network (DQN)-based multi-criteria decision-making framework that enables virtual agents, in real time, to automatically select goals based on motivations in virtual simulation environments and to plan relevant behaviors to achieve those goals. All motivations are classified according to the five levels of Maslow's hierarchy of needs; the virtual agents train a double DQN on big social data, select optimal goals depending on motivations, and perform behaviors based on predefined hierarchical task networks (HTNs). Compared with the state-of-the-art method, the proposed framework is efficient: it reduced the average loss from 0.1239 to 0.0491 and increased accuracy from 63.24% to 80.15%. For behavioral performance using predefined HTNs, the number of methods increased from 35 in the Q-network to 1511 in the proposed framework, and the computation time for 10,000 behavior plans was reduced from 0.118 s to 0.1079 s. | - |
| dc.format.extent | 15 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | SPRINGER LONDON LTD | - |
| dc.title | Deep Q-network-based multi-criteria decision-making framework for virtual simulation environment | - |
| dc.type | Article | - |
| dc.publisher.location | United Kingdom | - |
| dc.identifier.doi | 10.1007/s00521-020-04918-3 | - |
| dc.identifier.scopusid | 2-s2.0-85084088122 | - |
| dc.identifier.wosid | 000527495500002 | - |
| dc.identifier.bibliographicCitation | NEURAL COMPUTING & APPLICATIONS, v.33, no.17, pp 10657 - 10671 | - |
| dc.citation.title | NEURAL COMPUTING & APPLICATIONS | - |
| dc.citation.volume | 33 | - |
| dc.citation.number | 17 | - |
| dc.citation.startPage | 10657 | - |
| dc.citation.endPage | 10671 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | Big data | - |
| dc.subject.keywordAuthor | Motivation system | - |
| dc.subject.keywordAuthor | Behavior planning | - |
| dc.subject.keywordAuthor | Nature inspired algorithm | - |
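The abstract's core mechanism, double DQN goal selection, can be illustrated with a minimal sketch of the double DQN target computation. The Q-values, reward, and discount factor below are illustrative assumptions, not values or code from the paper:

```python
import numpy as np

gamma = 0.99  # discount factor (assumed for illustration)

# Toy Q-values for one next state: the online network selects the action,
# the target network evaluates it (the decoupling that defines double DQN).
q_online = np.array([0.2, 0.8, 0.1, 0.4])
q_target = np.array([0.5, 0.3, 0.9, 0.1])

reward, done = 1.0, False

# Double DQN target: argmax over the online network's Q-values,
# value taken from the target network at that action.
best_action = int(np.argmax(q_online))
td_target = reward + (0.0 if done else gamma * q_target[best_action])

print(best_action, td_target)  # action 1, target 1.0 + 0.99 * 0.3
```

In the framework described above, the selected action would correspond to a motivation-driven goal, and a predefined HTN would then decompose that goal into executable behaviors.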
