Hierarchical Multi-Agent Reinforcement Learning Method Using Energy Field in Sports Games
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Hoshin | - |
| dc.contributor.author | Kim, Junoh | - |
| dc.contributor.author | Park, Jisun | - |
| dc.contributor.author | Chu, Phuong minh | - |
| dc.contributor.author | Cho, Kyungeun | - |
| dc.date.accessioned | 2025-10-15T01:00:20Z | - |
| dc.date.available | 2025-10-15T01:00:20Z | - |
| dc.date.issued | 2025 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.dongguk.edu/handle/sw.dongguk/61709 | - |
| dc.description.abstract | This paper proposes an energy-field-based hierarchical multi-agent reinforcement learning method (HES-COMA) for evaluating individual agent contributions and learning efficient policies in dynamic and complex multi-agent environments such as sports games. The proposed method addresses the limitations of the conventional single-layer approach by using energy fields in a global layer to learn strategic positioning, and in a local layer to determine tactical actions (e.g., shooting, stealing, and blocking) from those positions. Specifically, the method assigns an energy value to represent the relative importance of key elements in the game space (ball, opponents, teammates, basket, and shooting probability spots), and builds a dynamically changing energy field depending on the state of play (offense, defense, free scenario, etc.). Experimental results in a commercialized 3vs3 basketball game environment show that HES-COMA achieves approximately 1.5 times faster learning than Counterfactual Multi-Agent Policy Gradients (COMA). It also improves the success rates of steals, rebounds, and blocks by factors of 1.38, 1.87, and 2.71, respectively. Moreover, by combining global strategic positioning information with local tactical decision-making, HES-COMA's movement patterns more closely resemble those of users and FSM-based agents in terms of spatial utilization. Consequently, HES-COMA effectively addresses contribution evaluation and data diversity issues in dynamic multi-agent sports games, thereby boosting both learning efficiency and overall performance. | - |
| dc.format.extent | 17 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | IEEE | - |
| dc.title | Hierarchical Multi-Agent Reinforcement Learning Method Using Energy Field in Sports Games | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2025.3613359 | - |
| dc.identifier.scopusid | 2-s2.0-105017406994 | - |
| dc.identifier.wosid | 001586193100017 | - |
| dc.identifier.bibliographicCitation | IEEE Access, v.13, pp 166926 - 166942 | - |
| dc.citation.title | IEEE Access | - |
| dc.citation.volume | 13 | - |
| dc.citation.startPage | 166926 | - |
| dc.citation.endPage | 166942 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordAuthor | Energy Field | - |
| dc.subject.keywordAuthor | Game AI | - |
| dc.subject.keywordAuthor | Multi-agent Reinforcement Learning | - |
| dc.subject.keywordAuthor | Sports Game | - |
| dc.subject.keywordAuthor | Decision Making | - |
| dc.subject.keywordAuthor | Dynamics | - |
| dc.subject.keywordAuthor | Intelligent Agents | - |
| dc.subject.keywordAuthor | Machine Learning | - |
| dc.subject.keywordAuthor | Sports | - |
| dc.subject.keywordAuthor | Counterfactuals | - |
| dc.subject.keywordAuthor | Energy Fields | - |
| dc.subject.keywordAuthor | Individual Agent | - |
| dc.subject.keywordAuthor | Multi Agent | - |
| dc.subject.keywordAuthor | Policy Gradient | - |
| dc.subject.keywordAuthor | Reinforcement Learning Method | - |
| dc.subject.keywordAuthor | Sport Game | - |
| dc.subject.keywordAuthor | Strategic Positioning | - |
| dc.subject.keywordAuthor | Multi Agent Systems | - |
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
