Hierarchical Multi-Agent Reinforcement Learning Method Using Energy Field in Sports Games (Open Access)
- Authors
- Lee, Hoshin; Kim, Junoh; Park, Jisun; Chu, Phuong Minh; Cho, Kyungeun
- Issue Date
- 2025
- Publisher
- IEEE
- Keywords
- Energy Field; Game AI; Multi-Agent Reinforcement Learning; Sports Game; Decision Making; Dynamics; Intelligent Agents; Machine Learning; Sports; Counterfactuals; Individual Agent; Policy Gradient; Strategic Positioning; Multi-Agent Systems
- Citation
- IEEE Access, v.13, pp 166926 - 166942
- Pages
- 17
- Indexed
- SCIE
SCOPUS
- Journal Title
- IEEE Access
- Volume
- 13
- Start Page
- 166926
- End Page
- 166942
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/61709
- DOI
- 10.1109/ACCESS.2025.3613359
- ISSN
- 2169-3536
2169-3536
- Abstract
- This paper proposes an energy-field-based hierarchical multi-agent reinforcement learning method (HES-COMA) for evaluating individual agent contributions and learning efficient policies in dynamic and complex multi-agent environments such as sports games. The proposed method addresses the limitations of the conventional single-layer approach by using energy fields in a global layer to learn strategic positioning, and in a local layer to determine tactical actions (e.g., shooting, stealing, and blocking) from those positions. Specifically, the method assigns an energy value to represent the relative importance of key elements in the game space (ball, opponents, teammates, basket, and shooting-probability spots), and builds a dynamically changing energy field depending on the state of play (offense, defense, free scenario, etc.). Experimental results in a commercialized 3vs3 basketball game environment show that HES-COMA achieves approximately 1.5 times faster learning than Counterfactual Multi-Agent Policy Gradients (COMA). It also improves the success rates of steals, rebounds, and blocks by factors of 1.38, 1.87, and 2.71, respectively. Moreover, by combining global strategic positioning information with local tactical decision-making, HES-COMA's movement patterns more closely resemble those of users and FSM-based agents in terms of spatial utilization. Consequently, HES-COMA effectively addresses the contribution evaluation and data diversity issues in dynamic multi-agent sports games, thereby boosting both learning efficiency and overall performance.
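The abstract does not give the paper's exact energy formulation, but the core idea it describes (assigning each key court element an energy value and summing them into a state-dependent field) can be sketched as follows. All names, weights, and the Gaussian contribution shape below are illustrative assumptions, not the authors' method:

```python
import math

# Hypothetical per-state weights for each element type: positive values
# attract the agent, negative values repel it. The offense/defense split
# mirrors the abstract's "dynamically changing energy field" idea.
STATE_WEIGHTS = {
    "offense": {"ball": 1.0, "basket": 0.8, "teammate": 0.3, "opponent": -0.5},
    "defense": {"ball": 1.0, "basket": 0.2, "teammate": 0.2, "opponent": -0.9},
}

def energy_at(point, elements, state, sigma=3.0):
    """Energy at a court position: weighted sum of Gaussian contributions
    from each element (kind, (x, y)). Sigma controls influence radius."""
    weights = STATE_WEIGHTS[state]
    energy = 0.0
    for kind, (ex, ey) in elements:
        d2 = (point[0] - ex) ** 2 + (point[1] - ey) ** 2
        energy += weights[kind] * math.exp(-d2 / (2 * sigma ** 2))
    return energy

def best_position(grid, elements, state):
    """Global-layer sketch: pick the grid cell with the highest energy
    as the strategic position to move toward."""
    return max(grid, key=lambda p: energy_at(p, elements, state))
```

In this sketch the global layer would query `best_position` over a coarse court grid, while a local layer selects tactical actions once the agent is near a high-energy region; the actual HES-COMA layering and field parameters are defined in the full paper.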
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.