Enhanced Random Ensemble Mixture: Weight Referring and Merging (Open Access)
- Authors
- Seo, Yongphil; Sung, Yunsick
- Issue Date
- Feb-2026
- Publisher
- MDPI
- Keywords
- reinforcement learning; deep Q network; random ensemble mixture; weight referring and merging; learning stability; training efficiency
- Citation
- Applied Sciences, v.16, no.4, pp. 1-14
- Pages
- 14
- Indexed
- SCIE; SCOPUS
- Journal Title
- Applied Sciences
- Volume
- 16
- Number
- 4
- Start Page
- 1
- End Page
- 14
- URI
- https://scholarworks.dongguk.edu/handle/sw.dongguk/63917
- DOI
- 10.3390/app16041738
- ISSN
- 2076-3417
- Abstract
- Featured Application: The proposed DQN-based Enhanced Random Ensemble Mixture with Weight Referring and Merging was evaluated on the Catch Game environment. In this experimental setting, it showed more stable learning behavior than a standard DQN, suggesting that the method may be useful for reinforcement learning tasks where stable training is important.

Abstract: Reinforcement learning (RL) is widely used to learn sequential decision-making policies in complex environments. The deep Q-network (DQN) extends Q-learning with deep neural networks, enabling learning in high-dimensional state spaces. However, conventional DQN-based approaches can exhibit variability in learning stability and convergence speed even under similar training conditions. Random Ensemble Mixture (REM) improves stability by combining multiple Q-value estimates, but it typically requires running multiple models simultaneously, which increases computational cost. This paper proposes an enhanced DQN method that integrates REM with a Weight Referring and Merging (WRM) mechanism to improve training stability and efficiency. The proposed approach updates a single primary agent using standard DQN learning while maintaining diversity among auxiliary agents by selectively referring to and partially merging weights from the primary network. Q-values from the primary and auxiliary agents are then combined through REM to produce the final value estimate for action selection. Experiments in the Catch Game environment indicate that the proposed method reaches stable performance earlier than a baseline DQN and reduces training time by approximately 78% under the tested configuration. While the results are encouraging in this environment, further evaluation on additional benchmarks is required to assess broader applicability.
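The abstract describes the mechanism only at a high level. As a minimal sketch of the two ideas it names, the snippet below shows REM-style random convex mixing of Q-estimates for action selection and a WRM-style partial weight merge from the primary agent into the auxiliaries. The linear Q-approximators, the merge_ratio value, the merge schedule, and all dimensions are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's network architecture is not specified here.
STATE_DIM, N_ACTIONS, N_AUX = 4, 3, 2

# Linear Q-approximators stand in for the DQN networks:
# one primary agent plus N_AUX auxiliary agents.
primary_w = rng.normal(size=(STATE_DIM, N_ACTIONS))
aux_ws = [rng.normal(size=(STATE_DIM, N_ACTIONS)) for _ in range(N_AUX)]

def q_values(w, state):
    """Q(s, .) under a linear approximator."""
    return state @ w

def rem_mix(q_list):
    """REM: combine Q-estimates with a random convex combination
    (alphas are sampled per step and normalized to sum to 1)."""
    alphas = rng.random(len(q_list))
    alphas /= alphas.sum()
    return sum(a * q for a, q in zip(alphas, q_list))

def wrm_merge(primary, aux, merge_ratio=0.5):
    """WRM-style partial merge: pull an auxiliary agent's weights toward
    the primary network while retaining part of its own weights, so the
    auxiliaries stay diverse. merge_ratio is an assumed hyperparameter."""
    return merge_ratio * primary + (1.0 - merge_ratio) * aux

# One action-selection step: mix primary and auxiliary Q-values via REM.
state = rng.normal(size=STATE_DIM)
qs = [q_values(primary_w, state)] + [q_values(w, state) for w in aux_ws]
action = int(np.argmax(rem_mix(qs)))
print("selected action:", action)

# Periodically refer to / merge the primary weights into the auxiliaries;
# only the primary agent would be updated by standard DQN learning.
aux_ws = [wrm_merge(primary_w, w) for w in aux_ws]
```

In this reading, only one network incurs gradient updates while the random mixing still averages over several estimators, which is consistent with the abstract's claim of REM-like stability at reduced training cost.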
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- ETC > 1. Journal Articles
