Detail View
- Citations: Web of Science 0; Scopus 0

Abstract
Operating multi-unit combined heat and power (MUCHP) plants involves determining unit commitment (UC) and coupled heat and power dispatch under demand uncertainty and progressive equipment degradation. This paper proposes a reinforcement learning approach to jointly optimize UC, dispatch, and preventive maintenance (PM). Specifically, we develop a Proximal Policy Optimization (PPO)-based policy that shifts the computational burden to offline training, enabling near-real-time decisions during operation. The trained agent is evaluated on an hourly five-unit CHP system model based on operational data from a district heating plant in the Republic of Korea, using a full-year simulation. The robustness of the proposed method is assessed against demand forecast noise and structural system shifts covering reduced, expanded, homogeneous, and heterogeneous unit configurations. The experiments indicate that the proposed approach reduced the total operating cost by 4.69 to 8.35 percent compared to three heuristic baselines across the evaluated scenarios. Moreover, it mitigates supply shortages during high-volatility seasons through proactive pre-commitment and preserves asset health by distributing production loads evenly. These results indicate that integrating PM into operational planning improves both the economic efficiency and operational stability of MUCHP systems.
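The abstract describes a joint hourly decision over unit commitment, dispatch, and preventive maintenance (PM) per unit. As a rough illustration of that decision structure only, the sketch below encodes a per-unit action (off / on / PM) and a one-hour operating cost with startup costs, a shortage penalty, and degradation that PM resets. Every name, cost value, and the proportional-dispatch rule here is an illustrative assumption; this is not the paper's environment or its PPO implementation.

```python
# Toy sketch (assumption-laden): one-hour cost of a joint unit-commitment /
# dispatch / preventive-maintenance action for a multi-unit CHP plant.
from dataclasses import dataclass

@dataclass
class Unit:
    capacity: float       # max output (MW) -- illustrative
    marginal_cost: float  # cost per MW produced
    startup_cost: float   # cost incurred when turning a unit on
    wear: float = 0.0     # accumulated degradation proxy

def step_cost(units, committed_prev, action, demand,
              pm_cost=50.0, shortage_penalty=500.0, wear_rate=0.01):
    """action[i] in {0, 1, 2}: 0 = off, 1 = on, 2 = preventive maintenance.
    Dispatch is split proportionally to capacity among committed units
    (a simplification; the paper optimizes dispatch jointly)."""
    on = [i for i, a in enumerate(action) if a == 1]
    cap = sum(units[i].capacity for i in on)
    served = min(demand, cap)
    # Unserved demand is penalized (captures the "supply shortage" risk).
    cost = shortage_penalty * max(0.0, demand - cap)
    for i, a in enumerate(action):
        if a == 1:
            share = served * units[i].capacity / cap if cap else 0.0
            cost += units[i].marginal_cost * share
            if not committed_prev[i]:
                cost += units[i].startup_cost
            units[i].wear += wear_rate * share  # production degrades the unit
        elif a == 2:
            cost += pm_cost
            units[i].wear = 0.0  # preventive maintenance resets degradation
    return cost
```

In an RL formulation along the abstract's lines, a PPO policy would map the state (demand forecast, prior commitments, per-unit wear) to this per-unit action vector, and the negative of `step_cost` would serve as the reward, so the agent trades off startup and maintenance costs against shortage risk and degradation.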
Keywords
- Title
- A Deep Reinforcement Learning Approach for Multi-Unit Combined Heat and Power Scheduling with Preventive Maintenance Under Demand Uncertainty
- Authors
- Lee, Sangjun; Kwon, Iljun; Park, In-Beom; Kim, Kwanho
- Publication Date
- 2026-04
- Type
- Article
- Journal
- Energies
- Volume
- 19
- Issue
- 8
- Pages
- 1–30