Abstract:
With the increasing integration of renewable energy sources, power system dispatching faces growing complexity: traditional methods struggle to accommodate randomness on both the generation and load sides under multiple operational constraints, and innovative technologies are needed to address these challenges. This study proposes a deep reinforcement learning-based scheduling framework built on the proximal policy optimization (PPO) algorithm, which coordinates the optimization of conventional generating units and flexible loads to enhance economic efficiency, renewable energy integration capacity, and operational safety. The framework designs the state space, action space, and reward mechanism of the PPO agent to enable coordinated optimization of flexible loads and unit output, and dynamically models various operational constraints, significantly improving the adaptability of system dispatching. Simulation results on the IEEE 30-bus system demonstrate superior performance in economic efficiency, renewable energy integration, and system security, indicating strong potential for practical application.