Research on a Coordinated Scheduling Method for Flexible Loads in Power Systems Based on Deep Reinforcement Learning

Abstract: With the increasing penetration of renewable energy, power system dispatching faces growing complexity: traditional methods struggle to accommodate stochasticity on both the generation and load sides together with coupled operational constraints. This study proposes a deep reinforcement learning scheduling framework based on the proximal policy optimization (PPO) algorithm, which coordinates the optimization of conventional generating units and flexible loads to improve system economy, renewable energy accommodation, and operational security. The state space, action space, and reward mechanism are designed around the PPO algorithm so that flexible loads and unit outputs are optimized jointly; multiple operational constraints are modeled dynamically, improving the adaptability of the dispatch policy. Simulations on the IEEE 30-bus system show that the method performs well in terms of economic efficiency, renewable energy accommodation, and system security, indicating strong potential for practical deployment.
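To make the state/action/reward design described in the abstract concrete, the sketch below shows a toy single-bus dispatch environment in the usual reinforcement-learning interface. All numbers (unit limits, ramp rate, costs, penalty weights, the 10% flexible-load band) are illustrative assumptions, not values from the paper, and the environment omits the network constraints of the IEEE 30-bus case; a PPO agent such as the one in Stable-Baselines3 could be trained against an interface like this.

```python
import random

class DispatchEnv:
    """Toy single-bus dispatch environment (hypothetical sketch).

    State:  (renewable forecast, load demand, current unit output)
    Action: (unit output delta, flexible load shift), both continuous
    Reward: negative of generation cost, renewable curtailment penalty,
            and constraint-violation penalty -- mirroring the reward
            mechanism the abstract describes, with assumed weights.
    """
    P_MIN, P_MAX = 20.0, 100.0   # unit output limits (MW), assumed
    RAMP = 15.0                  # per-step ramp limit (MW), assumed
    COST = 30.0                  # fuel cost ($/MWh), assumed
    CURTAIL_PEN = 50.0           # penalty per MW of spilled renewables
    VIOL_PEN = 200.0             # penalty per MW of unmet demand

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.unit = 60.0
        self.renewable = self.rng.uniform(10.0, 40.0)
        self.load = self.rng.uniform(60.0, 90.0)
        return (self.renewable, self.load, self.unit)

    def step(self, d_unit, load_shift):
        # Enforce ramp and capacity constraints on the unit action.
        d_unit = max(-self.RAMP, min(self.RAMP, d_unit))
        self.unit = max(self.P_MIN, min(self.P_MAX, self.unit + d_unit))
        # Flexible load may shift at most 10% of demand (assumed band).
        shift = max(-0.1 * self.load, min(0.1 * self.load, load_shift))
        net_load = self.load + shift
        supply = self.unit + self.renewable
        curtail = max(0.0, supply - net_load)  # surplus renewables spilled
        unmet = max(0.0, net_load - supply)    # supply-demand violation
        reward = -(self.COST * self.unit
                   + self.CURTAIL_PEN * curtail
                   + self.VIOL_PEN * unmet)
        # Source-load randomness on both sides for the next step.
        self.renewable = self.rng.uniform(10.0, 40.0)
        self.load = self.rng.uniform(60.0, 90.0)
        return (self.renewable, self.load, self.unit), reward
```

The clipping inside `step` plays the role of the dynamically modeled operating constraints: infeasible actions are projected back into the feasible set before the reward is computed, so the agent only pays a penalty for violations it cannot avoid (unmet demand), not for ones the environment can repair.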

     
