
Lecture | Augmented Lagrangian-based Safe Reinforcement Learning Algorithm for Carbon-Oriented Optimal Scheduling of EV Aggregators

Posted: 2024-02-29

Speaker: Xiaoying Shi, Tsinghua University

Time: 9:00 - 10:00, February 29, 2024 (Beijing Time)

Location: Room 202, Yue-Kong Pao Library's Annex

Abstract

This paper proposes an augmented Lagrangian-based safe off-policy deep reinforcement learning (DRL) algorithm for the carbon-oriented optimal scheduling of electric vehicle (EV) aggregators in a distribution network. First, practical charging data are employed to formulate an EV aggregation model, and its flexibility in both emission mitigation and energy/power dispatching is demonstrated. Second, a two-layer optimization model is formulated for EV aggregators to participate in day-ahead optimal scheduling, which aims to minimize the total cost without exceeding a given carbon cap. Third, to tackle the nonlinear coupling between the carbon flow and the power flow, the bilevel model with the carbon cap constraint is reformulated as a constrained Markov decision process (CMDP). Finally, the CMDP is efficiently solved by the proposed augmented Lagrangian-based DRL algorithm built on the soft actor-critic (SAC) method. Comprehensive numerical studies on IEEE distribution test feeders demonstrate that the proposed approach achieves a favorable tradeoff between cost and emission mitigation with higher computational efficiency than existing DRL methods.
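The augmented Lagrangian treatment of the carbon-cap constraint can be illustrated with a small, self-contained sketch (not the authors' implementation): a toy convex objective stands in for the expected scheduling cost, a scalar inequality stands in for the carbon cap, and plain gradient descent replaces the SAC actor-critic inner update. All function names and parameter values below are illustrative assumptions.

```python
# Minimal sketch of the augmented Lagrangian outer loop for a CMDP-style
# problem: minimize f(x) subject to g(x) <= 0 (a stand-in for the carbon cap).
# In the paper this machinery wraps a SAC policy update; here a toy quadratic
# keeps the multiplier/penalty updates runnable on their own.

def f(x):            # stand-in for the expected scheduling cost
    return (x - 3.0) ** 2

def g(x):            # stand-in for "carbon emission minus carbon cap" <= 0
    return x - 1.0

def inner_minimize(lam, rho, x0, lr=0.01, steps=500):
    """Gradient descent on the augmented Lagrangian
    f(x) + (max(0, lam + rho*g(x))**2 - lam**2) / (2*rho),
    standing in for the SAC actor/critic (primal) update."""
    x = x0
    for _ in range(steps):
        grad_f = 2.0 * (x - 3.0)
        grad_pen = max(0.0, lam + rho * g(x)) * 1.0  # dg/dx = 1 here
        x -= lr * (grad_f + grad_pen)
    return x

lam, rho, x = 0.0, 10.0, 0.0
for _ in range(20):                       # outer (dual) iterations
    x = inner_minimize(lam, rho, x)       # primal step (policy update in the paper)
    lam = max(0.0, lam + rho * g(x))      # multiplier update for the carbon cap

print(f"x = {x:.3f}, lambda = {lam:.3f}, g(x) = {g(x):.3f}")
# Converges to x ~ 1.0, lambda ~ 4.0, i.e. the constraint is active at the optimum.
```

In this toy setting the multiplier settles at the KKT value of the binding constraint; in the paper's setting, the analogous multiplier weights the expected carbon-cap violation inside the SAC objective.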

 

Biography

 

Xiaoying Shi received the B.S. degree in Electrical Engineering from North China Electric Power University, China, in 2017, and the M.S. degree in Electrical Engineering from Tsinghua University, China, in 2020. She is currently pursuing the Ph.D. degree at Tsinghua University, co-supervised by Professor Yinliang Xu of Tsinghua University and Professor Javad Lavaei of the University of California, Berkeley. Her research interests include transportation electrification, power system optimization, and carbon-oriented energy systems.