
Electric Vehicle Charging Guidance Algorithm Based on Informer Multi-Agent Reinforcement Learning
  • Yanyu Zhang, Henan University
  • Chunyang Liu, Henan University
  • Zhiming Zhang, Henan University
  • Feixiang Jiao, International Joint Research Laboratory for Cooperative Vehicular Networks of Henan
  • Feng Huo, Institute of Process Engineering, Chinese Academy of Sciences
  • Xibeng Zhang, Henan University (Corresponding Author: xbzhang@henu.edu.cn)

Abstract

With the vigorous development of the electric vehicle (EV) industry, charging demand has surged. However, the relative lag in the construction of charging infrastructure has created a series of problems for drivers, such as difficulty in finding available charging stations and long charging wait times. To address this, this paper proposes an EV charging guidance framework based on an Informer network and Multi-Agent Reinforcement Learning (MARL) to achieve efficient EV charging guidance. First, charging stations are treated as independent agents; by integrating information from vehicles, charging stations, and traffic, the multi-objective optimization problem of EV charging guidance is transformed into a multi-agent reinforcement learning task. Then, an Actor-Critic algorithm combined with the Informer is designed: the Informer in the Critic network models the interactions between different charging stations, reducing the complexity of policy learning and enhancing coordination among agents. Finally, after computing the advantage function for each agent, the Actor network is updated to improve learning efficiency. The proposed algorithm was simulated and validated in two different EV charging scenarios. The simulation results show that, compared with several state-of-the-art methods, the algorithm achieved the best results in multi-objective optimization, demonstrating its superiority and practicality.
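The scheme described in the abstract (per-station actors, a centralized critic that attends over stations, advantage-based actor updates) can be sketched as follows. This is not the authors' implementation: standard multi-head self-attention stands in for the Informer's ProbSparse attention, and all dimensions, observations, and returns are toy placeholders chosen for illustration.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Per-station policy: maps a local observation to action probabilities."""
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, act_dim)
        )
    def forward(self, obs):
        return torch.softmax(self.net(obs), dim=-1)

class AttentionCritic(nn.Module):
    """Centralized critic: self-attention over station observations
    (a stand-in for the paper's Informer-based critic)."""
    def __init__(self, obs_dim, embed=32, heads=4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, embed)
        self.attn = nn.MultiheadAttention(embed, heads, batch_first=True)
        self.value = nn.Linear(embed, 1)
    def forward(self, all_obs):            # all_obs: (batch, n_agents, obs_dim)
        h = self.embed(all_obs)
        h, _ = self.attn(h, h, h)          # stations attend to each other
        return self.value(h).squeeze(-1)   # (batch, n_agents) per-agent values

# Toy rollout: 3 charging stations as agents.
n_agents, obs_dim, act_dim = 3, 5, 4
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = AttentionCritic(obs_dim)
opt = torch.optim.Adam(
    [p for a in actors for p in a.parameters()] + list(critic.parameters()),
    lr=1e-3,
)

obs = torch.randn(8, n_agents, obs_dim)    # batch of joint observations
returns = torch.randn(8, n_agents)         # placeholder Monte-Carlo returns
values = critic(obs)
advantage = (returns - values).detach()    # A = G - V(s)

# Advantage-weighted policy-gradient update for each actor.
actor_loss = 0.0
for i, actor in enumerate(actors):
    probs = actor(obs[:, i])
    actions = torch.multinomial(probs, 1).squeeze(-1)
    logp = torch.log(probs.gather(1, actions.unsqueeze(-1)).squeeze(-1) + 1e-8)
    actor_loss = actor_loss - (logp * advantage[:, i]).mean()
critic_loss = ((returns - values) ** 2).mean()
opt.zero_grad()
(actor_loss + critic_loss).backward()
opt.step()
```

The centralized-critic, decentralized-actor split shown here is the standard MARL pattern the abstract builds on; the paper's contribution is the Informer-based critic, which the plain attention layer above only approximates.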
05 Aug 2024: Submitted to International Journal of Robust and Nonlinear Control
05 Aug 2024: Submission Checks Completed
05 Aug 2024: Assigned to Editor
05 Aug 2024: Review(s) Completed, Editorial Evaluation Pending
08 Aug 2024: Reviewer(s) Assigned
04 Nov 2024: Editorial Decision: Revise Minor