Author : Yeuching Li
Publisher : Morgan & Claypool Publishers
ISBN 10 : 1636393020
Total Pages : 135 pages
Book Rating : 4.6/5 (363 downloads)
Book Synopsis: Deep Reinforcement Learning-based Energy Management for Hybrid Electric Vehicles by Yeuching Li
Download or read book Deep Reinforcement Learning-based Energy Management for Hybrid Electric Vehicles written by Yeuching Li and published by Morgan & Claypool Publishers. This book was released on 2022-02-14 with a total of 135 pages. Available in PDF, EPUB and Kindle.

Book excerpt: The urgent need for vehicle electrification and improved fuel efficiency has gained increasing attention worldwide. In response, hybrid vehicle systems have proven their value in both academic research and industry applications, and energy management plays a key role in taking full advantage of hybrid electric vehicles (HEVs). Many well-established energy management approaches, ranging from rule-based strategies to optimization-based methods, offer diverse options for achieving higher fuel economy. However, the research scope for energy management is still expanding with the development of intelligent transportation systems and improvements in onboard sensing and computing resources. Owing to the boom in machine learning, especially deep learning and deep reinforcement learning (DRL), research on learning-based energy management strategies (EMSs) is gaining momentum. These strategies have shown great promise not only in handling big data but also in generalizing previously learned rules to new scenarios without complex manual tuning.

Focusing on learning-based energy management with DRL at its core, this book begins with an introduction to the background of DRL in HEV energy management. The strengths and limitations of typical DRL-based EMSs are identified according to the types of state space and action space used in energy management. On that basis, value-based, policy-gradient-based, and hybrid-action-space-oriented energy management methods via DRL are discussed in turn. Finally, a general online integration scheme for DRL-based EMSs is described to bridge the gap between strategy learning in the simulator and strategy deployment on the vehicle controller.
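For readers unfamiliar with what a value-based DRL energy management strategy looks like in practice, the following is a minimal, illustrative sketch and is not taken from the book: a DQN-style agent picks a discrete engine power level from a state of battery SOC, vehicle speed, and power demand, with a reward that trades off fuel use against SOC deviation. The state variables, the toy powertrain model, the reward weights, and the drive cycle are all illustrative assumptions, and standard refinements such as a target network are omitted for brevity.

```python
# Minimal sketch (illustrative only) of a value-based, DQN-style energy
# management strategy for an HEV. All plant parameters and reward weights
# below are assumptions, not the book's actual models.
import random
import numpy as np
import torch
import torch.nn as nn

STATE_DIM = 3          # assumed state: [battery SOC, vehicle speed, power demand]
N_ACTIONS = 5          # assumed discrete engine-power levels (kW)
ENGINE_KW = np.linspace(0.0, 40.0, N_ACTIONS)

class QNet(nn.Module):
    """Small MLP mapping a state to one Q-value per discrete engine-power action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )
    def forward(self, x):
        return self.net(x)

def reward(fuel_gps, soc):
    # Illustrative reward: penalize fuel use and deviation from a 0.6 SOC target.
    return -fuel_gps - 10.0 * (soc - 0.6) ** 2

def step(soc, demand_kw, engine_kw, dt=1.0):
    # Toy plant model (assumption): battery supplies whatever the engine does not.
    batt_kw = demand_kw - engine_kw
    soc = float(np.clip(soc - batt_kw * dt / 3600.0, 0.0, 1.0))  # crude 1 kWh pack
    fuel_gps = 0.08 * engine_kw                                  # crude fuel-rate proxy
    return soc, fuel_gps

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
replay, eps, gamma, soc = [], 0.2, 0.99, 0.6

for t in range(2000):
    speed = 15.0 + 10.0 * np.sin(t / 50.0)        # synthetic drive cycle
    demand_kw = 5.0 + 0.8 * speed
    s = torch.tensor([soc, speed / 30.0, demand_kw / 40.0], dtype=torch.float32)
    # Epsilon-greedy action selection over discrete engine-power levels.
    a = random.randrange(N_ACTIONS) if random.random() < eps else int(qnet(s).argmax())
    soc, fuel = step(soc, demand_kw, ENGINE_KW[a])
    r = reward(fuel, soc)
    s2 = torch.tensor([soc, speed / 30.0, demand_kw / 40.0], dtype=torch.float32)
    replay.append((s, a, r, s2))

    if len(replay) >= 64:                          # one gradient step per env step
        batch = random.sample(replay, 64)
        S = torch.stack([b[0] for b in batch])
        A = torch.tensor([b[1] for b in batch])
        R = torch.tensor([b[2] for b in batch], dtype=torch.float32)
        S2 = torch.stack([b[3] for b in batch])
        with torch.no_grad():
            target = R + gamma * qnet(S2).max(dim=1).values
        q = qnet(S).gather(1, A.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad(); loss.backward(); opt.step()
```

Policy-gradient and hybrid-action-space variants, as covered in the book, would replace the discrete Q-network above with a stochastic or mixed discrete-continuous policy, but the overall loop of observing powertrain state, choosing a power split, and learning from a fuel-and-SOC reward stays the same.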