Multi-UAV Energy Consumption Minimization using Deep Reinforcement Learning: An Age of Information Approach
  • Jeena Kim, Myongji University - Natural Science Campus
  • Seunghyun Park, Hansung University
  • Hyunhee Park, Myongji University - Natural Science Campus (Corresponding Author: hhpark@mju.ac.kr)
Abstract

This letter introduces an approach for minimizing energy consumption in multi-UAV (unmanned aerial vehicle) networks using Deep Reinforcement Learning (DRL), with a focus on optimizing the Age of Information (AoI) in disaster environments. We propose a hierarchical UAV deployment strategy that facilitates cooperative trajectory planning, ensuring timely data collection and transmission while minimizing energy consumption. By formulating the inter-UAV network path-planning problem as a Markov Decision Process (MDP), we apply a Deep Q-Network (DQN) strategy to enable real-time decision-making that accounts for dynamic environmental changes, obstacles, and UAV battery constraints. Extensive simulations in both rural and urban scenarios demonstrate the effectiveness of employing a memory-access approach within the DQN framework, reducing energy consumption by up to 33.25% in rural settings and 74.20% in urban environments compared to non-memory approaches. By integrating AoI considerations with energy-efficient UAV control, this work offers a robust solution for maintaining fresh data in critical applications, such as disaster response, where ground-based communication infrastructure is compromised. The use of a replay memory approach, particularly the online history approach, proves crucial for adapting to changing conditions and optimizing UAV operations for both data freshness and energy consumption.
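The MDP-plus-replay-memory idea in the abstract can be sketched in a few lines. The snippet below is illustrative only: the letter applies a DQN, while here a tabular Q-table stands in for the network so the example stays dependency-free. The grid size, target location, reward values, and per-step energy penalty are all hypothetical, not taken from the letter.

```python
import random
from collections import deque

GRID = 5                      # hypothetical 5x5 service area
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # UAV moves: E, W, N, S
TARGET = (4, 4)               # assumed data-collection point

def step(state, a):
    """One MDP transition: move the UAV, paying an energy cost each step."""
    x, y = state
    dx, dy = ACTIONS[a]
    nxt = (min(max(x + dx, 0), GRID - 1), min(max(y + dy, 0), GRID - 1))
    done = nxt == TARGET
    reward = 10.0 if done else -1.0   # -1 models per-step energy use
    return nxt, reward, done

def train(episodes=400, gamma=0.9, lr=0.5, eps=0.2, batch=32):
    q = {(x, y): [0.0] * len(ACTIONS) for x in range(GRID) for y in range(GRID)}
    memory = deque(maxlen=1000)       # replay memory of past transitions
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            # epsilon-greedy action selection
            a = random.randrange(len(ACTIONS)) if random.random() < eps \
                else max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, r, done = step(s, a)
            memory.append((s, a, r, s2, done))
            # replay: sample stored transitions and update Q toward the target
            for ms, ma, mr, ms2, md in random.sample(memory, min(batch, len(memory))):
                target = mr if md else mr + gamma * max(q[ms2])
                q[ms][ma] += lr * (target - q[ms][ma])
            s = s2
            if done:
                break
    return q

random.seed(0)
q = train()

# Greedy rollout from the origin: the learned policy should reach the target.
s, steps = (0, 0), 0
while s != TARGET and steps < 20:
    s, _, _ = step(s, max(range(len(ACTIONS)), key=lambda i: q[s][i]))
    steps += 1
print(s, steps)
```

In the letter's setting the Q-table would be replaced by a neural network over a continuous UAV state (position, battery, AoI), but the replay-memory loop is the part the abstract highlights: sampling past transitions decorrelates updates and lets the agent adapt as the environment changes.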
07 Mar 2024: Submitted to Electronics Letters
11 Mar 2024: Submission Checks Completed
11 Mar 2024: Assigned to Editor
19 Mar 2024: Reviewer(s) Assigned
04 Apr 2024: Review(s) Completed, Editorial Evaluation Pending
20 May 2024: Editorial Decision: Revise Major
20 Jul 2024: 1st Revision Received
03 Aug 2024: Submission Checks Completed
03 Aug 2024: Assigned to Editor
03 Aug 2024: Reviewer(s) Assigned
03 Aug 2024: Review(s) Completed, Editorial Evaluation Pending
07 Oct 2024: Editorial Decision: Accept