Towards Efficient Task Offloading with Dependency Guarantees in Vehicular Edge Networks through Distributed Deep Reinforcement Learning

Title:
Towards Efficient Task Offloading with Dependency Guarantees in Vehicular Edge Networks through Distributed Deep Reinforcement Learning
Journal Title:
IEEE Transactions on Vehicular Technology
Publication Date:
11 April 2024
Citation:
H. Liu, W. Huang, D. I. Kim, S. Sun, Y. Zeng and S. Feng, "Towards Efficient Task Offloading With Dependency Guarantees in Vehicular Edge Networks Through Distributed Deep Reinforcement Learning," in IEEE Transactions on Vehicular Technology, vol. 73, no. 9, pp. 13665-13681, Sept. 2024, doi: 10.1109/TVT.2024.3387548.
Abstract:
The proliferation of computation-intensive and delay-sensitive applications in the Internet of Vehicles (IoV) poses great challenges to resource-constrained vehicles. To tackle this issue, Mobile Edge Computing (MEC), which enables offloading on-vehicle tasks to edge servers, has emerged as a promising approach. MEC jointly augments network computing capabilities and alleviates the resource burden on vehicles, and has therefore garnered substantial attention. Nevertheless, the efficacy of MEC depends heavily on the adopted offloading scheme, especially in the presence of complex subtask dependencies. Existing research has largely overlooked the crucial dependencies among subtasks, which significantly influence offloading decisions. This work schedules subtasks with guaranteed dependencies while minimizing system latency and energy costs in multi-vehicle scenarios. Firstly, we introduce a subtask priority scheduling method based on the Directed Acyclic Graph (DAG) topological structure to guarantee the execution order of subtasks, especially in scenarios with complex interdependencies. Secondly, in light of privacy concerns and limited information sharing, we propose an Optimized Distributed Computation Offloading (ODCO) scheme based on deep reinforcement learning (DRL), which alleviates the conventional requirement for extensive vehicle-specific information sharing to achieve optimal offloading performance. An adaptive k-step learning approach is further presented to enhance the robustness of the training process. Numerical experiments demonstrate the advantages of the proposed scheme in terms of substantial reductions in latency and energy cost and, more importantly, a faster convergence rate than existing state-of-the-art offloading schemes. For instance, ODCO achieves a system utility of approximately 0.80 within 300 episodes, a gain of about 0.05 over the distributed earliest-finish-time offloading (DEFO) algorithm, which requires around 500 episodes.
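For readers unfamiliar with DAG-based priority scheduling, the sketch below illustrates the general idea of deriving a dependency-respecting subtask order via topological sorting (Kahn's algorithm). The graph encoding, function name, and example tasks are illustrative assumptions only, not the paper's actual scheduling method.

# A minimal sketch of dependency-guaranteed subtask ordering via
# topological sorting (Kahn's algorithm). Names and the example
# graph are illustrative assumptions, not the paper's method.
from collections import deque

def topological_priority(deps):
    """Return an execution order respecting `deps`, which maps each
    subtask to the list of subtasks it depends on."""
    indegree = {t: len(d) for t, d in deps.items()}   # unmet prerequisites
    dependents = {t: [] for t in deps}                # reverse edges
    for task, prereqs in deps.items():
        for p in prereqs:
            dependents[p].append(task)

    ready = deque(t for t, n in indegree.items() if n == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:                  # release dependents
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    if len(order) != len(deps):
        raise ValueError("dependency cycle: input is not a DAG")
    return order

# Example: subtask c needs a and b; d needs c.
print(topological_priority({"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}))
# -> ['a', 'b', 'c', 'd']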
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the National Research Foundation, Singapore, and the Infocomm Media Development Authority under its Future Communications Research & Development Programme (FCP).
Grant Reference no.: FCP-ASTAR-TG-2022-003
Description:
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
0018-9545 (print)
1939-9359 (electronic)
Files uploaded:

xie-final-latex-of-vt-2024-01440.pdf (4.34 MB, PDF)