Model-Based Reinforcement Learning with LSTM Networks for Non-Prehensile Manipulation Planning

Title:
Model-Based Reinforcement Learning with LSTM Networks for Non-Prehensile Manipulation Planning
Journal Title:
2021 21st International Conference on Control, Automation and Systems (ICCAS)
Keywords:
Publication Date:
28 December 2021
Citation:
Fong, J., Campolo, D., Acar, C., & Tee, K. P. (2021). Model-Based Reinforcement Learning with LSTM Networks for Non-Prehensile Manipulation Planning. 2021 21st International Conference on Control, Automation and Systems (ICCAS). doi:10.23919/iccas52745.2021.9649940
Abstract:
Solving non-prehensile manipulation tasks requires domain knowledge involving various interactions such as switching contact dynamics between the robot and the object, and the object-environment interactions. This results in a switched nonlinear dynamic system governing the physical interactions between the object and the environment. In this paper, we propose an interactive learning framework that allows a robot to autonomously learn and model an unknown object's dynamics, as well as utilise the learned model for efficient planning in completing re-positioning tasks using non-prehensile manipulation. First, we model the overall object dynamics using a Long Short-Term Memory (LSTM) neural network. We then assimilate the learned model into the Monte Carlo Tree Search (MCTS) algorithm with a dense reward function to generate an optimal sequence of push actions for task completion. We demonstrate the framework on both a simulated and a real robot that pushes objects on a table.
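
To give a concrete picture of how a learned LSTM dynamics model can drive Monte Carlo search over push actions, the Python sketch below is an illustrative example only, not the authors' implementation. The class PushDynamicsLSTM, the dense_reward function, the discrete push set ACTIONS, the state and action dimensions, and the one-step Monte-Carlo action selection (a much reduced stand-in for full MCTS with tree expansion) are all assumptions made for this example; it uses PyTorch and presumes the model has already been trained on robot-object interaction data.

# Illustrative sketch (not the paper's code): an LSTM dynamics model whose
# rollouts score candidate push actions via simple Monte Carlo search.
import math
import random
import torch
import torch.nn as nn

STATE_DIM = 3   # assumed object pose on the table: (x, y, yaw)
ACTION_DIM = 2  # assumed push parameters: (dx, dy)
ACTIONS = [(dx * 0.05, dy * 0.05)
           for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]


class PushDynamicsLSTM(nn.Module):
    """Predicts the change in object pose from a (state, action) sequence."""

    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(STATE_DIM + ACTION_DIM, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, STATE_DIM)

    def forward(self, state_action_seq):
        # state_action_seq: (batch, seq_len, STATE_DIM + ACTION_DIM)
        out, _ = self.lstm(state_action_seq)
        return self.head(out[:, -1])  # predicted delta-pose for the last step


def dense_reward(state, goal):
    """Negative distance to the goal position, rewarding progress at every step."""
    return -math.dist(state[:2], goal[:2])


def rollout_value(model, state, goal, depth=5):
    """Estimate a state's value by rolling the learned model forward with random pushes."""
    state = list(state)
    total = 0.0
    for _ in range(depth):
        action = random.choice(ACTIONS)
        inp = torch.tensor([[state + list(action)]], dtype=torch.float32)
        with torch.no_grad():
            delta = model(inp)[0].tolist()
        state = [s + d for s, d in zip(state, delta)]
        total += dense_reward(state, goal)
    return total


def select_push(model, state, goal, simulations=50):
    """Score each push by averaged model rollouts and return the best one."""
    scores = {a: 0.0 for a in ACTIONS}
    visits = {a: 0 for a in ACTIONS}
    for _ in range(simulations):
        action = random.choice(ACTIONS)
        inp = torch.tensor([[list(state) + list(action)]], dtype=torch.float32)
        with torch.no_grad():
            delta = model(inp)[0].tolist()
        next_state = [s + d for s, d in zip(state, delta)]
        value = dense_reward(next_state, goal) + rollout_value(model, next_state, goal)
        visits[action] += 1
        scores[action] += (value - scores[action]) / visits[action]
    return max((a for a in ACTIONS if visits[a] > 0), key=lambda a: scores[a])


if __name__ == "__main__":
    model = PushDynamicsLSTM()  # in practice, trained on robot-object interaction data
    start, goal = (0.0, 0.0, 0.0), (0.3, 0.2, 0.0)
    print("chosen push:", select_push(model, start, goal))

The dense reward is what makes short rollouts informative here: every simulated push is scored by its progress toward the goal pose, rather than only at task completion.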
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the A*STAR - RIE 2020 - Advanced Manufacturing and Engineering
Grant Reference no. : A19E4a0101
Description:
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISBN:
978-89-93215-21-2
Files uploaded: