Fong, J., Campolo, D., Acar, C., & Tee, K. P. (2021). Model-Based Reinforcement Learning with LSTM Networks for Non-Prehensile Manipulation Planning. 2021 21st International Conference on Control, Automation and Systems (ICCAS). doi:10.23919/iccas52745.2021.9649940
Abstract:
Solving non-prehensile manipulation tasks requires domain knowledge of various interactions, such as the switching contact dynamics between the robot and the object, and the interactions between the object and the environment. This results in a switched nonlinear dynamic system governing the physical interactions between the object and the environment. In this paper, we propose an interactive learning framework that allows a robot to autonomously learn and model an unknown object's dynamics, and to use the learned model for efficient planning in completing re-positioning tasks via non-prehensile manipulation. First, we model the overall object dynamics using a Long Short-Term Memory (LSTM) neural network. We then assimilate the learned model into the Monte Carlo Tree Search (MCTS) algorithm with a dense reward function to generate an optimal sequence of push actions for task completion. We demonstrate the framework both in simulation and on a real robot that pushes objects on a table.
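The abstract's core idea, planning push sequences with MCTS over a learned dynamics model under a dense reward, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's LSTM dynamics model is replaced by a hypothetical deterministic push model `dynamics`, and the goal, action set, and reward shape are all assumptions.

```python
import math
import random

ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # illustrative unit pushes in four directions
GOAL = (3.0, 2.0)                             # assumed target object position

def dynamics(state, action):
    # Stand-in for the learned LSTM model: each push shifts the object by one unit.
    return (state[0] + action[0], state[1] + action[1])

def dense_reward(state):
    # Dense reward: negative Euclidean distance from the object to the goal.
    return -math.dist(state, GOAL)

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}            # action -> child Node
        self.visits, self.value = 0, 0.0

def ucb(child, parent_visits, c=1.4):
    # Upper Confidence Bound for tree selection; unvisited children are tried first.
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def rollout(state, depth=5):
    # Random simulation from `state`, scored by the dense reward at each step.
    total = 0.0
    for _ in range(depth):
        state = dynamics(state, random.choice(ACTIONS))
        total += dense_reward(state)
    return total

def mcts(root_state, iters=400, depth=5):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # Selection: descend via UCB until a node with untried actions is reached.
        while len(node.children) == len(ACTIONS):
            node = max(node.children.values(), key=lambda ch: ucb(ch, node.visits))
        # Expansion: simulate one untried push with the (learned) dynamics model.
        action = random.choice([a for a in ACTIONS if a not in node.children])
        child = Node(dynamics(node.state, action), parent=node)
        node.children[action] = child
        # Simulation: evaluate the new state plus a short random rollout.
        value = dense_reward(child.state) + rollout(child.state, depth)
        # Backpropagation: update statistics up to the root.
        while child is not None:
            child.visits += 1
            child.value += value
            child = child.parent
    # Return the most-visited first push at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

random.seed(0)
best = mcts((0.0, 0.0))
print(best)  # the selected first push, which should move the object toward GOAL
```

In the paper's framework the expansion and rollout steps would query the trained LSTM rather than an analytic model, and the loop would replan after each executed push.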
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR RIE 2020 Advanced Manufacturing and Engineering grant.
Grant Reference No.: A19E4a0101