SGSIN: Simultaneous Grasp and Suction Inference Network via Attention-Based Affordance Learning

Title:
SGSIN: Simultaneous Grasp and Suction Inference Network via Attention-Based Affordance Learning
Journal Title:
IEEE Transactions on Industrial Electronics
Keywords:
Publication Date:
31 October 2024
Citation:
W. Wang, H. Zhu and M. H. Ang, "SGSIN: Simultaneous Grasp and Suction Inference Network via Attention-Based Affordance Learning," in IEEE Transactions on Industrial Electronics, vol. 72, no. 5, pp. 4990-5000, May 2025
Abstract:
Universal handling of a wide and diverse range of objects is a grand and long-standing challenge for robot manipulation. While recent methods have achieved competitive results in grasping a specific set of objects, they still perform unsatisfactorily when faced with cluttered and diverse objects with different characteristics such as material, shape, and texture. One critical bottleneck stems from the limited applicability of a single gripper modality, which is not capable of handling complex real-world tasks. In this article, we propose a unified grasping inference framework for multimodality grasping, i.e., the simultaneous grasp and suction inference network (SGSIN). SGSIN utilizes the 3-D scene input to simultaneously predict feasible grasp poses for multiple gripper modalities and to determine the optimal primitive via a gripper selection module. SGSIN leverages a novel point-level affordance metric to indicate, in a unified manner, the probability of grasping success for each gripper. A novel backbone network is developed to extract strong 3-D feature representations, in which a residual block with point and channel attentions is proposed. Our multimodality framework is trained and evaluated on the GraspNet-1B dataset and achieves state-of-the-art performance on the respective grasp and suction tasks. Furthermore, we also evaluate the developed pipeline on real-world tasks in cluttered environments. The state-of-the-art performance demonstrates the effectiveness of our proposed framework.
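Note: the abstract mentions a backbone residual block with point and channel attentions. As a rough illustration only, and not the authors' published architecture, the following is a minimal PyTorch sketch of such a block operating on per-point features of shape (batch, channels, points); the class name, layer layout, and the reduction parameter are all hypothetical.

    # Hypothetical sketch: residual block with channel and point attention
    # over per-point features (B, C, N). Not the SGSIN implementation.
    import torch
    import torch.nn as nn

    class AttentiveResidualBlock(nn.Module):
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            # Shared point-wise MLP (1x1 convolutions over the point axis).
            self.mlp = nn.Sequential(
                nn.Conv1d(channels, channels, 1),
                nn.BatchNorm1d(channels),
                nn.ReLU(inplace=True),
                nn.Conv1d(channels, channels, 1),
                nn.BatchNorm1d(channels),
            )
            # Channel attention: pool over points, re-weight channels.
            self.channel_att = nn.Sequential(
                nn.AdaptiveAvgPool1d(1),
                nn.Conv1d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv1d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )
            # Point attention: one scalar weight per point.
            self.point_att = nn.Sequential(
                nn.Conv1d(channels, 1, 1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y = self.mlp(x)
            y = y * self.channel_att(y)  # re-weight feature channels
            y = y * self.point_att(y)    # re-weight individual points
            return torch.relu(x + y)     # residual (skip) connection

    # Usage: 2 point clouds, 128 feature channels, 1024 points each.
    block = AttentiveResidualBlock(128)
    feats = torch.randn(2, 128, 1024)
    print(block(feats).shape)  # torch.Size([2, 128, 1024])

The sketch only shows the general pattern of combining the two attention paths with a skip connection; the paper should be consulted for the actual block design.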
License type:
Publisher Copyright
Funding Info:
No specific funding was received for this research.
Description:
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
Print ISSN: 0278-0046 Electronic ISSN: 1557-9948
Files uploaded:
sgsin-ral-2022.pdf (2.25 MB, PDF)