Robotic dexterous grasping is a challenging problem due to the high degrees of freedom (DoF) and complex contacts of multi-fingered robotic hands. Existing deep reinforcement learning (DRL) based methods leverage human demonstrations to reduce the sample complexity that arises from the high-dimensional action space of dexterous grasping. However, less attention has been paid to hand-object interaction representations for high-level generalization. In this paper, we propose a novel geometric and spatial hand-object interaction representation, named DexRep, to capture dynamic object shape features and the spatial relations between hands and objects during grasping. DexRep comprises an Occupancy Feature for rough shapes within the sensing range of the moving hand, a Surface Feature for changing hand-object surface distances, and a Local-Geo Feature for the local geometric surface features most related to potential contacts. Based on the new representation, we propose a dexterous deep reinforcement learning method to learn a generalizable grasping policy, DexRepNet. Experimental results show that our method dramatically outperforms baselines using existing representations for robotic grasping, in both grasp success rate and convergence speed. It achieves a 93% grasping success rate on seen objects and grasping success rates above 80% on diverse objects of unseen categories, in both simulation and real-world experiments.
Contact: Qingtao Liu, Qi Ye
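For intuition, the sketch below shows one plausible way the three DexRep feature groups named in the abstract could be assembled into a single policy observation. It is a minimal illustration only: the function names, the sensing-sphere occupancy test, the nearest-neighbor distance and normal lookups, and all dimensions are assumptions for exposition, not the paper's implementation.

```python
# Hypothetical sketch of assembling a DexRep-style observation from the three
# feature groups described in the abstract. All names, dimensions, and the
# specific geometric computations are illustrative assumptions.
import numpy as np

def occupancy_feature(object_points, sensor_centers, radius=0.05):
    """Occupancy Feature (assumed form): for each sensing sphere attached to
    the moving hand, record whether any object point lies within its radius."""
    dists = np.linalg.norm(
        sensor_centers[:, None, :] - object_points[None, :, :], axis=-1
    )  # (num_sensors, num_points) pairwise distances
    return (dists.min(axis=1) < radius).astype(np.float32)

def surface_feature(hand_points, object_points):
    """Surface Feature (assumed form): distance from each sampled hand-surface
    point to the nearest object-surface point."""
    dists = np.linalg.norm(
        hand_points[:, None, :] - object_points[None, :, :], axis=-1
    )
    return dists.min(axis=1).astype(np.float32)

def local_geo_feature(hand_points, object_points, object_normals):
    """Local-Geo Feature (assumed form): surface normal of the object point
    closest to each hand point, a crude stand-in for richer local geometric
    descriptors near potential contacts."""
    dists = np.linalg.norm(
        hand_points[:, None, :] - object_points[None, :, :], axis=-1
    )
    nearest = dists.argmin(axis=1)
    return object_normals[nearest].reshape(-1).astype(np.float32)

def dexrep_observation(hand_points, sensor_centers, object_points, object_normals):
    """Concatenate the three feature groups into one observation vector that a
    DRL grasping policy could consume alongside proprioceptive state."""
    return np.concatenate([
        occupancy_feature(object_points, sensor_centers),
        surface_feature(hand_points, object_points),
        local_geo_feature(hand_points, object_points, object_normals),
    ])

# Toy usage with random geometry in place of real hand/object meshes.
rng = np.random.default_rng(0)
obs = dexrep_observation(
    hand_points=rng.normal(size=(16, 3)),
    sensor_centers=rng.normal(size=(8, 3)),
    object_points=rng.normal(size=(256, 3)),
    object_normals=rng.normal(size=(256, 3)),
)
print(obs.shape)  # (72,) = 8 occupancy + 16 surface + 16*3 local-geo
```

Under these assumptions, the observation stays a fixed-length vector regardless of object identity, which is what lets a single policy generalize across object shapes; the actual feature definitions and dimensions are specified in the paper itself.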