Dear Authors, I had a couple of questions regarding the code.
- The current dataset loader, GraspAnythingDataset in data/grasp_anything_data.py, only covers RGB images, masks, and ground-truth grasp rectangles. Your paper mentions that you use Grounding DINO to generate bounding boxes from language prompts and then pass these boxes to GraspSAM for grasp detection. Could you share the code files for that as well? I would like to reproduce your results on the GraspAnything++ dataset.
- I tried to train GraspSAM on the GraspAnything dataset. As I understand from model/utils.py, you use the ground-truth object masks to sample points as input prompts to GraspSAM. Is that correct?
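For context, here is a minimal sketch of what I understand the point sampling to look like: drawing random foreground pixels from the ground-truth mask to use as SAM-style point prompts. The helper name `sample_point_prompts` is my own, not from model/utils.py, so please correct me if the actual sampling differs.

```python
import numpy as np

def sample_point_prompts(mask, num_points=3, seed=None):
    """Sample (x, y) point prompts uniformly from the foreground of a binary mask.

    mask: (H, W) array where nonzero pixels belong to the object.
    Returns a (num_points, 2) array of (x, y) coordinates, SAM-style.
    """
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        raise ValueError("mask has no foreground pixels")
    # allow sampling with replacement if the object is smaller than num_points
    idx = rng.choice(len(xs), size=num_points, replace=len(xs) < num_points)
    return np.stack([xs[idx], ys[idx]], axis=1)

# toy example: a 5x5 mask with a 2x2 object
mask = np.zeros((5, 5), dtype=np.uint8)
mask[1:3, 2:4] = 1
points = sample_point_prompts(mask, num_points=2, seed=0)
# every sampled point lies inside the object region
assert all(mask[y, x] == 1 for x, y in points)
```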
Thank you.