Experiments on GraspAnything++ #3

Description

@vineet2104

Dear Authors, I have a couple of questions about the code.

  1. The current dataset loader, `GraspAnythingDataset` in `data/grasp_anything_data.py`, only loads RGB images, object masks, and ground-truth grasp rectangles. Your paper states that you use Grounding DINO to generate bounding boxes from language prompts, and then pass those boxes to GraspSAM for grasp detection. Could you share the code for that pipeline as well? I would like to reproduce your results on the GraspAnything++ dataset.

  2. I tried to train GraspSAM on the GraspAnything dataset. From my reading of `model/utils.py`, you sample point prompts from the ground-truth object masks as input to GraspSAM. Is that correct?
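To make question 2 concrete, here is a minimal sketch of what I understood the point sampling to be: draw a few pixel coordinates uniformly from the foreground of the GT mask and use them as SAM-style point prompts. The function name and signature below are my own, not the repo's API; please correct me if `model/utils.py` does something different.

```python
import numpy as np

def sample_points_from_mask(mask, num_points=5, rng=None):
    """Sample (x, y) point prompts uniformly from a binary object mask.

    NOTE: this is my guess at the sampling in model/utils.py, not the
    actual implementation; names and defaults are hypothetical.
    """
    rng = np.random.default_rng(rng)
    ys, xs = np.nonzero(mask)  # foreground pixel coordinates
    # Sample with replacement only if the mask has fewer pixels than requested
    idx = rng.choice(len(ys), size=num_points, replace=len(ys) < num_points)
    # SAM-style prompts are (x, y) pairs, one row per point
    return np.stack([xs[idx], ys[idx]], axis=1)
```

If this matches your approach, I am also curious how many points you sample per object and whether any negative (background) points are used.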

Thank you.
