Performance drop when training VQ-SAGE (Graph Tokenizer) compared to Vanilla SAGE #14

@Anyna918

Hi authors,

Thank you for the great paper and code.

I am currently trying to reproduce the results of VQGraph. I noticed that when training the Graph Tokenizer (the SAGE encoder with the VQ codebook), the classification performance is significantly lower than that of the standard Vanilla SAGE (the baseline reported in Table 1).

According to the paper (Table 4, "Only-VQ"), the VQ-enhanced Teacher is expected to perform better than the standard GNN. However, in my experiments (using the hyperparameters from Table 12), adding the VQ layer and the reconstruction losses seems to degrade the accuracy.
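For reference, here is a minimal sketch of the VQ step as I implemented it (my own simplification, not your exact code; the codebook size, embedding dimension, and `lambda_commit` are placeholder values). I add the returned `vq_loss`, together with the feature/edge reconstruction terms, to the node-classification cross entropy using the loss weights from Table 12:

```python
import torch
import torch.nn.functional as F

def vq_forward(z, codebook, lambda_commit=0.25):
    """Nearest-codeword quantization with a straight-through estimator."""
    # z: [N, d] node embeddings from the SAGE encoder
    # codebook: [K, d] learnable code vectors
    dist = torch.cdist(z, codebook)          # [N, K] pairwise L2 distances
    idx = dist.argmin(dim=1)                 # code assignment per node
    z_q = codebook[idx]                      # quantized embeddings, [N, d]
    # Codebook term pulls codes toward encoder outputs; commitment term
    # pulls encoder outputs toward their assigned codes.
    vq_loss = F.mse_loss(z_q, z.detach()) + lambda_commit * F.mse_loss(z, z_q.detach())
    z_q = z + (z_q - z).detach()             # straight-through gradient to the encoder
    return z_q, idx, vq_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    z = torch.randn(100, 64, requires_grad=True)          # stand-in for SAGE embeddings
    codebook = torch.nn.Parameter(torch.randn(512, 64))   # stand-in for the codebook
    z_q, idx, vq_loss = vq_forward(z, codebook)
    print(z_q.shape, idx.shape, vq_loss.item())
```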

How should I deal with this? Am I missing something in the training setup?
