Hi authors,
Thank you for the great paper and code.
I am currently trying to reproduce the results of VQGraph. I noticed that when training the Graph Tokenizer (the SAGE encoder with VQ codebook), the classification performance is significantly lower than the standard Vanilla SAGE (the baseline reported in Table 1).
According to the paper (Table 4, "Only-VQ"), the VQ-enhanced Teacher is expected to perform better than the standard GNN. However, in my experiments (using the parameters from Table 12), adding the VQ layer and reconstruction losses seems to degrade the accuracy.
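For context, by "the VQ layer" I mean the standard vector-quantization step: each node embedding from the SAGE encoder is assigned to its nearest codebook entry. A minimal NumPy sketch of my understanding (function and variable names are my own, not from the VQGraph repo):

```python
import numpy as np

def quantize(z, codebook):
    """Assign each node embedding to its nearest codebook entry (L2 distance).

    z:        (num_nodes, dim) node embeddings from the SAGE encoder
    codebook: (num_codes, dim) learnable code vectors
    Returns the quantized embeddings and the assigned token ids.
    """
    # squared L2 distance between every embedding and every code
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)  # one discrete token id per node
    return codebook[idx], idx

# toy check: two nodes, two codes
z = np.array([[0.9, 0.1], [0.0, 1.1]])
codebook = np.array([[1.0, 0.0], [0.0, 1.0]])
z_q, idx = quantize(z, codebook)
print(idx.tolist())  # [0, 1]
```

If my understanding of this step is wrong (e.g. if the lookup or the commitment/reconstruction losses are wired differently in the released code), please let me know.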
Could you advise on how to resolve this discrepancy? Am I missing a training detail or hyperparameter for the Graph Tokenizer?