
[CIFAR10] Fine-tuning accuracy is far below the reported value (56.9 % vs 81.0 %) #1

@KT-muscle

Description

Overview

Thank you for releasing the code!
Following the README, I fine-tuned DeiT-Small (DINO-pretrained) using the subset
data_selection/subsets/CIFAR10/Density_250_from_ActiveFT50.json
(i.e., 250 labelled images = 0.5 % of CIFAR-10).
However, I obtained Top-1 = 56.93 %, which is much lower than the 81.0 ± 1.2 % reported in the paper.
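
For reference, a quick sanity check on the subset file (a minimal sketch; it assumes the JSON is a flat list of integer sample indices, which may differ from the repo's actual format):

import json

# Assumption: the subset file is a JSON list of CIFAR-10 train-set indices.
with open("data_selection/subsets/CIFAR10/Density_250_from_ActiveFT50.json") as f:
    ids = json.load(f)

print(len(ids), len(set(ids)))                     # expected: 250 250
print(f"{len(ids) / 50000:.3%} of the train set")  # expected: 0.500%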

Reproduction commands

# -------- fine-tuning --------
python -m torch.distributed.launch --nproc_per_node=1 main.py \
  --data-set CIFAR10SUBSET \
  --subset_ids ../data_selection/subsets/CIFAR10/Density_250_from_ActiveFT50.json \
  --resume dino_deitsmall16_pretrain.pth \
  --output_dir Outputs/C10_250 \
  --epochs 1000 \
  --batch-size 512 \
  --lr 2.5e-4 \
  --eval_interval 50 \
  --clip-grad 2.0
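
To rule out a weight-loading problem with --resume, one quick check is to see how many checkpoint keys actually match the model (a minimal sketch; it assumes timm's deit_small_patch16_224 matches this repo's architecture, which may not hold):

import timm
import torch

# Load the DINO checkpoint; it may be a bare state dict or wrapped in a "model" key.
ckpt = torch.load("dino_deitsmall16_pretrain.pth", map_location="cpu")
state = ckpt.get("model", ckpt)

model = timm.create_model("deit_small_patch16_224", num_classes=10)
result = model.load_state_dict(state, strict=False)
# Only the classification head should be missing; anything else suggests a mismatch.
print("missing:", result.missing_keys)
print("unexpected:", result.unexpected_keys)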

# -------- evaluation --------
python main.py \
  --data-set CIFAR10 \
  --data-path data \
  --batch-size 512 \
  --resume Outputs/C10_250/best_checkpoint.pth \
  --eval \
  --save_metrics Outputs/C10_250/metrics.json
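
For completeness, the saved metrics file can be inspected directly (a minimal sketch; it assumes --save_metrics writes a JSON dict with an "acc1" field, which is a guess at the format):

import json

# Assumption: metrics.json is a dict containing top-1 accuracy under "acc1".
with open("Outputs/C10_250/metrics.json") as f:
    metrics = json.load(f)
print("Top-1:", metrics.get("acc1"))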
