Performance drop with 1 batch size per GPU #3

@zx-bit

Description

Excellent work! However, I ran into an issue while trying to reproduce your results. With 4 GPUs and a batch size of 2 per GPU, I successfully replicated the reported performance of about 39 mIoU. But with 8 GPUs and a batch size of 1 per GPU, the results were significantly worse, only around 20 mIoU. The only change I made was replacing BatchNorm with SyncBatchNorm. What might be the underlying cause of this gap, and how could it be resolved?
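For reference, the swap I made follows the standard PyTorch pattern below. The toy model is hypothetical (the issue does not show the repo's actual architecture); it is only a minimal sketch of converting BatchNorm layers to SyncBatchNorm so that, in a DDP run, normalization statistics are aggregated across the process group (8 GPUs x batch size 1 would then see an effective normalization batch of 8, same as 4 GPUs x batch size 2):

```python
import torch.nn as nn

# Hypothetical segmentation stem, used only to illustrate the conversion;
# the real model in this repo is not shown in the issue.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(inplace=True),
)

# Recursively replace every BatchNorm*d module with SyncBatchNorm.
# During distributed training, SyncBatchNorm computes mean/var across
# all processes in the (default) process group instead of per GPU.
sync_model = nn.SyncBatchNorm.convert_sync_batchnorm(model)

print(type(sync_model[1]).__name__)  # the BatchNorm2d layer is now SyncBatchNorm
```

Note that the synchronization only takes effect when the model is wrapped in `DistributedDataParallel` with an initialized process group; otherwise each SyncBatchNorm layer still normalizes over its local batch of 1.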
