Question on the last transition layer #65

@zlenyk

Description

Hi! Thank you for publishing this. I'm following the PyTorch implementation of DenseNet; specifically, I'm using densenet161 to extract features from images. I'm wondering why, in your implementation here, you add an extra transition layer after the last dense block, consisting of batch normalization and ReLU. I don't see any mention of those operations in the paper. Am I missing something? I'm asking because I'd like to understand how those operations influence the quality of the learned features when DenseNet is used not for classifying images but as a feature extractor.
Thanks!
