
More documentation on the technical improvements used for this data? #6

@BradNeuberg


On the main page README you mention:

"Are there any technical improvements used in this round compared to previous ones?
To train models for Australia we only had a few thousand building labels, which made it hard to rely only on supervised training. Typically we've used hundreds of thousands, or in the best case tens of millions, of building labels for training. To create a good and robust model for Australia we took advantage of self-supervised training and unsupervised domain adaptation techniques to leverage our training data from other countries and domains. We believe this is a good proof of concept for scaling building extraction to the whole world."

Do you have any papers or further details you can point to on the self-supervised and unsupervised domain adaptation techniques you used?
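For context on what I mean by self-supervised training: the README doesn't say which pretext task was used, but one common example is rotation prediction (RotNet-style), where a model is trained to classify how an unlabeled image patch has been rotated, so it learns useful features without any building annotations. A minimal NumPy sketch of the data-preparation side (purely illustrative, not the actual pipeline described here):

```python
import numpy as np

def rotation_pretext_batch(patch):
    """Given one unlabeled image patch (H, W), return its four rotated
    copies and the rotation-class labels (0: 0 deg, 1: 90 deg,
    2: 180 deg, 3: 270 deg). Training a network to predict these labels
    is a self-supervised pretext task requiring no human annotations."""
    views = [np.rot90(patch, k) for k in range(4)]
    labels = list(range(4))
    return views, labels
```

In a setup like this, the encoder trained on the pretext task would then be fine-tuned on the small set of labeled Australian buildings; I'd be curious whether something along these lines, or a contrastive approach, was used.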
