Difficulties creating a self-supervised learning routine #8345
SantiagoPendon started this conversation in General
Hi everyone (this is my first post here :) ).
I'm trying to pretrain a SwinUNETR from MONAI on blocks of shape (32, 32, 32), with one channel and a batch size of 32.
To pretrain it I want to use the SimCLR technique with the ContrastiveLoss function from MONAI. The routine I've put together looks roughly like this:
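(Simplified sketch; `train_loader`, the hyperparameters and the variable names below are placeholders for my actual script.)

```python
import torch
from monai.losses import ContrastiveLoss
from monai.networks.nets import SwinUNETR

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = SwinUNETR(
    img_size=(32, 32, 32),  # deprecated (and dropped) in recent MONAI releases
    in_channels=1,
    out_channels=1,         # reconstruct the single-channel block
    feature_size=48,
).to(device)

contrastive = ContrastiveLoss(temperature=0.5)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)


def train_one_epoch(train_loader):
    model.train()
    for batch in train_loader:                 # each batch holds two augmented views per block
        view1 = batch["view1"].to(device)      # (32, 1, 32, 32, 32), masked/augmented
        view2 = batch["view2"].to(device)      # second augmentation of the same blocks
        recon1 = model(view1)                  # inpainting-style reconstructions
        recon2 = model(view2)
        # contrastive embeddings from the flattened outputs (tutorial-style)
        z1 = torch.flatten(recon1, start_dim=1)
        z2 = torch.flatten(recon2, start_dim=1)
        loss = contrastive(z1, z2)             # the fused loss below adds the inpainting terms
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```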
The loss function fuses the inpainting and the contrastive tasks:
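(Again a simplified version; the class name and the weights are placeholders.)

```python
import torch
from monai.losses import ContrastiveLoss, SSIMLoss


class InpaintContrastLoss(torch.nn.Module):
    """L1 + SSIM on the reconstruction plus ContrastiveLoss on the two embeddings."""

    def __init__(self, w_l1=1.0, w_ssim=1.0, w_contrast=1.0, temperature=0.5):
        super().__init__()
        self.l1 = torch.nn.L1Loss()
        self.ssim = SSIMLoss(spatial_dims=3)   # assumes intensities scaled to [0, 1]
        self.contrastive = ContrastiveLoss(temperature=temperature)
        self.w_l1, self.w_ssim, self.w_contrast = w_l1, w_ssim, w_contrast

    def forward(self, recon, target, z1, z2):
        # inpainting terms: reconstruct the original, unmasked block
        inpaint = self.w_l1 * self.l1(recon, target) + self.w_ssim * self.ssim(recon, target)
        # contrastive term: pull the two views of the same block together
        return inpaint + self.w_contrast * self.contrastive(z1, z2)
```

In the loop above it replaces the plain `contrastive(z1, z2)` call, with the original (un-augmented) block as `target`.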
I have checked the augmentations and they look OK.
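For context, the two-view augmentation is along these lines (the hole sizes and probabilities are placeholders for my real values):

```python
from monai.transforms import Compose, CopyItemsd, RandCoarseDropoutd, RandCoarseShuffled

# Make two independently masked/shuffled views of the same (1, 32, 32, 32) block;
# the untouched "image" key is kept as the reconstruction target.
two_view_aug = Compose(
    [
        CopyItemsd(keys=["image"], times=2, names=["view1", "view2"]),
        RandCoarseDropoutd(keys=["view1"], prob=1.0, holes=4, spatial_size=6),
        RandCoarseShuffled(keys=["view1"], prob=0.8, holes=8, spatial_size=4),
        RandCoarseDropoutd(keys=["view2"], prob=1.0, holes=4, spatial_size=6),
        RandCoarseShuffled(keys=["view2"], prob=0.8, holes=8, spatial_size=4),
    ]
)
```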
I have used the loss with only SSIM and L1 and it works.
In my first attempt I tried using the hidden states from the encoder to create z1 and z2, but it didn't work well. In the end I decided to follow the MONAI tutorial:
https://github.com/Project-MONAI/tutorials/blob/main/self_supervised_pretraining/vit_unetr_ssl/ssl_train.ipynb
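For reference, that first attempt looked roughly like this (simplified; `model`, `view1` and `view2` are the same as in the sketch above):

```python
import torch


def encode(model, x):
    """Pooled bottleneck features from the Swin encoder as the contrastive embedding.

    Used as: z1, z2 = encode(model, view1), encode(model, view2); loss = contrastive(z1, z2)
    """
    hidden_states = model.swinViT(x)   # list of 5 feature maps from the SwinUNETR encoder
    bottleneck = hidden_states[4]      # (B, 16 * feature_size, 1, 1, 1) for 32^3 blocks
    return torch.flatten(bottleneck, start_dim=1)
```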
Please, can anybody give me any suggestion about what could be wrong? (At the moment the model does not make progress on the contrastive loss.)
Thanks a lot.