Hi, nice job! I was trying to replicate some of the results of your paper, and I have a couple of questions:
What data was used to train stage 1? I see that Qwen2vl_dataset seems to always return both an input image and a generated image. In which cases did you find https://github.com/PKU-YuanGroup/UniWorld-V1/blob/main/train_denoiser.py#L987 to be empty?
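For reference, here is a minimal sketch of the kind of check I have in mind, assuming the dataset yields dict-style samples with an `input_image` key (the field name and the way samples are accessed are assumptions on my part, not the repo's actual API):

```python
from collections import Counter

def probe_samples(dataset, n=100):
    """Count how many of the first n samples carry a real input (condition)
    image versus an empty one, to tell editing samples apart from pure
    text-to-image samples. Field name "input_image" is an assumption."""
    counts = Counter()
    for i in range(min(n, len(dataset))):
        sample = dataset[i]
        has_input = bool(sample.get("input_image"))  # assumed field name
        counts["with_input_image" if has_input else "text_to_image_only"] += 1
    return counts

# Example usage, assuming the dataset object is built as in train_denoiser.py:
# print(probe_samples(train_dataset))
```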
If stage 1 is not using SigLIP features, is it trained purely for generative tasks, i.e. a text prompt with no input image, where the task is to produce the output image?
I find this part a bit confusing, and I could not find any reference in the paper to the data used to train each stage. Could you please provide some additional details to help with reproducibility?
Thanks!