Release v2.0.0

@LinB203 released this 23 Oct 08:03
e679a26

🚀 Introducing UniWorld-V2: Reinforce Image Editing with Diffusion Negative-aware Finetuning and MLLM Implicit Feedback!
🌟 Surpassing GPT-Image-1 on multiple benchmarks, it demonstrates superior fine-grained control and stronger handling of complex language instructions! The new model, training framework, and evaluation results are now fully open-source!

✨ Key Highlights

  1. 🧠 We introduce UniWorld-R1, the industry's first post-training framework for image editing based on Reinforcement Learning (RL) policy optimization. It leverages our novel DiffusionNFT technique for more efficient training and compatibility with high-order samplers (see the first sketch after this list).
  2. 🏆 We pioneer the use of a Multi-modal Large Language Model (MLLM) as a training-free reward model. By reading fine-grained feedback from its output logits, we significantly improve the model's alignment with human intent (see the second sketch after this list).
  3. 🥇 UniWorld-V2 achieves new SOTA results, scoring an impressive 7.83 on GEdit-Bench (surpassing GPT-Image-1's 7.53) and leading on ImgEdit with 4.49, outperforming all known open and closed-source models.
  4. 🎨 We demonstrate unprecedented fine-grained controllability, including mastering complex artistic Chinese characters, achieving precise spatial editing with "Redbox Control," and rendering realistic global light & shadow fusion—capabilities that are challenging for traditional SFT models.
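
To give a rough feel for the RL post-training in item 1, the sketch below shows a reward-weighted flow-matching step over a group of sampled edits: high-reward samples are reinforced and low-reward ones suppressed. The function name, shapes, and advantage weighting are all illustrative assumptions; the released DiffusionNFT objective (which contrasts an implicit negative policy and works with high-order samplers) is more involved.

```python
import torch

def nft_style_loss(model, x1, cond, rewards):
    """Hedged sketch of negative-aware, reward-weighted flow matching.

    x1:      latents of a group of sampled edits for one prompt
    cond:    shared conditioning (instruction + source image features)
    rewards: scalar reward per sample, e.g. from an MLLM reward model
    """
    x0 = torch.randn_like(x1)                        # noise endpoints
    t = torch.rand(x1.size(0), device=x1.device)     # random times in [0, 1]
    tb = t.view(-1, *([1] * (x1.dim() - 1)))
    xt = (1 - tb) * x0 + tb * x1                     # linear interpolation
    v_pred = model(xt, t, cond)                      # predicted velocity
    # Per-sample flow-matching regression to the target velocity (x1 - x0).
    per_sample = ((v_pred - (x1 - x0)) ** 2).flatten(1).mean(dim=1)
    # Group-centered advantage: positives are reinforced, negatives
    # suppressed. This unclipped weighting is a simplification.
    adv = rewards - rewards.mean()
    return (adv * per_sample).mean()
```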

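For item 2, a minimal sketch of a training-free MLLM reward: ask a frozen MLLM whether the edit followed the instruction and read the softmax probability of a "yes" token from its next-token logits. The continuous probability gives finer-grained feedback than a hard yes/no verdict. The prompt wording, token choice, and HF-processor-style interface below are assumptions, not the paper's exact recipe.

```python
import torch

@torch.no_grad()
def mllm_reward(mllm, processor, image, instruction):
    """Hedged sketch: score an edited image with a frozen MLLM's logits."""
    prompt = (f"Does this image correctly follow the edit instruction: "
              f"'{instruction}'? Answer yes or no.")
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    logits = mllm(**inputs).logits[0, -1]            # next-token logits
    # Token ids depend on the tokenizer's vocabulary ("yes" vs "▁yes" etc.).
    yes_id = processor.tokenizer.convert_tokens_to_ids("yes")
    no_id = processor.tokenizer.convert_tokens_to_ids("no")
    # Softmax over the yes/no pair yields a smooth reward in [0, 1].
    return torch.softmax(logits[[yes_id, no_id]], dim=-1)[0].item()
```
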
🔭 Future Work

  1. Continue collecting data and explore joint training with Vision-Language Models (VLMs).
  2. Integrate higher-resolution semantic encoders or adopt VLM techniques like multi-scale image gridding to increase input image resolution.