🚀 Introducing UniWorld-V2: Reinforce Image Editing with Diffusion Negative-aware Finetuning and MLLM Implicit Feedback!
🌟 Surpassing GPT-Image-1 on multiple benchmarks, it delivers superior fine-grained control and stronger handling of complex instructions! The new model, training framework, and evaluation results are now fully open-source!
✨ Key Highlights
- 🧠 We introduce UniWorld-R1, the industry's first post-training framework for image editing based on Reinforcement Learning (RL) policy optimization. It leverages our novel DiffusionNFT (Diffusion Negative-aware Finetuning) technique for more efficient training and compatibility with high-order samplers (a simplified sketch follows this list).
- 🏆 We pioneer the use of a Multi-modal Large Language Model (MLLM) as a training-free reward model. By leveraging its output logits as fine-grained feedback (see the second sketch after this list), we significantly improve the model's alignment with human intent.
- 🥇 UniWorld-V2 achieves new SOTA results, scoring an impressive 7.83 on GEdit-Bench (surpassing GPT-Image-1's 7.53) and leading on ImgEdit with 4.49, outperforming all known open and closed-source models.
- 🎨 We demonstrate unprecedented fine-grained controllability, including mastering complex artistic Chinese characters, achieving precise spatial editing with "Redbox Control," and rendering realistic global light & shadow fusion—capabilities that are challenging for traditional SFT models.
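To make the DiffusionNFT highlight concrete, here is a minimal, simplified sketch of a negative-aware, reward-weighted flow-matching update: edits scored above the batch average are pulled closer, those below are pushed away, and because the objective is defined on the forward (noising) process rather than on a particular reverse trajectory, any sampler, including high-order ones, can be used at generation time. The function name, the `policy(x_t, t, cond)` signature, and the weighting scheme are illustrative assumptions, not the exact DiffusionNFT objective.

```python
import torch

def negative_aware_step(policy, optimizer, images, cond, rewards):
    """One simplified negative-aware update (assumed interfaces, not the exact DiffusionNFT loss)."""
    # Center rewards: above-average edits become positives, below-average ones negatives.
    weights = torch.tanh(rewards - rewards.mean())   # bounded per-sample weights in (-1, 1)

    # Forward (noising) process sample in rectified-flow style: x_t = (1 - t) * x0 + t * noise.
    t = torch.rand(images.size(0), device=images.device).view(-1, 1, 1, 1)
    noise = torch.randn_like(images)
    x_t = (1.0 - t) * images + t * noise
    target_v = noise - images                         # flow-matching velocity target

    pred_v = policy(x_t, t.flatten(), cond)           # assumed velocity-prediction interface
    per_sample = ((pred_v - target_v) ** 2).mean(dim=(1, 2, 3))

    # Negative-aware weighting: minimize the regression loss on positive samples,
    # push the model away from negative samples via their negative weight.
    loss = (weights * per_sample).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because sample generation and reward scoring are decoupled from this loss, rollouts can be produced with whatever (high-order) sampler is fastest, and only the scored results enter the update.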
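The MLLM implicit-feedback highlight can be sketched as follows: prompt a judge MLLM with the source image, the edited image, and the instruction, then read the soft probability of "Yes" versus "No" from its next-token logits as a scalar reward, so no separate reward model needs to be trained. The choice of Qwen2-VL via Hugging Face `transformers`, the prompt wording, and the yes/no scoring rule below are illustrative assumptions, not the exact recipe used in UniWorld-V2.

```python
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

MODEL_ID = "Qwen/Qwen2-VL-7B-Instruct"  # placeholder choice of judge model
processor = AutoProcessor.from_pretrained(MODEL_ID)
judge = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

@torch.no_grad()
def edit_reward(source_img, edited_img, instruction):
    """Score an edit as the judge's probability of answering 'Yes' (training-free reward)."""
    messages = [{
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": (
                f"The second image should be the first image edited as follows: {instruction}. "
                "Does the edited image follow the instruction faithfully? Answer Yes or No."
            )},
        ],
    }]
    prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = processor(
        text=[prompt], images=[source_img, edited_img], return_tensors="pt"
    ).to(judge.device)

    next_token_logits = judge(**inputs).logits[0, -1]  # logits of the first answer token
    yes_id = processor.tokenizer.encode("Yes", add_special_tokens=False)[0]
    no_id = processor.tokenizer.encode("No", add_special_tokens=False)[0]

    # Soft, fine-grained feedback: probability mass on "Yes" relative to "No".
    return torch.softmax(next_token_logits[[yes_id, no_id]], dim=-1)[0].item()
```

Reading the logits rather than a single generated token yields a continuous score, giving the policy-optimization loop a denser signal than a binary pass/fail judgment.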
🔭 Future Work
- Continue collecting data and exploring joint training with Vision-Language Models (VLMs).
- Integrate higher-resolution semantic encoders or adopt VLM techniques like multi-scale image gridding to increase input image resolution.