We propose ZeroGUI, a fully automated online reinforcement learning framework that enables GUI agents to train and adapt in interactive environments at zero human cost.
- 🚫 Zero Human Cost: Requires no handcrafted task annotations or rule-based reward designs.
- 🧠 VLM-based Automation: Both training tasks and rewards are generated by powerful VLMs.
- ♻️ Online Learning: Agents continuously learn from interacting with GUI environments.
- 📈 Significant Gains: +63% (Aguvis) and +14% (UI-TARS) relative improvements on OSWorld.
- 🧠 Automatic Task Generation: Proposes diverse, executable GUI tasks without human input.
- ✅ Automatic Reward Estimation: Assigns binary task rewards based on trajectory screenshots and employs a voting mechanism to avoid hallucinated success (see the sketch after this list).
- ♻️ Two-Stage Online RL: Combines training on generated tasks with test-time adaptation to continually improve the agent's performance (see the training-loop sketch below).
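To make the reward-voting idea concrete, here is a minimal Python sketch. It is illustrative only: `query_vlm_judge`, `VLM_JUDGE_PROMPT`, and the majority-vote threshold are hypothetical stand-ins, not this repository's actual API; see the paper for the exact voting rule.

```python
# Minimal sketch of VLM-based reward voting. `query_vlm_judge` and the
# prompt below are hypothetical stand-ins, not this repository's API.
from collections import Counter

VLM_JUDGE_PROMPT = (
    "Task: {task}\n"
    "Given the final screenshots of the agent's trajectory, "
    "answer SUCCESS or FAILURE."
)

def query_vlm_judge(task: str, screenshots: list) -> str:
    """Hypothetical single call to a VLM judge; returns 'SUCCESS' or 'FAILURE'."""
    raise NotImplementedError("stand-in for a real VLM API call")

def estimate_reward(task: str, screenshots: list, num_votes: int = 5) -> float:
    """Binary reward via voting over repeated VLM judgments.

    Querying the judge several times and requiring a majority of SUCCESS
    votes suppresses one-off hallucinated successes from a single query.
    """
    votes = Counter(query_vlm_judge(task, screenshots) for _ in range(num_votes))
    return 1.0 if votes["SUCCESS"] > num_votes // 2 else 0.0
```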
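Similarly, a hedged sketch of the two-stage loop, reusing `estimate_reward` from above. `propose_tasks`, `run_episode`, and `agent.update` are hypothetical placeholders for the OpenRLHF-based training code, not real entry points in this repo.

```python
# Illustrative two-stage online RL loop; all helper names are hypothetical.

def propose_tasks(vlm, env, n: int = 16) -> list:
    """Hypothetical: ask a VLM to propose n diverse, executable GUI tasks."""
    raise NotImplementedError

def run_episode(agent, env, task):
    """Hypothetical: roll out the agent on one task, returning a trajectory
    object that carries the screenshots captured along the way."""
    raise NotImplementedError

def zerogui_train(agent, env, vlm, test_tasks, rounds: int = 3):
    # Stage 1: online RL on automatically generated tasks.
    for _ in range(rounds):
        tasks = propose_tasks(vlm, env)
        trajs = [run_episode(agent, env, t) for t in tasks]
        rewards = [estimate_reward(t, tr.screenshots)
                   for t, tr in zip(tasks, trajs)]
        agent.update(trajs, rewards)  # e.g. a policy-gradient step
    # Stage 2: test-time adaptation -- the same loop on the target tasks,
    # still supervised only by VLM-estimated rewards, never human labels.
    trajs = [run_episode(agent, env, t) for t in test_tasks]
    rewards = [estimate_reward(t, tr.screenshots)
               for t, tr in zip(test_tasks, trajs)]
    agent.update(trajs, rewards)
```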
| Base Model | Environment | 🤗 Link |
|---|---|---|
| UI-TARS-7B-DPO | OSWorld | ZeroGUI-OSWorld-7B |
| UI-TARS-7B-DPO | AndroidLab | ZeroGUI-AndroidLab-7B |
If you find this work helpful in your research, please consider citing:
@article{yang2025zerogui,
title={ZeroGUI: Automating Online GUI Learning at Zero Human Cost},
author={Yang, Chenyu and Su, Shiqian and Liu, Shi and Dong, Xuan and Yu, Yue and Su, Weijie and Wang, Xuehui and Liu, Zhaoyang and Zhu, Jinguo and Li, Hao and Wang, Wenhai and Qiao, Yu and Zhu, Xizhou and Dai, Jifeng},
journal={arXiv preprint arXiv:2505.23762},
year={2025}
}
Our code is built with reference to the following projects: OpenRLHF, UI-TARS, AGUVIS, OSWorld, and AndroidLab.