DALL-E compatible image generation endpoint #292
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
    return OmniRequestOutput.from_diffusion(
        request_id=request_id,
        images=images,
        prompt=prompt,
        metrics={
Propagate diffusion errors instead of returning empty success
If DiffusionEngine.step() raises (e.g., during preprocessing/postprocessing) it returns None, but AsyncOmniDiffusion.generate() doesn’t treat that as a failure—it always falls through to build an OmniRequestOutput and the HTTP handlers will return 200 with an empty data array. That masks generation failures and gives clients a successful response even when no images were produced. This path should detect a None/empty result and surface an error instead of returning success.
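A minimal sketch of the kind of guard being suggested, assuming generate() has the engine result in hand before building the output; the helper name, error type, and surrounding variables here are illustrative, not the PR's actual code:

```python
from typing import Optional, Sequence


def require_images(images: Optional[Sequence], request_id: str) -> Sequence:
    """Raise instead of letting an empty result turn into a 200 response.

    Illustrative helper only; the real fix in AsyncOmniDiffusion.generate()
    may use a different error type or propagate the original exception.
    """
    if not images:
        raise RuntimeError(
            f"diffusion generation produced no images for request {request_id}"
        )
    return images
```

generate() could run the engine result through a check like this before calling OmniRequestOutput.from_diffusion(), so the HTTP handler maps the failure to an error response rather than returning a success with an empty data array.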
Force-pushed from 48fee5a to 65ab272
We decided in the maintainer's call, with helpful input from Roger Wang (thank you!), to first start with a single endpoint, /v1/images/generations -- I'll put that together as a next iteration.
Force-pushed from 3cf6521 to c981e03
Force-pushed from c981e03 to f540b9e
Alright -- I've gone ahead with a refactor on this PR to address comments from Thursday's maintainer's call. The gist is that I reduced it down to just the /v1/images/generations endpoint and removed the image edit endpoint. There's still a lot that goes into the basis of that single endpoint, plus a fair amount of testing and docs, so I broke it out into three commits with messages tagged [docs], [tests], and [feature] to make it a little easier to review. Appreciate the input!
hsliuustc0106 left a comment
Please also align with #274.
Signed-off-by: Chenguang ZHENG <[email protected]>
…ge_edit Signed-off-by: Chenguang ZHENG <[email protected]>
This commit addresses review comments and fixes the Read the Docs build:
* Fix diffusion image output handling (PR comment from chatgpt-codex-connector)
* Remove --num-inference-steps from server start examples
* Remove unnecessary try/except block around get_hf_file_to_dict import
* Add async_diffusion to mkdocs exclude list to prevent vllm import during doc build
Signed-off-by: dougbtv <[email protected]>
Force-pushed from f540b9e to ffe73eb
Add comprehensive documentation for the OpenAI DALL-E compatible image generation API with inline examples and model profiles. Signed-off-by: dougbtv <[email protected]>
Add 29 comprehensive tests covering generation endpoints, model profiles, request validation, and error handling. Signed-off-by: dougbtv <[email protected]>
Implement /v1/images/generations endpoint with:
- AsyncOmniDiffusion integration for text-to-image generation
- Model profile system for per-model defaults and constraints
- Request/response protocol matching OpenAI DALL-E API
- Support for Qwen-Image and Z-Image-Turbo models
Signed-off-by: dougbtv <[email protected]>
Force-pushed from ffe73eb to a434a65
I've got the branch rebased on main, and I've incorporated the style used in #274 into my docs update. Thanks for letting me know!
quick overview.
NOTE: This depends on the diffusion online serving PR (#259) and builds on it.
cc: @fake0fan (thanks for getting the work off to a great start in 259!)
Example client implementation @ https://github.com/dougbtv/comfyui-vllm-omni/
review tips.
Again, this relies on the work in #259, so until that lands this branch carries commits on top of it.
When reviewing, I recommend going commit by commit; the changes are broken into [docs], [testing], and [feature] commits so you can isolate just those changes.
The other commits are placeholders for #259.
design thoughts.
The idea here is to build on the async API endpoint work that fake0fan did using the OpenAI completions endpoint, but to add a dedicated diffusion endpoint.
The thought is to add the endpoint, plus a per-model mapping for adding new model support, so that the endpoint can be tuned per model; a rough sketch of what I mean follows.
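The field names and values here are entirely illustrative; the actual profile definition in this PR may look different:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelProfile:
    """Per-model defaults and constraints for the generations endpoint.

    Illustrative sketch only; not the actual profile class in the PR.
    """
    default_size: str
    supported_sizes: tuple[str, ...]
    default_num_inference_steps: int


# Registry keyed by model name; entries and values are examples,
# not the defaults shipped in this PR.
MODEL_PROFILES: dict[str, ModelProfile] = {
    "Qwen-Image": ModelProfile(
        default_size="1024x1024",
        supported_sizes=("512x512", "1024x1024"),
        default_num_inference_steps=50,
    ),
    "Z-Image-Turbo": ModelProfile(
        default_size="1024x1024",
        supported_sizes=("1024x1024",),
        default_num_inference_steps=8,
    ),
}
```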
API endpoint parameters are more easily validated, using Pydantic, than parameters inlined into the completions prompt string. While I think it's reasonable to expect image generation from a completions endpoint when serving multi-modal models, it would be nice to have an endpoint where the parameters can be validated; see the sketch below.
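For example, a minimal Pydantic sketch of the request body, assuming Pydantic v2 and field names mirroring the OpenAI images API rather than the exact protocol class in this PR:

```python
from typing import Literal, Optional

from pydantic import BaseModel, Field


class ImageGenerationRequest(BaseModel):
    """Sketch of a /v1/images/generations request body; fields are
    modeled on the OpenAI images API and may not match this PR exactly."""

    prompt: str = Field(..., min_length=1)
    model: Optional[str] = None
    n: int = Field(default=1, ge=1, le=10)
    size: Optional[str] = Field(default=None, pattern=r"^\d+x\d+$")
    response_format: Literal["url", "b64_json"] = "b64_json"
```

With a schema like this, a malformed size or an out-of-range n is rejected by the framework's validation layer before any engine work happens, instead of being silently ignored inside a prompt string.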
...and I want to use it!
overview.
[Feature] Add OpenAI DALL-E compatible image generation API
Builds on @fake0fan's diffusion online serving implementation to provide
a production-ready, OpenAI-compatible image generation API. Implements
the DALL-E /v1/images/generations endpoint with full async support and
proper error handling.
This implementation focuses on generation-only (not editing) to keep
the initial PR manageable while maintaining full functionality and
extensibility.
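As a hedged usage sketch against a locally served model (model name, port, prompt, and output path are placeholders, and exact parameter defaults may differ from what this PR ships):

```python
# Example client call to the /v1/images/generations endpoint using the
# official openai client pointed at a local vLLM-Omni server.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

result = client.images.generate(
    model="Qwen-Image",  # placeholder model name
    prompt="a watercolor fox in a snowy forest",
    size="1024x1024",
    response_format="b64_json",
)

with open("output.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```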
OpenAI DALL-E API Compatibility:
Unified Async Server:
vllm serve <model> --omni command for all diffusion models
Model Support (via Model Profiles):
Features:
Implementation Files:
Modified:
Built on @fake0fan's excellent diffusion online serving work. This PR
adds the DALL-E compatible API layer with full validation, error
handling, and production-ready features while keeping the scope focused
on generation to facilitate review.