feat(foundation): implement Gemini streaming support#1274

Merged
lijingrs merged 1 commit into mofa-org:main from Bhanudahiyaa:feature/gemini-streaming
Mar 16, 2026

Conversation

Contributor

@Bhanudahiyaa Bhanudahiyaa commented Mar 15, 2026

Summary

Implements SSE streaming support for the Gemini LLM provider (GeminiProvider::chat_stream) in the mofa-foundation crate. #1272

Motivation

This aligns with GSoC 2026 open task #30 (Streaming response optimization) and Idea #3 (Cognitive Compute Mesh / multi-provider support). Previously, the Gemini chat_stream function returned an explicit ProviderNotSupported error; it now uses reqwest and futures::stream::unfold to parse Gemini SSE streams natively.
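The core difficulty an unfold-based parser handles is that network chunks can split an SSE line mid-way, so a partial-line buffer must be carried across chunks. Below is a minimal, std-only synchronous stand-in for that state machine (the real implementation would drive this from reqwest's async byte stream); all names here are hypothetical illustrations, not code from the PR:

```rust
/// Sync stand-in for the stream::unfold state: a partial-line buffer is
/// carried across incoming chunks so split SSE lines are reassembled.
struct SseLines<I> {
    chunks: I,   // incoming network chunks (already UTF-8 decoded)
    buf: String, // bytes of a not-yet-complete line
}

impl<I: Iterator<Item = String>> Iterator for SseLines<I> {
    type Item = String; // payload of one complete `data: ...` line

    fn next(&mut self) -> Option<String> {
        loop {
            // Emit every complete line currently in the buffer.
            if let Some(pos) = self.buf.find('\n') {
                let line: String = self.buf.drain(..=pos).collect();
                if let Some(payload) = line.trim_end().strip_prefix("data: ") {
                    return Some(payload.to_string());
                }
                continue; // blank separators between SSE events
            }
            // Need more input: pull the next chunk or finish.
            match self.chunks.next() {
                Some(chunk) => self.buf.push_str(&chunk),
                None => return None,
            }
        }
    }
}

fn main() {
    // The second event is split across two network chunks.
    let chunks = vec![
        "data: {\"a\":1}\n\ndata: {\"b\"".to_string(),
        ":2}\n".to_string(),
    ];
    let lines: Vec<String> =
        SseLines { chunks: chunks.into_iter(), buf: String::new() }.collect();
    assert_eq!(lines, vec!["{\"a\":1}", "{\"b\":2}"]);
    println!("{:?}", lines);
}
```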

Changes

  1. Implemented parse_gemini_sse to parse the newline-delimited JSON events of the alt=sse response format.
  2. Extracted shared request body logic into build_request_body.
  3. Updated GeminiProvider::chat_stream to connect to streamGenerateContent?alt=sse.
  4. Added 6 synthetic unit tests to cover SSE parsing, configuration builders, and finish reasons.
  5. Set supports_streaming to true and updated the ModelCapabilities accordingly.
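For illustration, the alt=sse format delivers each event as a `data: {json}` line whose payload carries candidate text parts. A std-only sketch of the extraction step follows; the actual parse_gemini_sse presumably uses a real JSON parser (e.g. serde_json), whereas the naive string search here only shows the event shape and would break on escaped quotes:

```rust
/// Minimal sketch: pull candidate text out of a Gemini alt=sse body.
/// Not the PR's implementation; naive string search for illustration only.
fn extract_sse_text(body: &str) -> Vec<String> {
    body.lines()
        // Each SSE event arrives as a `data: {json}` line.
        .filter_map(|line| line.strip_prefix("data: "))
        .filter_map(|json| {
            // Event shape (abridged):
            // {"candidates":[{"content":{"parts":[{"text":"Hello"}]}}]}
            let key = "\"text\":\"";
            let start = json.find(key)? + key.len();
            let end = start + json[start..].find('"')?;
            Some(json[start..end].to_string())
        })
        .collect()
}

fn main() {
    let body = concat!(
        "data: {\"candidates\":[{\"content\":{\"parts\":[{\"text\":\"Hello\"}]}}]}\n",
        "\n",
        "data: {\"candidates\":[{\"content\":{\"parts\":[{\"text\":\" world\"}]}}]}\n",
    );
    assert_eq!(extract_sse_text(body).concat(), "Hello world");
    println!("{}", extract_sse_text(body).concat());
}
```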

Related Issues

Closes #1272

Testing

  • Added unit tests
  • Added integration tests
  • Tested locally (Passed cargo test -p mofa-foundation, cargo clippy, and cargo fmt)
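As an example of the kind of synthetic test covering finish reasons, here is a hedged sketch mapping Gemini's finishReason strings (STOP, MAX_TOKENS, SAFETY, and RECITATION are documented Gemini API values) onto a hypothetical provider-agnostic enum; the enum and function names are illustrative, not taken from the PR:

```rust
/// Hypothetical provider-agnostic finish reason for illustration.
#[derive(Debug, PartialEq)]
enum FinishReason {
    Stop,
    MaxTokens,
    Safety,
    Other(String), // pass through anything unrecognized
}

/// Map a raw Gemini finishReason string to the shared enum.
fn map_finish_reason(raw: &str) -> FinishReason {
    match raw {
        "STOP" => FinishReason::Stop,
        "MAX_TOKENS" => FinishReason::MaxTokens,
        "SAFETY" => FinishReason::Safety,
        other => FinishReason::Other(other.to_string()),
    }
}

fn main() {
    assert_eq!(map_finish_reason("STOP"), FinishReason::Stop);
    assert_eq!(
        map_finish_reason("RECITATION"),
        FinishReason::Other("RECITATION".into())
    );
    println!("ok");
}
```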

Checklist

  • I have read the CONTRIBUTING.md document.
  • I have run cargo fmt --check and cargo clippy.
  • I have added tests to cover my changes.
  • I have updated the documentation accordingly.

@Bhanudahiyaa Bhanudahiyaa marked this pull request as ready for review March 15, 2026 19:15
Contributor Author

Bhanudahiyaa commented Mar 16, 2026

Hi @BH3GEI @lijingrs
This PR implements SSE streaming support for the Gemini provider in mofa-foundation, enabling GeminiProvider::chat_stream through the streamGenerateContent?alt=sse endpoint.

This contribution aligns with Open Task #30 (Streaming response optimization) and supports the multi-provider inference abstraction described in Idea 3 (Cognitive Compute Mesh).

Please let me know if the streaming abstraction or parser structure should be adjusted to better fit the provider interface. Happy to iterate based on feedback.

@lijingrs lijingrs merged commit acfc06a into mofa-org:main Mar 16, 2026


Development

Successfully merging this pull request may close these issues.

Feature: Implement streaming support for Gemini LLM Provider
