
Releases: shcherbak-ai/contextgem

v0.19.0

09 Sep 13:16
d92e69a

Added

  • Tool calling support in DocumentLLM.chat(...).
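The release note names the feature but not its shape. Below is a minimal sketch of the general tool-calling pattern such a chat method drives; the registry, `dispatch_tool_call`, and `get_word_count` names are illustrative stand-ins, not ContextGem's API: the model requests a tool by name with JSON arguments, the caller executes it, and the result is returned to the conversation.

```python
import json

# Registry mapping tool names to plain Python callables (illustrative).
TOOLS = {
    "get_word_count": lambda text: len(text.split()),
}

def dispatch_tool_call(call: dict) -> str:
    """Execute one tool call of the form {"name": ..., "arguments": {...}}
    and serialize the result so it can be fed back to the model."""
    fn = TOOLS[call["name"]]
    result = fn(**call["arguments"])
    return json.dumps({"result": result})

# Simulated model output requesting a tool:
call = {"name": "get_word_count", "arguments": {"text": "contextgem extracts data"}}
print(dispatch_tool_call(call))  # -> {"result": 3}
```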

v0.18.0

01 Sep 21:08
06d322f

Added

  • Chat: Added optional chat_session parameter (accepts a ChatSession) to preserve message history across turns in DocumentLLM.chat(). When this parameter is omitted, the chat is single-turn, with no message history carried over.
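What a history-preserving session buys you can be sketched with a plain stand-in (this `SimpleSession` is illustrative, not ContextGem's ChatSession implementation): each turn's messages are appended, so later prompts see the earlier context.

```python
class SimpleSession:
    """Accumulates chat messages across turns, OpenAI-style role dicts."""

    def __init__(self):
        self.messages = []  # grows with every turn

    def add_turn(self, user_msg: str, assistant_msg: str) -> None:
        self.messages.append({"role": "user", "content": user_msg})
        self.messages.append({"role": "assistant", "content": assistant_msg})

session = SimpleSession()
session.add_turn("What is clause 3 about?", "It covers termination.")
session.add_turn("And clause 4?", "Confidentiality.")
print(len(session.messages))  # 4 messages carried into the next turn
```

Without such a session object, each call would start from an empty message list, which is the single-turn behavior described above.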

v0.17.1

26 Aug 21:49
243824f

Changed

  • DocxConverter: Conversion speed improved by ~2X, significantly reducing processing time for DOCX files.

v0.17.0

24 Aug 17:00
d3312d5

Added

  • Multimodal LLM roles ("extractor_multimodal" and "reasoner_multimodal") to support extraction of multimodal document-level concepts from both text and images. Previously, only text and vision roles were supported, which required choosing either text or image context for extraction, not both.
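The broadened context selection can be pictured as a role-to-modality mapping (an illustrative table, not ContextGem internals; the text/vision role names are assumed from the description above): multimodal roles pass both text and images in one extraction call, where the earlier roles passed only one.

```python
# Which context each role kind supplies to the LLM (illustrative).
ROLE_CONTEXT = {
    "extractor_text": {"text"},
    "reasoner_text": {"text"},
    "extractor_vision": {"images"},
    "reasoner_vision": {"images"},
    "extractor_multimodal": {"text", "images"},  # new in v0.17.0
    "reasoner_multimodal": {"text", "images"},   # new in v0.17.0
}

print(ROLE_CONTEXT["extractor_multimodal"])  # both modalities in one call
```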

v0.16.1

19 Aug 01:36
428feb9

Fixed

  • Added support for "minimal" reasoning effort for gpt-5 models.

v0.16.0

19 Aug 00:13
9ca411c

Added

  • Reasoning-aware extraction prompts: Automatically enables private chain-of-thought guidance on models that support reasoning, yielding higher-quality outputs (no change for other models).

v0.15.0

13 Aug 22:26
f0f1850

Added

  • Auto-pricing for LLMs: enable via auto_pricing=True to automatically estimate costs using Pydantic's genai-prices; the optional auto_pricing_refresh=True refreshes cached price data at runtime.
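The arithmetic behind such cost estimation is simple: token counts times per-million-token prices. A hedged sketch (the prices below are made-up placeholders, not real genai-prices data, and the function is illustrative, not ContextGem's implementation):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_price_per_mtok: float, out_price_per_mtok: float) -> float:
    """Estimate USD cost from token counts and per-1M-token prices."""
    return (input_tokens / 1_000_000) * in_price_per_mtok \
         + (output_tokens / 1_000_000) * out_price_per_mtok

# 12k input + 3k output tokens at placeholder $2.50/$10.00 per 1M tokens:
cost = estimate_cost(12_000, 3_000, 2.50, 10.00)
print(f"${cost:.4f}")  # -> $0.0600
```

Auto-pricing automates exactly this lookup-and-multiply step using a maintained price database, so the estimate tracks current provider pricing.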

Refactor

  • Public API made more consistent and stable: user-facing classes are now thin, well-documented facades over internal implementations. No behavior changes.
  • Internal reorganization for maintainability and future-proofing.

Docs

  • Added guidance for configuring auto-pricing for LLMs.

v0.14.4

08 Aug 19:25
cb133e4

Fixed

  • Suppressed noisy LiteLLM proxy missing-dependency error logs (prompting to install litellm[proxy]) emitted by litellm>=1.75.2 during LLM API calls. ContextGem does not require LiteLLM proxy features. Suppression is scoped to LiteLLM loggers.
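The "scoped to LiteLLM loggers" part is the key design choice, and the general technique is standard Python logging (the logger name below is an assumption for illustration): raising the level on a specific named logger silences it without touching the root logger or anyone else's output.

```python
import logging

# Raise the threshold on the noisy logger only (name assumed here).
noisy = logging.getLogger("LiteLLM")
noisy.setLevel(logging.CRITICAL)  # drops ERROR-level proxy warnings

# Other loggers keep their normal behavior.
logging.getLogger().setLevel(logging.INFO)
print(noisy.isEnabledFor(logging.ERROR))  # -> False
```

Scoping the suppression this way avoids the classic mistake of silencing the root logger, which would hide unrelated errors from the user's own code.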

v0.14.3

07 Aug 22:37
2ed8c93

Fixed

  • Enabled reasoning_effort parameter for gpt-5 models by explicitly forwarding it via allowed_openai_params, since litellm.get_supported_openai_params() does not yet include this parameter for gpt-5 models.
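The underlying pattern is an allow-list that rescues parameters a provider-capability lookup does not yet know about. The filter below is an illustrative sketch of that pattern (it mirrors the allowed_openai_params idea but is not LiteLLM code): parameters outside the supported set are dropped unless explicitly allow-listed.

```python
def filter_params(params: dict, supported: list, allowed_extra: tuple = ()) -> dict:
    """Keep only params that are supported or explicitly allow-listed."""
    keep = set(supported) | set(allowed_extra)
    return {k: v for k, v in params.items() if k in keep}

params = {"temperature": 1.0, "reasoning_effort": "minimal"}
supported = ["temperature", "max_tokens"]  # lookup lags behind gpt-5

# Without the allow-list, the parameter is silently dropped:
print(filter_params(params, supported))
# Explicitly forwarding it via the allow-list keeps it:
print(filter_params(params, supported, allowed_extra=("reasoning_effort",)))
```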

v0.14.2

06 Aug 18:05
9a277c7

Added

  • Added warning for gpt-oss models used with the lm_studio/ provider due to performance issues observed in testing, with a recommendation to use Ollama as a working alternative (e.g., ollama_chat/gpt-oss:20b).
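Such a check reduces to matching the provider prefix and model family in the model string. A minimal sketch of the idea (illustrative, not ContextGem's actual implementation):

```python
import warnings

def check_model(model: str) -> None:
    """Warn when a gpt-oss model is routed through the lm_studio/ prefix."""
    if model.startswith("lm_studio/") and "gpt-oss" in model:
        warnings.warn(
            "gpt-oss models perform poorly via lm_studio/; consider "
            "Ollama instead (e.g. ollama_chat/gpt-oss:20b).",
            UserWarning,
        )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_model("lm_studio/gpt-oss-20b")    # triggers the warning
    check_model("ollama_chat/gpt-oss:20b")  # recommended route, no warning
print(len(caught))  # -> 1
```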