
Releases: SomeOddCodeGuy/WilmerAI

v0.5 - Better message variables for prompts, some new nodes, and memory fixes

09 Feb 03:26
4771775


Summary

NOTE: This release introduces new variables to help deprecate ones like "chat_user_prompt_last_twenty". I'm not getting rid of those, for backwards compatibility, but going forward they aren't needed as much.

New Workflow Nodes

  • JsonExtractor node: extracts fields from JSON in LLM responses without an additional LLM call
  • TagTextExtractor node: extracts content between XML/HTML-style tags without an additional LLM call
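
A tag extractor of this sort can be done with a simple regex, no LLM call needed. The sketch below is illustrative only, not WilmerAI's actual TagTextExtractor implementation:

```python
import re

def extract_tag_content(text: str, tag: str):
    """Return the content between the first <tag>...</tag> pair, or None."""
    pattern = rf"<{re.escape(tag)}>(.*?)</{re.escape(tag)}>"
    match = re.search(pattern, text, re.DOTALL)
    return match.group(1).strip() if match else None

response = 'Reasoning done. <answer>42</answer>'
print(extract_tag_content(response, "answer"))  # prints "42"
```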

Configurable Prompt Variables

  • nMessagesToIncludeInVariable: node property to control how many messages are included in chat/templated prompt variables
  • estimatedTokensToIncludeInVariable: token-budget-based message selection, accumulates recent messages up to a token limit
  • minMessagesInVariable + maxEstimatedTokensInVariable: combo mode pulling a minimum message count then filling up to a token budget
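
The combo mode can be pictured as walking backward from the newest message: always keep the minimum count, then keep adding older messages while the token estimate stays under budget. This is an illustrative guess at the selection logic (parameter names mirror the release notes; the body is not Wilmer's exact code):

```python
def select_messages(messages, min_messages=0, max_estimated_tokens=float("inf"),
                    tokens_per_word=1.35):
    """Walk backward from the newest message, always keeping at least
    min_messages, then adding older messages while the running token
    estimate stays within budget."""
    selected, used = [], 0.0
    for msg in reversed(messages):
        est = len(msg.split()) * tokens_per_word
        if len(selected) < min_messages or used + est <= max_estimated_tokens:
            selected.append(msg)
            used += est
        else:
            break
    return list(reversed(selected))

history = ["first message here", "a rather long second message", "short reply"]
# Keep at least 1 message, then fill up to ~10 estimated tokens:
print(select_messages(history, min_messages=1, max_estimated_tokens=10))
```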

Token Estimation

  • Recalibrated rough_estimate_token_length word ratio (1.538 -> 1.35 tokens/word)
  • Added configurable safety_margin parameter (default 1.10)
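
Putting the two numbers together, the estimate works out to roughly word count × ratio × margin. The 1.35 ratio and 1.10 default margin are the values from the release notes; the function body is an illustrative sketch, not Wilmer's actual code:

```python
import math

def rough_estimate_token_length(text: str,
                                tokens_per_word: float = 1.35,
                                safety_margin: float = 1.10) -> int:
    """Estimate a token count from the word count, padded by a safety
    margin, rounding up so the budget errs on the conservative side."""
    return math.ceil(len(text.split()) * tokens_per_word * safety_margin)

print(rough_estimate_token_length("one two three four"))  # 4 * 1.35 * 1.10 ≈ 5.94, rounds up to 6
```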

Memory System Fixes

  • Fixed file_exists check that was permanently disabling message-threshold triggers for new conversations
  • Fixed off-by-one in trigger comparisons (> to >=)
  • Added HTTP session cleanup via close() to prevent keep-alive connections from blocking llama.cpp slots
  • Split timeouts into (connect, read) tuples
  • Added diagnostic logging for memory trigger decisions
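
The off-by-one fix amounts to making the threshold comparison inclusive, so the trigger fires on the turn that reaches the threshold rather than the one after it. A minimal sketch (illustrative only; the real trigger logic lives in Wilmer's memory code):

```python
def should_trigger_memory(new_message_count: int, threshold: int) -> bool:
    # Before the fix this was `new_message_count > threshold`, which
    # skipped the turn where the count first equaled the threshold.
    return new_message_count >= threshold

print(should_trigger_memory(5, threshold=5))  # True; the old ">" check returned False here
```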

Code Quality

  • Fixed bare except clauses to except Exception in cancellation paths
  • Added prompt-aware info logging for configurable variable slicing

Example Workflow Configs

  • Updated all example workflow JSON files to use new configurable variable syntax

v0.4.1 - Small hotfix for memories

05 Jan 03:53
a437d1e


What's Changed

  • Corrected an issue with the memory system caused by the recent change removing the image-specific handlers, by @SomeOddCodeGuy in #82

v0.4 - Workflow collections, bug fixes, test UI, and some simplification

04 Jan 21:26
f9f2a6e


What's Changed

  • Fix oldest message chunk being silently discarded in memory generation
  • Fix incorrect new message count causing duplicate processing of memorized messages
  • Fix pytest.ini test path case sensitivity

Features:

  • Add shared workflow collections and workflow selection via API model field (/v1/models and /api/tags endpoints)
  • Add workflow node execution summary logging with timing info
  • Add workflowConfigsSubDirectoryOverride for shared workflow folders
  • Add sharedWorkflowsSubDirectoryOverride for custom shared folder names
  • Add {Discussion_Id} and {YYYY_MM_DD} variables for file paths
  • Add variable substitution support for maxResponseSizeInTokens
  • Add web-based setup wizard (setup_wizard_web.py) (this is a WIP and may be temporary/replaced)
  • Add vector memory resumability with per-chunk hash logging

Refactors:

  • Consolidated image handlers into standard handlers (remove ~700 lines)
  • Standardize preset/workflow naming convention (hyphenated)
  • Archive legacy workflows to _archive subdirectories
  • Add pre-configured shared workflow folders

Simplification:

  • Updated preset names to match endpoint names. This makes more sense now: you can more easily use presets to make sure each endpoint gets the appropriate settings.
  • The _example_general_workflow is the one-stop shop for example productivity workflows, and thanks to the custom workflow system it's easy to spin off more. Just drop new folders into _shared within workflows and you suddenly have new workflows available to you as models. I'll make a video about this later.
  • Dropped the image-specific handlers. Finally. Those were something I did early on and kept putting off dealing with, but they always annoyed me. Regular handlers now have the image frameworks built in, where supported.

Tests:

  • Update tests for corrected memory hash behavior
  • Added tests for new workflow override features

v0.3.1

07 Dec 23:14
ac447fc


What's Changed

Full Changelog: v0.3.0...v0.3.1

v0.3.0 - API swapped, Claude Support added, other fixes

13 Oct 02:50
8b4963b


  • Added support for the Claude llm_api
  • Replaced the Flask-exposed runnable API with Eventlet on macOS/Linux and Waitress on Windows
  • Fixed the unit tests not running in Windows properly
  • Corrected two places where a trailing slash caused a break: at the end of the llm_api URL and at the end of the ConfigDirectory folder name
  • Added a first pass at proper cancellation, where pressing "stop" in Open WebUI or other front-ends will appropriately end a workflow and cascade down to the LLM
    • Some LLM APIs honor this, some don't. It should appropriately kill Wilmer and its workflows, but an LLM API in the middle of processing a prompt may not be compelled to stop.
  • Added the ability to replace Endpoints and Presets with variables
    • Limited to hardcoded variables at top of workflow, or agentXInputs from parent workflows

v0.2.1 - New nodes, bug fixes, new docs, and first recursive workflow

29 Sep 03:47
e4bb7df


What's Changed

  • Added a new LLM-assisted workflow generation document folder.

This is still a work in progress, but I have successfully generated a few workflows with it. It's a start in the direction I want to take Wilmer: making its setup and workflow generation something an LLM can automate easily.

  • Fixed streaming on the static response node
  • Updated the partial article wiki node to return N results
  • Bugfix for thinking-tag cleaning. We hit a situation where an LLM (Magistral 2509) was accounting for thinking tags but not generating any, so the whole response was being deleted and completely empty responses went into agentXOutput.
  • Added ArithmeticProcessor node
  • Added Conditional node
  • Added StringConcatenator Node
  • Updated Conditional Workflows to allow a content passthrough on default instead of having to go into a workflow
  • Added POC for recursive workflow, doing a simple coding workflow as an example. There's a wikipedia workflow coming next, but I want to test it a little more before pushing it out.
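
One way to guard against the thinking-tag bug above is to remove only complete tag pairs, and fall back to the original text if cleaning would leave nothing. This is a hedged sketch of that idea (the <think> tag name is an assumption, and this is not WilmerAI's actual cleaner):

```python
import re

def strip_thinking(text: str, tag: str = "think") -> str:
    """Remove complete <tag>...</tag> blocks; if cleaning would leave an
    empty response, return the original text instead of deleting it all."""
    cleaned = re.sub(rf"<{tag}>.*?</{tag}>", "", text, flags=re.DOTALL).strip()
    return cleaned if cleaned else text.strip()

print(strip_thinking("<think>working it out</think>The answer is 4."))  # "The answer is 4."
print(strip_thinking("The answer is 4."))                               # unchanged
```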

Full Changelog: v0.2...v0.2.1

v0.2 - 92% Unit Test Code Coverage, and Bug Fixes

22 Sep 04:13
1075b30


What's Changed

Full Changelog: v0.1.8.2...v0.2

v0.1.8.2 - Quickguide and Documentation Update

16 Sep 02:51
c948d45


Just updating some docs and a few tweaks to some configs

v0.1.8.1 - urllib3 version bump

15 Sep 01:56
8aa93ca


Updated urllib3 to 2.5.0 to satisfy two Dependabot alerts and clear out security notifications.

v0.1.8 - New nodes and some bug fixes

14 Sep 05:47
5faae97


IMPORTANT: This may require deleting and rebuilding the vector memory file for a discussionId. The next message should start regenerating the memories, same as a file memory would. If you want to preserve your current memories, change the discussionId to something else or back up the .db first.

  • Added the new SaveCustomFile node, which simply saves a string to a text file.
  • Added the StaticResponse node, which returns a constant string or a variable without making an LLM call.
  • Fixed an issue where the vector memory toggle was gatekeeping file memories from working.
  • Corrected an issue with vector memories where the lookback turns could confuse the memory tracker into rebuilding the whole memory set (this required the vector memory db rebuild).
  • Corrected an issue with endpoints not properly removing the starting string as expected.
  • Set the default stream back to true for the OpenAI API.