
fix(tui): keep conversation model overrides sticky #1238

Merged
jnjpng merged 1 commit into letta-ai:main from Smarty-Pants-Inc:fix/conversation-model-override-sticky
Mar 6, 2026
Conversation

@paulbettner
Contributor

@paulbettner paulbettner commented Mar 3, 2026

Summary

  • Prevents conversation-scoped /model overrides from being clobbered by the first-chunk agent sync by keeping the override flag ref synchronized.
  • Caches conversation model_settings while an override is active and uses it to derive reasoning effort, fixing missing footer reasoning tags on resume.
  • Fixes the "default" (virtual) conversation case: /model and reasoning tier changes now update the agent config so the next agent sync doesn’t snap back.
  • Adds wiring tests to lock in the override/ref sync and reasoning derivation behavior.
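The ref-synchronization fix in the first bullet can be sketched outside React as a plain closure-vs-ref contrast; `makeOverrideState` and its field names are illustrative, not the PR's actual code:

```typescript
// Sketch of the stale-closure hazard this PR guards against: an async
// callback captures a boolean by value, while a mutable ref always
// reflects the latest write. Names here are assumptions, not the PR's.
type Ref<T> = { current: T };

function makeOverrideState() {
  let overrideActive = false; // state snapshot captured by closures
  const overrideRef: Ref<boolean> = { current: false }; // read at call time

  function setOverride(active: boolean) {
    overrideActive = active;
    overrideRef.current = active; // keep the ref synchronized
  }

  // Simulates the first-chunk agent sync: it must consult the ref,
  // not a value captured when the callback was created.
  function makeAgentSync() {
    const captured = overrideActive; // stale if the override is set later
    return () => ({ staleView: captured, liveView: overrideRef.current });
  }

  return { setOverride, makeAgentSync };
}
```

A callback created before `/model` runs would see `staleView: false` forever, while `liveView` tracks the current override, which is why the flag ref has to be written in lockstep with the state.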

Test plan

  • bun run check
  • bun test src/tests/agent/model-preset-refresh.wiring.test.ts
  • Manual: /model <some-tiered-model> then immediately send a message; ensure the UI does not flip back to the agent's base model.
  • Manual: resume a conversation with a conversation-scoped reasoning override; ensure the footer reasoning tag matches the override.
  • Manual: in the default conversation, switch models and send a message; ensure the model does not snap back after the turn.

👾 Generated with Letta Code

@cpacker cpacker requested a review from jnjpng March 3, 2026 20:55
@jnjpng
Contributor

jnjpng commented Mar 5, 2026

@paulbettner How were you observing this bug? E.g., did changing the model on either a non-default or default conversation immediately snap back after the first message sent post-switch? Or was it changing back non-deterministically after some number of turns?

@paulbettner
Contributor Author

@jnjpng it seemed to be happening after the first model response.

@paulbettner paulbettner force-pushed the fix/conversation-model-override-sticky branch from 43a0adf to 2ea886e Compare March 5, 2026 03:10
Keep the conversation override flag ref synchronized for async callbacks so the
first chunk agent sync can't clobber a /model selection.

Persist /model and reasoning tier changes when running in the virtual "default"
conversation by updating the agent config, preventing snap-back.

Cache conversation model_settings while an override is active so reasoning effort
derivation matches the override on resume.

Also align default conversation API calls with the current client types:
use the agent id at conversations endpoints and avoid passing agent_id params.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta Code <noreply@letta.com>
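The reasoning-effort caching described in this commit might look roughly like the following; the `ModelSettings` shape and `deriveReasoningTag` are assumptions for illustration, not the repository's actual types:

```typescript
// While an override is active, derive the footer reasoning tag from the
// cached conversation model_settings rather than the agent's base config.
// Field names are illustrative, not the PR's schema.
type ModelSettings = {
  model: string;
  reasoning_effort?: "low" | "medium" | "high";
};

function deriveReasoningTag(
  overrideActive: boolean,
  cachedConversationSettings: ModelSettings | null,
  agentSettings: ModelSettings,
): string | null {
  // Prefer the cached override settings when one is active; otherwise
  // fall back to the agent's own configuration.
  const source =
    overrideActive && cachedConversationSettings
      ? cachedConversationSettings
      : agentSettings;
  return source.reasoning_effort ?? null;
}
```

On resume, the cache supplies the conversation-scoped settings, so the footer tag matches the override instead of silently falling back to the agent's base model.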
@paulbettner paulbettner force-pushed the fix/conversation-model-override-sticky branch from 2ea886e to 8c6320b Compare March 5, 2026 03:14
@jnjpng jnjpng merged commit 5a6d804 into letta-ai:main Mar 6, 2026
13 checks passed
jnjpng added a commit that referenced this pull request Mar 6, 2026
The no-op function placeholders introduced in #1238 silently swallow
calls if the overwrite invariant ever breaks. null! crashes loudly,
which is the safer default for a synchronous forward-reference pattern.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta Code <noreply@letta.com>
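The placeholder trade-off this follow-up commit describes can be illustrated as follows; `processTurn*` and `wire` are hypothetical names, not the PR's actual identifiers:

```typescript
// No-op placeholder: a call before wiring is silently swallowed,
// so a broken overwrite invariant goes unnoticed.
let processTurnNoop: (msg: string) => void = () => {};

// null! placeholder: a call before wiring throws a TypeError at once,
// surfacing the broken forward reference instead of hiding it.
let processTurnStrict: (msg: string) => void = null!;

// Synchronous forward-reference pattern: both placeholders are expected
// to be overwritten before the first call.
function wire(impl: (msg: string) => void) {
  processTurnNoop = impl;
  processTurnStrict = impl;
}
```

Calling `processTurnStrict` before `wire` runs crashes loudly, which is the behavior the commit argues is the safer default when the overwrite is supposed to happen synchronously anyway.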
jnjpng added a commit that referenced this pull request Mar 6, 2026
…nore

The no-op function placeholders introduced in #1238 silently swallow
calls if the overwrite invariant ever breaks. null! crashes loudly,
which is the safer default for a synchronous forward-reference pattern.

Also expand the biome-ignore comments on processConversation and
onSubmit to flag that these are blanket suppressions hiding ~16+
omitted deps — safe for refs and stable callbacks, but could mask
genuinely missing reactive deps causing stale-closure bugs.

👾 Generated with [Letta Code](https://letta.com)

Co-Authored-By: Letta Code <noreply@letta.com>