
feat: add LM Studio headless server support #53

Open
vlordier wants to merge 2 commits into Liquid4All:main from vlordier:feat/lmstudio-support

Conversation


vlordier commented on Mar 6, 2026

Summary

  • Add LM Studio runtime configuration to _models/config.yaml
  • Add example model config for LM Studio (default port 1234)
  • Add unit tests for LM Studio model and runtime config (a test sketch follows the Note below)

Changes

  • examples/localcowork/_models/config.yaml: Added lmstudio runtime and model
  • examples/localcowork/src-tauri/src/inference/config.rs: Added 2 tests
  • examples/localcowork/src-tauri/src/inference/client.rs: Added 1 test + fixture

Testing

  • All 363 Rust unit tests pass
  • Clippy clean

Note

LM Studio must be started manually from the app GUI (no CLI for headless mode).
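A minimal sketch of the kind of deserialization test described above. All struct and field names here are illustrative assumptions; the real schema lives in examples/localcowork/src-tauri/src/inference/config.rs.

// Sketch only: `ModelConfig` and its fields are assumed names, not the
// crate's actual types. Requires serde (with derive) and serde_yaml.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ModelConfig {
    name: String,
    runtime: String,
    endpoint: String,
}

#[test]
fn lmstudio_model_config_deserializes() {
    let yaml = r#"
name: lmstudio-local
runtime: lmstudio
endpoint: "http://localhost:1234/v1"
"#;
    let cfg: ModelConfig = serde_yaml::from_str(yaml).expect("valid LM Studio model YAML");
    assert_eq!(cfg.runtime, "lmstudio");
    assert_eq!(cfg.endpoint, "http://localhost:1234/v1");
}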

Copilot AI review requested due to automatic review settings March 6, 2026 11:58

Copilot AI left a comment


Pull request overview

Adds configuration and test scaffolding to support using an LM Studio OpenAI-compatible local server (default http://localhost:1234/v1) within the LocalCowork example app.
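As a point of reference, a minimal reachability check against that endpoint could look like the following. This is a sketch, not part of the PR; it assumes the reqwest crate with its blocking feature enabled, and the helper name is invented.

// Returns true if an LM Studio server answers on the default port.
// Sketch only: blocking reqwest is an assumption, not the crate's approach.
fn lmstudio_is_reachable() -> bool {
    reqwest::blocking::get("http://localhost:1234/v1/models")
        .map(|resp| resp.status().is_success())
        .unwrap_or(false)
}

fn main() {
    if lmstudio_is_reachable() {
        println!("LM Studio server is up at http://localhost:1234/v1");
    } else {
        println!("Start LM Studio and enable its local server first.");
    }
}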

Changes:

  • Added LM Studio model + runtime entries to the LocalCowork YAML configs.
  • Added unit tests to ensure LM Studio model configs deserialize and can be selected by the inference client.
  • Added a new top-level examples/localcowork/config.yaml config file (in addition to _models/config.yaml).

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

  • examples/localcowork/src-tauri/src/inference/config.rs: Adds LM Studio-focused deserialization tests (and minor formatting changes).
  • examples/localcowork/src-tauri/src/inference/client.rs: Extends test fixture with an LM Studio model and adds a targeted constructor test.
  • examples/localcowork/config.yaml: Introduces a top-level YAML config containing models + runtimes + fallback chain.
  • examples/localcowork/_models/config.yaml: Adds LM Studio model and runtimes.lmstudio configuration; also includes formatting/tidy updates.


Comment thread on examples/localcowork/src-tauri/src/inference/config.rs (outdated)
Comment thread on examples/localcowork/config.yaml (outdated)
Comment on lines +393 to +398
lmstudio:
  # Note: LM Studio must be started manually from the app GUI.
  # 1. Open LM Studio, load a model, click "Start Server" (headless mode).
  # 2. Default port is 1234. Uses OpenAI-compatible API.
  health_check: "http://localhost:1234/v1/models"
  startup_timeout_seconds: 30

Copilot AI commented on Mar 6, 2026


The new runtimes.lmstudio section is currently ignored by the Rust loader because ModelsConfig has no runtimes field and Serde will drop unknown keys. If this is meant to drive runtime startup/health-check behavior, it should be added to ModelsConfig (and used) or the YAML should explicitly document that the runtimes map is informational-only so readers don’t assume it affects behavior.
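One possible shape for that fix, sketched below; the struct and field names are assumptions, and the actual definitions in inference/config.rs may differ.

// Sketch: surface the YAML `runtimes` map on the config struct so Serde
// stops silently dropping it. All names here are assumed, not the crate's.
use std::collections::HashMap;
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct RuntimeConfig {
    health_check: Option<String>,
    startup_timeout_seconds: Option<u64>,
}

#[derive(Debug, Deserialize)]
struct ModelsConfig {
    // ... existing fields (model list, fallback chain, etc.) would stay here ...
    #[serde(default)]
    runtimes: HashMap<String, RuntimeConfig>,
}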

vlordier (Author) replied:

@copilot open a new pull request to apply changes based on this feedback

vlordier force-pushed the feat/lmstudio-support branch 5 times, most recently from f776c6a to 3353bc9 on March 6, 2026 at 12:40
- Add lmstudio runtime configuration in _models/config.yaml
- Add example model config for LM Studio (port 1234)
- Add unit tests for lmstudio model and runtime config
- Verify inference client works with LM Studio endpoints (a selection-test sketch follows below)

Note: LM Studio must be started manually from the app GUI.
The runtime config does not include command/args since there is no valid CLI for headless mode; users start it from the app.
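A self-contained sketch of the kind of selection test the last commit bullet describes. Every name below is illustrative; the crate's actual fixture and client API differ.

// Sketch only: picking an LM Studio entry out of a deserialized model list,
// the way a client constructor test might. Requires serde and serde_yaml.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct ModelEntry {
    name: String,
    endpoint: String,
}

fn select_model<'a>(models: &'a [ModelEntry], name: &str) -> Option<&'a ModelEntry> {
    models.iter().find(|m| m.name == name)
}

#[test]
fn selects_lmstudio_entry() {
    let yaml = r#"
- name: llama-local
  endpoint: "http://localhost:8080/v1"
- name: lmstudio-local
  endpoint: "http://localhost:1234/v1"
"#;
    let models: Vec<ModelEntry> = serde_yaml::from_str(yaml).expect("valid fixture YAML");
    let picked = select_model(&models, "lmstudio-local").expect("lmstudio entry present");
    assert_eq!(picked.endpoint, "http://localhost:1234/v1");
}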
vlordier force-pushed the feat/lmstudio-support branch from 3353bc9 to 8b34513 on March 6, 2026 at 12:42
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
