feat: add LM Studio headless server support #53
vlordier wants to merge 2 commits into Liquid4All:main
Conversation
Pull request overview
Adds configuration and test scaffolding to support using an LM Studio OpenAI-compatible local server (default http://localhost:1234/v1) within the LocalCowork example app.
Changes:
- Added LM Studio model + runtime entries to the LocalCowork YAML configs.
- Added unit tests to ensure LM Studio model configs deserialize and can be selected by the inference client.
- Added a new top-level examples/localcowork/config.yaml config file (in addition to _models/config.yaml).
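For orientation, here is a hedged sketch of the shape these entries could take. The runtimes.lmstudio keys match the diff hunk quoted later in this review; the model-entry key names (id, runtime, endpoint) are illustrative only and not copied from the PR:

```yaml
models:
  - id: lmstudio-local                     # hypothetical entry name
    runtime: lmstudio
    endpoint: "http://localhost:1234/v1"   # LM Studio's OpenAI-compatible API

runtimes:
  lmstudio:
    # LM Studio must be started manually from the app GUI.
    health_check: "http://localhost:1234/v1/models"
    startup_timeout_seconds: 30
```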
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| examples/localcowork/src-tauri/src/inference/config.rs | Adds LM Studio-focused deserialization tests (and minor formatting changes). |
| examples/localcowork/src-tauri/src/inference/client.rs | Extends test fixture with an LM Studio model and adds a targeted constructor test. |
| examples/localcowork/config.yaml | Introduces a top-level YAML config containing models + runtimes + fallback chain. |
| examples/localcowork/_models/config.yaml | Adds LM Studio model and runtimes.lmstudio configuration; also includes formatting/tidy updates. |
```yaml
lmstudio:
  # Note: LM Studio must be started manually from the app GUI.
  # 1. Open LM Studio, load a model, click "Start Server" (headless mode).
  # 2. Default port is 1234. Uses OpenAI-compatible API.
  health_check: "http://localhost:1234/v1/models"
  startup_timeout_seconds: 30
```
The new runtimes.lmstudio section is currently ignored by the Rust loader because ModelsConfig has no runtimes field and Serde will drop unknown keys. If this is meant to drive runtime startup/health-check behavior, it should be added to ModelsConfig (and used) or the YAML should explicitly document that the runtimes map is informational-only so readers don’t assume it affects behavior.
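One way to surface it, as a minimal sketch: ModelsConfig is the existing struct named above, but its real fields are not shown in this PR, and the RuntimeEntry field names are assumptions taken from the YAML hunk.

```rust
// Sketch only: ModelsConfig exists in config.rs, but its actual fields are
// not visible here; RuntimeEntry mirrors the YAML keys quoted above.
use std::collections::HashMap;

use serde::Deserialize;

#[derive(Debug, Deserialize)]
pub struct RuntimeEntry {
    /// URL polled to check that the runtime is up, e.g.
    /// "http://localhost:1234/v1/models" for LM Studio.
    pub health_check: String,
    /// Seconds to wait for the health check before giving up.
    pub startup_timeout_seconds: u64,
}

#[derive(Debug, Deserialize)]
pub struct ModelsConfig {
    // ... existing fields (models, fallback chain, ...) elided ...
    /// Runtime entries keyed by name ("lmstudio", ...). `#[serde(default)]`
    /// keeps configs without a `runtimes:` section deserializing cleanly.
    #[serde(default)]
    pub runtimes: HashMap<String, RuntimeEntry>,
}
```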
@copilot open a new pull request to apply changes based on this feedback
Force-pushed from f776c6a to 3353bc9.
- Add lmstudio runtime configuration in _models/config.yaml
- Add example model config for LM Studio (port 1234)
- Add unit tests for lmstudio model and runtime config
- Verify inference client works with LM Studio endpoints

Note: LM Studio must be started manually from the app GUI. The runtime config does not include command/args since there's no valid CLI for headless mode - users start it from the app.
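The deserialization tests described here might look roughly like this sketch. It reuses the hypothetical ModelsConfig/RuntimeEntry shapes from the review thread above; the actual tests in config.rs are not reproduced in this view.

```rust
#[cfg(test)]
mod lmstudio_config_tests {
    use super::*; // assumes the ModelsConfig/RuntimeEntry sketch above

    #[test]
    fn lmstudio_runtime_entry_deserializes() {
        let yaml = r#"
runtimes:
  lmstudio:
    health_check: "http://localhost:1234/v1/models"
    startup_timeout_seconds: 30
"#;
        let cfg: ModelsConfig = serde_yaml::from_str(yaml).expect("valid YAML");
        let lmstudio = &cfg.runtimes["lmstudio"];
        assert_eq!(lmstudio.health_check, "http://localhost:1234/v1/models");
        assert_eq!(lmstudio.startup_timeout_seconds, 30);
    }
}
```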
Force-pushed from 3353bc9 to 8b34513.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Summary
Adds LM Studio (an OpenAI-compatible local server, default http://localhost:1234/v1) as a configurable backend for the LocalCowork example app.
Changes
- lmstudio runtime configuration in _models/config.yaml, plus a new top-level examples/localcowork/config.yaml
- Example LM Studio model config pointing at port 1234
Testing
- Unit tests verify that the LM Studio model and runtime configs deserialize and that the inference client can be constructed against LM Studio endpoints.
Note
LM Studio must be started manually from the app GUI (no CLI for headless mode).
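Since there is no CLI to launch, the only automatable step is waiting for the server to come up. Below is a hedged sketch of polling the configured health_check URL until it responds or the startup_timeout_seconds budget runs out; reqwest (with the "blocking" feature) is an assumed dependency, not something this PR shows.

```rust
use std::{thread, time::{Duration, Instant}};

/// Polls the health-check URL until it answers 2xx or the timeout is spent.
/// Sketch only; the production loader may handle errors differently.
fn wait_for_lmstudio(health_url: &str, timeout: Duration) -> bool {
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        match reqwest::blocking::get(health_url) {
            Ok(resp) if resp.status().is_success() => return true,
            _ => thread::sleep(Duration::from_millis(500)),
        }
    }
    false
}

fn main() {
    // Mirrors the values from runtimes.lmstudio in the config.
    let up = wait_for_lmstudio(
        "http://localhost:1234/v1/models",
        Duration::from_secs(30),
    );
    println!("LM Studio up: {up}");
}
```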