
fix: improve LM Studio connection handling and documentation #67

Open

vlordier wants to merge 6 commits into Liquid4All:main from vlordier:fix/lmstudio-error-handling

Conversation

vlordier commented Mar 6, 2026

Summary

Improves on PR #53 (LM Studio support):

  1. Better error handling: add a debug message when the connection to LM Studio's default port (1234) fails
  2. Documentation: document the LM Studio default endpoint in .env.example

Changes

  • inference/client.rs: Add helpful debug message for LM Studio connection issues
  • .env.example: Document LM Studio default port (1234)
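For reference, LM Studio's bundled local server speaks an OpenAI-compatible API on port 1234 by default, so the documented entry in .env.example presumably looks something like the sketch below. The key name here is an assumption, not necessarily the one the PR uses:

```
# LM Studio runs an OpenAI-compatible server on port 1234 by default.
# Key name is illustrative; see the PR's .env.example for the actual one.
# LMSTUDIO_BASE_URL=http://localhost:1234/v1
```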

Testing

  • Clippy clean

vlordier added 6 commits March 6, 2026 21:46
- Tool Result Compression: smart summarization for large tool outputs
  (directory listings, search results, JSON data)
- Request Deduplication: 500ms debounce on send button to prevent
  duplicate requests from rapid clicks
- Config Hot Reload: poll config file for changes, reload without
  restart, show toast notification
- Error Boundary: timeout wrapper (120s) around tool execution,
  graceful error messages instead of crashing agent loop
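The 120-second error boundary in the first commit is essentially a timeout wrapper around each tool invocation. A minimal sketch, assuming a tokio runtime; `with_tool_timeout` is a hypothetical name, not the PR's actual function:

```rust
use std::time::Duration;
use tokio::time::timeout;

/// Run a tool future under a 120s budget so a hung tool surfaces a
/// graceful error message instead of stalling the whole agent loop.
async fn with_tool_timeout<F>(tool_name: &str, fut: F) -> Result<String, String>
where
    F: std::future::Future<Output = Result<String, String>>,
{
    match timeout(Duration::from_secs(120), fut).await {
        // Tool finished in time and succeeded.
        Ok(Ok(output)) => Ok(output),
        // Tool finished in time but failed: report, don't crash.
        Ok(Err(e)) => Err(format!("tool '{tool_name}' failed: {e}")),
        // Tool blew past the 120s budget.
        Err(_elapsed) => Err(format!("tool '{tool_name}' timed out after 120s")),
    }
}
```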
- Add 13 tests for settings.rs (AppSettings, SamplingConfig, config hot reload)
- Add 12 tests for chat.rs compression functions (truncate, compress directory, search, JSON)
- Total tests: 392 (up from 365)
- Coverage: 42.5% (up from 39.5%)
- Add ModelStatus tests (serialization, healthy/unhealthy states)
- Add SamplingOverrides serialization tests
- Add InferenceClient tests for:
  - LM Studio URL construction and model selection
  - Tool call format (NativeJson vs Pythonic)
  - Fallback chain exhaustion (AllModelsUnavailable)
  - is_retriable for various HTTP status codes
  - Error repair from malformed tool calls
- Total tests: 417 (up from 392)
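The `is_retriable` tests above suggest a small predicate over HTTP status codes. A plausible shape, with the status set being an assumption rather than the PR's exact list:

```rust
/// Whether an HTTP status is worth retrying: request timeouts, rate
/// limits, and transient server errors. The exact set is illustrative.
fn is_retriable(status: u16) -> bool {
    matches!(status, 408 | 429 | 500 | 502 | 503 | 504)
}
```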
- Add tests for data_dir(), cache_dir(), resolve_db_path()
- Add rotate_log_file() tests (creates rotated copies, handles missing files)
- Add filter_by_enabled_servers() tests (filters correctly, handles missing/invalid config)
- Add resolve_vision_model() tests (returns None without config)
- Add filter_tools_by_allowlist() test (doesn't panic without config)
- Add load_override_file() tests (parses config, returns empty for missing)
- Total tests: 430 (up from 417)
- Coverage: 43.3% (up from 42.5%)
- Add helpful debug message when LM Studio port connection fails
- Distinguish between connection errors and timeouts in health_check
- Document default LM Studio port (1234) in environment configuration
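A minimal sketch of what that connection-vs-timeout distinction might look like, assuming a reqwest-based client; the endpoint path, function shape, and message wording are illustrative, not the PR's exact code:

```rust
/// Probe the model server; distinguish timeouts from refused connections
/// and emit the LM Studio hint on connection failure.
async fn health_check(client: &reqwest::Client, base_url: &str) -> Result<(), String> {
    match client.get(format!("{base_url}/v1/models")).send().await {
        Ok(resp) if resp.status().is_success() => Ok(()),
        Ok(resp) => Err(format!("health check returned HTTP {}", resp.status())),
        // Timed out: the server exists but is slow or wedged.
        Err(e) if e.is_timeout() => Err(format!("health check timed out: {e}")),
        // Refused/unreachable: most likely the server is not running at all.
        Err(e) if e.is_connect() => {
            log::debug!(
                "could not connect to {base_url}; if this is LM Studio, \
                 start the local server (default port 1234)"
            );
            Err(format!("connection failed: {e}"))
        }
        Err(e) => Err(format!("health check failed: {e}")),
    }
}
```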
Copilot AI review requested due to automatic review settings March 6, 2026 22:41

Copilot AI left a comment


Pull request overview

This is a large PR that adds the full LocalCowork application — a Tauri-based desktop app with an on-device AI agent, MCP server management, inference client, and React frontend. The PR description mentions improving LM Studio connection handling and documentation, but the actual changes are far broader, introducing the entire application codebase.

Changes:

  • Added the complete Tauri backend including MCP client, inference engine, agent core (conversation management, tool routing, permissions, orchestration), and IPC commands
  • Added React frontend components (settings store, chat input, app shell) and Tauri configuration
  • Added LM Studio endpoint documentation to .env.example

Reviewed changes

Copilot reviewed 49 out of 56 changed files in this pull request and generated 2 comments.

Summary per file:

| File | Description |
| --- | --- |
| src/stores/settingsStore.ts | Zustand store for settings panel state and actions |
| src/components/Chat/MessageInput.tsx | Chat message input with debouncing |
| src/App.tsx | Root app component with config watching and layout |
| src-tauri/tauri.conf.json | Tauri window and bundle configuration |
| src-tauri/src/mcp_client/* | MCP client: types, transport, lifecycle, discovery, registry, errors |
| src-tauri/src/inference/* | Inference client: config, streaming, types, tool call parsing, errors |
| src-tauri/src/agent_core/* | Agent core: conversation, tool router, permissions, orchestrator, tokens |
| src-tauri/src/commands/* | Tauri IPC commands for chat, filesystem, hardware, model download, etc. |
| src-tauri/Cargo.toml | Rust dependencies and build configuration |
| examples/localcowork/.env.example | Added LM Studio default endpoint documentation |
| Other config files | Entitlements, capabilities, build scripts |


Comment on lines +142 to +143
```ts
if (interval) {
  clearInterval(interval);
```

Copilot AI Mar 6, 2026


`stopConfigWatch` destructures `configWatchInterval` from state but then references an undefined variable `interval` on lines 142-143. This will throw a ReferenceError at runtime when attempting to stop the config watch. The variable name should be `configWatchInterval` instead of `interval`.

Suggested change:

```diff
-if (interval) {
-  clearInterval(interval);
+if (configWatchInterval) {
+  clearInterval(configWatchInterval);
```

```rust
// fs was overridden
assert_eq!(merged["fs"].command, "node");
// ocr was NOT overridden — preserved from discovery
assert_eq!(merged["ocr"].command, "npx");
```

Copilot AI Mar 6, 2026


This assertion will fail on Windows because `default_npx_command()` returns `"npx.cmd"` there, not `"npx"`. The test should compare against `default_npx_command()` (as `test_discover_ts_server` does) instead of the hard-coded string `"npx"`.

Suggested change:

```diff
-assert_eq!(merged["ocr"].command, "npx");
+assert_eq!(merged["ocr"].command, super::default_npx_command());
```
