
Commit 79214fb

Authored by cryptax
Fix to support lm-studio servers with openai style api and baseurl (#196)
Co-authored-by: cryptax <axelle@alligator>
1 parent: 775d324

File tree

2 files changed: +21 −2 lines changed

src/README.md

Lines changed: 14 additions & 1 deletion

````diff
@@ -63,7 +63,7 @@ These can be set with `r2ai -e <keyname>=<value>`
 |------------------|------------------------------------------------------------------------------------------------|
 | r2ai.api         | Name of the provider e.g `openai`. List possibilities with `r2ai -e r2ai.api=?`                  |
 | r2ai.model       | Model name. List possibilities with `r2ai -e r2ai.model=?`                                       |
-| r2ai.baseurl     | Remote LLM HTTP server e.g http://127.0.0.1:11434.                                               |
+| r2ai.baseurl     | Remote LLM base URL. Specify the host if necessary, e.g http://127.0.0.1:11434.                  |
 | r2ai.max_tokens  | Maximum output tokens or maximum total tokens. Check the appropriate limits for your model       |
 | r2ai.temperature | How creative the model should be. 0=not creative, 1=very creative                                |
 | r2ai.cmds        | R2 command to issue and send output in context to model                                          |
@@ -256,3 +256,16 @@ r2ai_messages_add_tool_call(msgs, "r2cmd", "{\"command\":\"pdf@main\"}", "tool-1
 // Free all resources when done
 r2ai_messages_free(msgs);
 ```
+## lm-studio
+
+Install [LM Studio](https://lmstudio.ai/). On a server, LM Studio must be run as a normal user (not root), and *FUSE* must be installed.
+
+Then, download your preferred model(s). For example, to install GPT-OSS, follow this [cookbook](https://cookbook.openai.com/articles/gpt-oss/run-locally-lmstudio).
+
+When you launch LM Studio, go to developer options, then Settings, select the server **port** (1234 by default), and decide whether you need "serve on a local network" (accessible on localhost only, or on the whole local network).
+
+In r2ai, since lm-studio exposes an OpenAI-like API, configure:
+
+- `r2ai -e api=openai`
+- `r2ai -e baseurl=http://LM-STUDIO-IP:PORT`
+- `r2ai -e model=?` to list available models
````

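The three settings added to the README can be combined into one session. A minimal sketch, assuming LM Studio listens on 127.0.0.1:1234; the `curl` call probes the OpenAI-style `/v1/models` endpoint the server exposes, which is also what model listing relies on:

```shell
# Assumption: LM Studio's server is running locally on port 1234.
# Sanity-check the OpenAI-style endpoint before configuring r2ai:
curl -s http://127.0.0.1:1234/v1/models

# Point r2ai's openai provider at the local server:
r2ai -e api=openai
r2ai -e baseurl=http://127.0.0.1:1234
r2ai -e model=?   # list the models the server reports
```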
src/r2ai.c

Lines changed: 7 additions & 1 deletion

```diff
@@ -768,7 +768,13 @@ static void cmd_r2ai(RCore *core, const char *input) {

 R_IPI const char *r2ai_get_provider_url(RCore *core, const char *provider) {
 	if (strcmp (provider, "openai") == 0) {
-		return "https://api.openai.com/v1";
+		const char *host = r_config_get (core->config, "r2ai.baseurl");
+		if (R_STR_ISNOTEMPTY (host)) {
+			if (r_str_startswith (host, "http")) {
+				return r_str_newf ("%s/v1", host);
+			}
+			return r_str_newf ("http://%s/v1", host);
+		} else return "https://api.openai.com/v1";
 	} else if (strcmp (provider, "gemini") == 0) {
 		return "https://generativelanguage.googleapis.com/v1beta/openai";
 	} else if (strcmp (provider, "ollama") == 0) {
```
