* docs: update README and .env.example for Ollama model support
* docs: update name for uniformity
* docs: update README to include instructions for downloading model weights
* chore: update .env.example
---------
Signed-off-by: Palaniappan R <[email protected]>
backend/README.md (+15 lines: 15 additions, 0 deletions)
@@ -46,6 +46,21 @@ This key is used to access the various Google Cloud functions.

**NOTE**: You may need billing set up on your Google Cloud account. Make sure to name the file `credentials.json`, as that name is ignored by `.git` and the key won't be exposed on GitHub.
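Not part of the original instructions, but the ignore rule can be demonstrated in a throwaway repository as a quick sanity check (`git check-ignore` is standard git; the temporary repo layout here is purely illustrative):

```shell
# Throwaway repo: a .gitignore entry for credentials.json
# keeps the key file out of version control.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo 'credentials.json' > .gitignore
touch credentials.json
# Exits 0 and prints the path when the file is ignored
git check-ignore credentials.json
# credentials.json is absent from untracked files (only .gitignore itself shows)
git status --porcelain
```

Run the same `git check-ignore credentials.json` in the real backend directory to confirm your key file is covered.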
### Running ORAssistant with a Local Ollama Model

ORAssistant supports running locally hosted Ollama models for inference. Follow these steps to set it up:

#### 1. Install Ollama

- Visit [Ollama's installation page](https://ollama.com/download) and follow the installation instructions for your system.

#### 2. Configure ORAssistant to Use Ollama

- In your `.env` file, set:

  ```bash
  LLM_MODEL="ollama"
  OLLAMA_MODEL="<model_name>"
  ```

Ensure Ollama is running locally before starting ORAssistant, and make sure the model weights are available by downloading them first with `ollama pull <model_name>`.
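Before starting ORAssistant, it may help to confirm both conditions at once. A minimal sketch, assuming the default Ollama endpoint (`http://localhost:11434`) and its standard `/api/tags` model-listing endpoint; `llama3` is only a placeholder model name, not one ORAssistant requires:

```python
import json
import os
import urllib.error
import urllib.request

# Default local Ollama endpoint; override with OLLAMA_HOST if you changed it
OLLAMA_URL = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

def ollama_model_available(model: str) -> bool:
    """Return True if the local Ollama server is up and has `model` pulled."""
    try:
        with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=5) as resp:
            tags = json.load(resp)
    except (urllib.error.URLError, OSError):
        return False  # server not running or not reachable
    # /api/tags reports pulled models as e.g. "llama3:latest"
    names = [m.get("name", "") for m in tags.get("models", [])]
    return any(n == model or n.split(":")[0] == model for n in names)

if __name__ == "__main__":
    model = os.environ.get("OLLAMA_MODEL", "llama3")
    print(f"{model} available:", ollama_model_available(model))
```

If this prints `False`, start the server (`ollama serve`) and pull the weights (`ollama pull <model_name>`) before launching ORAssistant.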