</div>

<h1 align="center">
  Fully-featured & beautiful web interface for vLLM & Ollama
</h1>

Get up and running with Large Language Models **quickly**, **locally** and even **offline**.

https://github.com/jakobhoeg/nextjs-ollama-llm-ui/assets/114422072/08eaed4f-9deb

To use the web interface, the following prerequisites must be met:

1. Download [vLLM](https://docs.vllm.ai/en/latest/) and have it running, or run it in a Docker container (see the sketch after this list).
2. [Node.js](https://nodejs.org/en/download) (18+), [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) and [yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable) are required.
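
If you don't have vLLM running yet, here is a minimal sketch of serving a model with vLLM's official `vllm/vllm-openai` Docker image. It assumes an NVIDIA GPU and a Hugging Face cache in the usual location, and the model name is only an example; check the vLLM docs for the options that fit your hardware:

```sh
# Minimal sketch: start vLLM's OpenAI-compatible server on port 8000.
# The model below is an example; any Hugging Face model id works the same way.
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model meta-llama/Meta-Llama-3-8B-Instruct
```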
# Usage 🚀

If you're using Ollama, you need to set the `VLLM_MODEL`:

```
docker run --rm -d -p 3000:3000 -e VLLM_URL=http://host.docker.internal:11434 -e VLLM_TOKEN_LIMIT=8192 -e VLLM_MODEL=llama3 ghcr.io/yoziru/nextjs-vllm-ui:latest
```
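
Note that `host.docker.internal` resolves to the Docker host on Docker Desktop; on Linux you may need to add `--add-host=host.docker.internal:host-gateway` to the command for it to resolve.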

If your server is running on a different IP address or port, you can use the `--network host` mode in Docker, e.g.:

```
docker run --rm -d --network host -e VLLM_URL=http://192.1.0.110:11434 -e VLLM_TOKEN_LIMIT=8192 -e VLLM_MODEL=llama3 ghcr.io/yoziru/nextjs-vllm-ui:latest
```
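
With `--network host` (typically Linux only), the container shares the host's network stack, so the `-p` port mapping is unnecessary and the UI is served directly on the host's port 3000.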

Then go to [localhost:3000](http://localhost:3000) and start chatting with your favourite model!
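
If the page loads but no models appear, you can confirm your model server is responding; a small sketch assuming the OpenAI-compatible `/v1/models` route that both vLLM and recent Ollama versions expose:

```sh
# List the models the backend exposes.
# vLLM listens on port 8000 by default; Ollama on 11434.
curl http://localhost:8000/v1/models
```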

# Development 📖

To install and run a local environment of the web interface, follow the instructions below.

1. **Clone the repository to a directory on your PC via command prompt:**

   ```
   git clone https://github.com/yoziru/nextjs-vllm-ui
   ```

1. **Open the folder:**

   ```
   cd nextjs-vllm-ui
   ```

1. **Rename the `.example.env` to `.env`:**

   ```
   mv .example.env .env
   ```

1. **If your instance of vLLM is NOT running on the default IP address and port, change the variables in the `.env` file to fit your use case:**

   ```
   VLLM_URL="http://localhost:8000"
   VLLM_API_KEY="your-api-key"
   VLLM_MODEL="llama3:8b"
   VLLM_TOKEN_LIMIT=4096
   ```
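
   For reference, `http://localhost:8000` is vLLM's default address; an Ollama server listens on `http://localhost:11434` by default.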

1. **Install dependencies:**

   ```
   yarn install
   ```

1. **Start the development server:**

   ```
   yarn dev
   ```

1. **Go to [localhost:3000](http://localhost:3000) and start chatting with your favourite model!**

You can also build and run the Docker image locally with this command:

```sh
docker build . -t ghcr.io/yoziru/nextjs-vllm-ui:latest \
  && docker run --rm \
  -p 3000:3000 \
  -e VLLM_URL=http://host.docker.internal:11434 \
  -e VLLM_MODEL=llama3.1:8b-instruct-q8_0 \
  -e NEXT_PUBLIC_TOKEN_LIMIT="8192" \
  ghcr.io/yoziru/nextjs-vllm-ui:latest
```
# Tech stack