
Commit ed0e333: update README
1 parent c7a0120 commit ed0e333


README.md

Lines changed: 52 additions & 39 deletions
````diff
@@ -5,7 +5,7 @@
 </div>
 
 <h1 align="center">
-Fully-featured & beautiful web interface for vLLM
+Fully-featured & beautiful web interface for vLLM & Ollama
 </h1>
 
 Get up and running with Large Language Models **quickly**, **locally** and even **offline**.
````
````diff
@@ -31,7 +31,7 @@ https://github.com/jakobhoeg/nextjs-ollama-llm-ui/assets/114422072/08eaed4f-9deb
 To use the web interface, these requisites must be met:
 
 1. Download [vLLM](https://docs.vllm.ai/en/latest/) and have it running. Or run it in a Docker container.
-2. Node.js (18+) and npm is required. [Download](https://nodejs.org/en/download)
+2. [Node.js](https://nodejs.org/en/download) (18+), [npm](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) and [yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable) are required.
 
 # Usage 🚀
 
````
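Requisite 1 in the hunk above only says to have vLLM running, optionally in a Docker container, without showing a command. As a minimal sketch (not part of this commit), assuming the official `vllm/vllm-openai` image, an available NVIDIA GPU, and an example model ID, a vLLM backend could be started like this:

```sh
# Sketch: start a vLLM OpenAI-compatible server in Docker.
# Assumptions: official vllm/vllm-openai image, NVIDIA GPU available,
# example model ID (swap in whatever model you want to serve).
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model meta-llama/Meta-Llama-3-8B-Instruct
```

The web interface would then reach it through `VLLM_URL`, e.g. `VLLM_URL=http://host.docker.internal:8000` in the `docker run` examples, or `VLLM_URL="http://localhost:8000"` in the `.env` file shown further down.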

````diff
@@ -46,50 +46,63 @@ If you're using Ollama, you need to set the `VLLM_MODEL`:
 docker run --rm -d -p 3000:3000 -e VLLM_URL=http://host.docker.internal:11434 -e VLLM_TOKEN_LIMIT=8192 -e VLLM_MODEL=llama3 ghcr.io/yoziru/nextjs-vllm-ui:latest
 ```
 
-Then go to [localhost:3000](http://localhost:3000) and start chatting with your favourite model!
-
-# Development 📖
-
-To install and run a local environment of the web interface, follow the instructions below.
-
-**1. Clone the repository to a directory on your pc via command prompt:**
-
-```
-git clone https://github.com/jakobhoeg/nextjs-ollama-llm-ui
-```
-
-**2. Open the folder:**
-
-```
-cd nextjs-ollama-llm-ui
-```
-
-**3. Rename the `.example.env` to `.env`:**
-
+If your server is running on a different IP address or port, you can use the `--network host` mode in Docker, e.g.:
 ```
-mv .example.env .env
+docker run --rm -d --network host -e VLLM_URL=http://192.1.0.110:11434 -e VLLM_TOKEN_LIMIT=8192 -e VLLM_MODEL=llama3 ghcr.io/yoziru/nextjs-vllm-ui:latest
 ```
 
-**4. If your instance of vLLM is NOT running on the default ip-address and port, change the variable in the .env file to fit your usecase:**
-
-```
-VLLM_URL="http://localhost:8000"
-VLLM_API_KEY="your-api-key"
-```
-
-**5. Install dependencies:**
+Then go to [localhost:3000](http://localhost:3000) and start chatting with your favourite model!
 
-```
-npm install
-```
+# Development 📖
 
-**6. Start the development server:**
+To install and run a local environment of the web interface, follow the instructions below.
 
+1. **Clone the repository to a directory on your pc via command prompt:**
+```
+git clone https://github.com/yoziru/nextjs-vllm-ui
+```
+
+1. **Open the folder:**
+```
+cd nextjs-vllm-ui
+```
+
+1. **Rename the `.example.env` to `.env`:**
+```
+mv .example.env .env
+```
+
+1. **If your instance of vLLM is NOT running on the default IP address and port, change the variable in the .env file to fit your use case:**
+```
+VLLM_URL="http://localhost:8000"
+VLLM_API_KEY="your-api-key"
+VLLM_MODEL="llama3:8b"
+VLLM_TOKEN_LIMIT=4096
+```
+
+1. **Install dependencies:**
+```
+yarn install
+```
+
+1. **Start the development server:**
+```
+yarn dev
+```
+
+1. **Go to [localhost:3000](http://localhost:3000) and start chatting with your favourite model!**
+
+
+You can also build and run the docker image locally with this command:
+```sh
+docker build . -t ghcr.io/yoziru/nextjs-vllm-ui:latest \
+&& docker run --rm \
+-p 3000:3000 \
+-e VLLM_URL=http://host.docker.internal:11434 \
+-e VLLM_MODEL=llama3.1:8b-instruct-q8_0 \
+-e NEXT_PUBLIC_TOKEN_LIMIT="8192" \
+ghcr.io/yoziru/nextjs-vllm-ui:latest
 ```
-npm run dev
-```
-
-**5. Go to [localhost:3000](http://localhost:3000) and start chatting with your favourite model!**
 
 # Tech stack
 
````
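Once the stack is up, a quick smoke test can confirm that both the UI and the backend are reachable. This is a sketch, not part of the diff: it assumes the ports from the `docker run` example above (UI on 3000, vLLM/Ollama backend on 11434) and that the backend exposes an OpenAI-compatible `/v1/models` endpoint, which current vLLM and Ollama releases do.

```sh
# Sketch: verify the UI container answers on port 3000.
curl -sf http://localhost:3000 > /dev/null && echo "UI is responding"

# Sketch: list the models the backend exposes via its OpenAI-compatible API.
curl -s http://localhost:11434/v1/models
```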