
Commit 27d52d9

Add a model_server example: podman-llm

This is a tool that was written to be as simple as ollama; in its simplest form it's: `podman-llm run granite`

Signed-off-by: Eric Curtin <[email protected]>

1 file changed: +89 −0

model_servers/podman-llm/README.md

# podman-llm

The goal of podman-llm is to make AI even more boring.

## Install

Install podman-llm by running this one-liner:

```
curl -fsSL https://raw.githubusercontent.com/ericcurtin/podman-llm/s/install.sh | sudo bash
```

## Usage

### Running Models

You can run a model using the `run` command. This will start an interactive session where you can query the model.

```
$ podman-llm run granite
> Tell me about podman in less than ten words
A fast, secure, and private container engine for modern applications.
>
```

### Serving Models

To serve a model via HTTP, use the `serve` command. This will start an HTTP server that listens for incoming requests to interact with the model.

```
$ podman-llm serve granite
...
{"tid":"140477699799168","timestamp":1719579518,"level":"INFO","function":"main","line":3793,"msg":"HTTP server listening","n_threads_http":"11","port":"8080","hostname":"127.0.0.1"}
...
```
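
The log line above comes from llama.cpp's server, so assuming the standard llama.cpp HTTP API is exposed on the logged host and port, you should be able to query it with curl. A minimal sketch (the prompt and `n_predict` value are just placeholders):

```
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Tell me about podman in less than ten words", "n_predict": 32}'
```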

## Model library

| Model              | Parameters | Run                            |
| ------------------ | ---------- | ------------------------------ |
| granite            | 3B         | `podman-llm run granite`       |
| mistral            | 7B         | `podman-llm run mistral`       |
| merlinite          | 7B         | `podman-llm run merlinite`     |

## Containerfile Example

Here is an example Containerfile:

```
# Base image providing the llama.cpp runtime
FROM quay.io/podman-llm/podman-llm:41
# Fetch the granite GGUF model from Hugging Face into the image
RUN llama-main --hf-repo ibm-granite/granite-3b-code-instruct-GGUF -m granite-3b-code-instruct.Q4_K_M.gguf
# Record the model's path so podman-llm can locate it at run time
LABEL MODEL=/granite-3b-code-instruct.Q4_K_M.gguf
```

`LABEL MODEL` is important so podman-llm knows where to find the .gguf file inside the image.
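
The label can be read back with podman itself; a minimal sketch, assuming the built image is tagged `granite`:

```
podman inspect --format '{{ index .Config.Labels "MODEL" }}' granite
```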

And we build via:

```
podman-llm build granite
```
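
Once built, the image should be runnable the same way as the models in the library above:

```
podman-llm run granite
```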

## Diagram

```
+------------------------+    +--------------------+    +------------------+
|                        |    | Pull runtime layer |    | Pull model layer |
|     podman-llm run     | -> |   for llama.cpp    | -> |   with granite   |
|                        |    | (CPU, Vulkan, AMD, |    |                  |
+------------------------+    |  Nvidia, Intel or  |    |------------------|
                              |   Apple Silicon)   |    |  Repo options:   |
                              +--------------------+    +------------------+
                                                            |            |
                                                            v            v
                                                   +--------------+  +---------+
                                                   | Hugging Face |  | quay.io |
                                                   +--------------+  +---------+
                                                           \              /
                                                            \            /
                                                             \          /
                                                              v        v
                                                          +-----------------+
                                                          | Start container |
                                                          | with llama.cpp  |
                                                          | and granite     |
                                                          | model           |
                                                          +-----------------+
```
