Commit e6201f3: Update build.md
1 parent c496612

File tree

1 file changed (+4, -4)


docs/build.md

Lines changed: 4 additions & 4 deletions
@@ -604,18 +604,18 @@ Follow the instructions below to install OpenVINO runtime and build llama.cpp wi
 
 - Linux or Windows system with Intel hardware (CPU, GPU, or NPU)
 - **For Intel GPU or NPU Usage**: Install the appropriate hardware drivers for your Intel GPU or NPU. For detailed instructions, see: [Additional Configurations for Hardware Acceleration](https://docs.openvino.ai/2025/get-started/install-openvino/configurations.html).
-- Git, CMake, and Ninja software tools are needed for building
+- Git, CMake, and Ninja software tools are needed for building.
 ```bash
 sudo apt-get update
 sudo apt-get install -y build-essential libcurl4-openssl-dev libtbb12 cmake ninja-build python3-pip curl wget tar
 ```
 
 ### 1. Install OpenVINO Runtime
 
-- Follow the guide to install OpenVINO Runtime from an archive file: **[Install OpenVINO™ Runtime on Linux from an Archive File.](https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-archive-linux.html)**
+- Follow the guide to install OpenVINO Runtime from an archive file: [Linux](https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-archive-linux.html) | [Windows](https://docs.openvino.ai/2025/get-started/install-openvino/install-openvino-archive-windows.html)
 
 <details>
-<summary>📦 Click to expand OpenVINO 2025.2 installation commands</summary>
+<summary>📦 Click to expand OpenVINO 2025.2 installation commands on Linux</summary>
 <br>
 
 ```bash
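The prerequisites hunk above requires Git, CMake, and Ninja. Before configuring the build, their presence can be confirmed with a short check loop; this is a generic sketch for convenience, not part of the patch or the documented commands.

```shell
# Report whether each required build tool is available on PATH.
for tool in git cmake ninja; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as MISSING is covered by the `apt-get install` line in the hunk above.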
@@ -689,7 +689,6 @@ export GGML_OPENVINO_DEVICE=GPU
 To run in chat mode:
 ```bash
 export GGML_OPENVINO_CACHE_DIR=/tmp/ov_cache
-
 ./build/ReleaseOV/bin/llama-cli -m ~/models/Llama-3.2-1B-Instruct.fp16.gguf -n 50 "The story of AI is "
 
 ```
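The chat-mode hunk above points `GGML_OPENVINO_CACHE_DIR` at a cache directory so compiled models can be reused across runs. A minimal sketch of preparing and inspecting that directory, using the same path as the diff (which may differ on your system):

```shell
# Create the cache directory used in the example above, then list its
# contents; cache entries appear here after a llama-cli run.
export GGML_OPENVINO_CACHE_DIR=/tmp/ov_cache
mkdir -p "$GGML_OPENVINO_CACHE_DIR"
ls -la "$GGML_OPENVINO_CACHE_DIR"
```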
@@ -715,6 +714,7 @@ export GGML_OPENVINO_PROFILING=1
 
 ./build/ReleaseOV/bin/llama-simple -m ~/models/Llama-3.2-1B-Instruct.fp16.gguf -n 50 "The story of AI is "
 ```
+> **Note:** To apply your code changes, clear the `GGML_OPENVINO_CACHE_DIR` directory and rebuild the project.
 
 ### Using Llama.cpp's Built-in CPU Backend (for Comparison)
 
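The note added in the last hunk warns that stale cache entries can mask code changes. A hedged sketch of the clear-and-recreate step, using the cache path from the examples above; the commented-out rebuild command is an assumption, so substitute whatever configure/build step you used originally:

```shell
# Remove the cache directory so no stale compiled models survive, then
# recreate it empty for the next run.
export GGML_OPENVINO_CACHE_DIR=/tmp/ov_cache
rm -rf "${GGML_OPENVINO_CACHE_DIR:?}"   # :? aborts if the variable is unset, guarding against rm -rf of /
mkdir -p "$GGML_OPENVINO_CACHE_DIR"
# cmake --build build/ReleaseOV         # assumed rebuild step; use your usual build command
```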
