Commit 1adc981
fix(nix): remove non-functional llama-cpp cachix cache from flake.nix (ggml-org#15295)
The flake.nix included references to the llama-cpp.cachix.org cache, with a comment
claiming it is 'Populated by the CI in ggml-org/llama.cpp', but:
1. No visible CI workflow populates this cache
2. The cache is empty for recent builds (tested b6150, etc.)
3. This misleads users into expecting pre-built binaries that don't exist
This change removes the non-functional cache references entirely, leaving only
the working cuda-maintainers cache that actually provides CUDA dependencies.
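For illustration, here is a minimal sketch of the kind of nixConfig block involved, assuming the usual cachix wiring; the attribute values and the placeholder key are assumptions, not text copied from the repository:

```nix
{
  # Hypothetical fragment of flake.nix after this change: only the
  # cuda-maintainers cache remains. The llama-cpp.cachix.org substituter
  # and its public key are the lines this commit deletes.
  nixConfig = {
    extra-substituters = [
      "https://cuda-maintainers.cachix.org"
    ];
    extra-trusted-public-keys = [
      # Placeholder; the real key is published on the cache's cachix page.
      "cuda-maintainers.cachix.org-1:<public-key>"
    ];
  };
}
```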
Users can still manually add the llama-cpp cache if it becomes functional in the future.
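If that cache is ever populated again, a user could opt in for a single invocation without editing the flake. A hedged sketch (the key string is a placeholder, since this commit references no published key):

```
nix build .#default \
  --extra-substituters https://llama-cpp.cachix.org \
  --extra-trusted-public-keys "llama-cpp.cachix.org-1:<public-key>"
```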
1 parent: b3e1666
1 file changed: flake.nix (+0, −5) · removed original lines 39–41, 50, and 53