fern/docs/text-gen-solution/openpipe-loras.mdx (1 addition, 8 deletions)
@@ -16,7 +16,6 @@ For more information about what a LoRA is, we recommend [this HuggingFace guide]
 This guide supports LoRAs for the following models:

 - Llama-3-8B (32K token context)
-- Mistral-7B Optimized (32K token context)

 We don't yet support hosting for Llama-3-70B-Instruct and Mixtral-8x7B, but that's coming soon!
@@ -82,7 +81,6 @@ octoai login
 Below, uncomment which base model, checkpoint, and LoRA URL you want to use. As noted above, we support:

 - Llama-3-8B (32K token context)
-- Mistral-7B Optimized (32K token context)

 For this demo, we'll go with the Llama-3-8B 32K context model. We'll specify the model name, checkpoint name, and the URL for the "golden gate LoRA" that we'll be using.
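As a rough illustration of the selection step this hunk describes, the uncommented values might look like the sketch below. The variable names, the checkpoint identifier, and the URL are placeholders of our own, not values taken from the guide; substitute the real OpenPipe download URL for the golden gate LoRA.

```bash
# Hedged sketch of choosing the base model, checkpoint, and LoRA URL.
# All three values are illustrative placeholders, not from the guide.
MODEL_NAME="llama-3-8b-instruct"       # assumed identifier for the 32K-context base model
CHECKPOINT_NAME="llama-3-8b-instruct"  # assumed base checkpoint name
LORA_URL="https://example.com/golden-gate-lora"  # placeholder OpenPipe download URL
```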
@@ -110,4 +103,4 @@ Now, let's upload and use a LoRA to alter the behavior of the model! Below, we upload the LoRA and its associated config files.

-We need to specify what base checkpoint and architecture ("engine") the model corresponds to. **Change the "engine" to mistral-7b if you want to use that model.**
+We need to specify what base checkpoint and architecture ("engine") the model corresponds to.

 The command below uses `--upload-from-url`, which lets you upload these files from the OpenPipe download URL. Note also that there is an `--upload-from-dir` option that lets you specify a local directory if you like.
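For illustration, the upload step being described could look roughly like the following. Only the `--upload-from-url` and `--upload-from-dir` flags come from the doc itself; the `octoai asset create` subcommand, the asset name, and the `--engine` flag are assumptions standing in for whatever the guide's actual command uses, and `LORA_URL` reuses the placeholder from the earlier sketch.

```bash
# Hedged sketch of the upload step. Only --upload-from-url and
# --upload-from-dir appear in the doc; the subcommand, the asset
# name, and the --engine flag are assumptions.
octoai asset create \
  --name golden-gate-lora \
  --engine llama-3-8b \
  --upload-from-url "$LORA_URL"

# Or, from a local directory instead of a URL:
# octoai asset create --name golden-gate-lora --engine llama-3-8b --upload-from-dir ./golden-gate-lora
```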