
Commit 8ee7b07

Update openpipe-loras.mdx
1 parent ed72dc1 commit 8ee7b07

File tree

1 file changed (+1, -8 lines)

fern/docs/text-gen-solution/openpipe-loras.mdx

Lines changed: 1 addition & 8 deletions
````diff
@@ -16,7 +16,6 @@ For more information about what a LoRA is, we recommend [this HuggingFace guide]
 This guide supports LoRAs for the following models:
 
 - Llama-3-8B (32K token context)
-- Mistral-7B Optimized (32K token context)
 
 We don't yet support hosting for Llama-3-70B-Instruct and Mixtral-8x7B, but that's coming soon!
 
````
````diff
@@ -82,7 +81,6 @@ octoai login
 Below, uncomment which base model, checkpoint, and LoRA URL you want to use. As noted above, we support:
 
 - Llama-3-8B (32K token context)
-- Mistral-7B Optimized (32K token context)
 
 For this demo, we'll go with the Llama-3-8B 32k context model. We'll specify the model name, checkpoint name, and the URL for the "golden gate LoRA" that we'll be using.
 
````
````diff
@@ -95,11 +93,6 @@ export GOLDEN_GATE_LORA_URL="https://s3.amazonaws.com/downloads.octoai.cloud/lor
 export MODEL_NAME="openpipe-llama-3-8b-32k" #A beta 32K llama-3 endpoint
 export CHECKPOINT_NAME="octoai:openpipe-llama-3-8b-32k"
 
-# # Mistral-7B Optimized (32K token context)
-# export GOLDEN_GATE_LORA_URL="https://s3.amazonaws.com/downloads.octoai.cloud/loras/text/golden_lora_mistral-7b.zip"
-# export MODEL_NAME="openpipe-mistral-7b" #An optimized Mistral-7B endpoint
-# export CHECKPOINT_NAME="octoai:openpipe-mistral-7b"
-
 #set LoRA name:
 export LORA_NAME="my_great_lora"
 ```
````
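For reference, the post-change environment setup that this hunk leaves behind can be collected into one standalone snippet. Note that the `GOLDEN_GATE_LORA_URL` value is truncated in the hunk header above, so it is left out here rather than guessed at; only the variables that appear in full in the diff are included.

```shell
# Post-change (Llama-3-8B only) environment setup, as left by this commit.
# GOLDEN_GATE_LORA_URL is truncated in the diff context and omitted here.
export MODEL_NAME="openpipe-llama-3-8b-32k"             # beta 32K Llama-3 endpoint
export CHECKPOINT_NAME="octoai:openpipe-llama-3-8b-32k" # matching checkpoint name
export LORA_NAME="my_great_lora"                        # name for the uploaded LoRA

echo "$MODEL_NAME $CHECKPOINT_NAME $LORA_NAME"
```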
````diff
@@ -108,7 +101,7 @@ export LORA_NAME="my_great_lora"
 
 Now, let's upload and use a LoRA to alter the behavior of the model! Below, we upload the LoRA and its associated config files.
 
-We need to specify what base checkpoint and architecture ("engine") the model corresponds to. **Change the "engine" to mistral-7b if you want to use that model.**
+We need to specify what base checkpoint and architecture ("engine") the model corresponds to.
 
 The command below uses `--upload-from-url` which lets you upload these files from the OpenPipe download URL. Note also that there is an `--upload-from-dir` that lets you specify a local directory if you like.
 
````
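To make the checkpoint/engine step concrete, here is a sketch of what an upload invocation could look like. Only `--upload-from-url` and `--upload-from-dir` are named in the text above; the `octoai lora create` subcommand, the `--name` and `--engine` flags, the engine value, and the URL are all illustrative assumptions, so the command is echoed rather than executed.

```shell
# Hypothetical sketch of the LoRA upload step. Only --upload-from-url comes
# from the text; the subcommand and other flag names are assumptions.
LORA_NAME="my_great_lora"
LORA_URL="https://example.com/my_great_lora.zip"  # placeholder URL

# Echo instead of executing, since a real CLI install and account are required.
echo octoai lora create \
  --name "$LORA_NAME" \
  --engine "llama-3-8b" \
  --upload-from-url "$LORA_URL"
```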
