
Commit 341e058

Merge pull request #12 from GreenBitAI/feature/gptq_model
update AutoGPTQ Q-SFT command
2 parents f05f1a4 + 9bdc18f

File tree

1 file changed: +6 -0 lines changed


README.md

Lines changed: 6 additions & 0 deletions
@@ -131,12 +131,18 @@ The '--tune-qweight-only' parameter determines whether to fine-tune only the quantized weights.
```bash
CUDA_VISIBLE_DEVICES=0 python -m green_bit_llm.sft.finetune --model GreenBitAI/Qwen-1.5-1.8B-layer-mix-bpw-3.0 --dataset tatsu-lab/alpaca --optimizer DiodeMix --tune-qweight-only

# AutoGPTQ model Q-SFT
CUDA_VISIBLE_DEVICES=0 python -m green_bit_llm.sft.finetune --model astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit --dataset tatsu-lab/alpaca --tune-qweight-only --batch-size 1
```
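As background only, the sketch below is not part of this commit or of green_bit_llm; it shows one way to load the referenced AutoGPTQ checkpoint with Hugging Face transformers and generate a few tokens as a quick sanity check before launching Q-SFT. The prompt and generation settings are arbitrary, and a GPTQ-capable backend (e.g. optimum/auto-gptq) is assumed to be installed.

```python
# Hedged sketch: load the GPTQ checkpoint used in the command above and run a
# short generation to confirm the quantized weights load correctly.
# This is illustrative only and not how green_bit_llm.sft.finetune works internally.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Briefly explain what supervised fine-tuning does."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```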

### Parameter efficient fine-tuning

```bash
CUDA_VISIBLE_DEVICES=0 python -m green_bit_llm.sft.peft_lora --model GreenBitAI/Qwen-1.5-1.8B-layer-mix-bpw-3.0 --dataset tatsu-lab/alpaca --lr-fp 1e-6

# AutoGPTQ model with LoRA
CUDA_VISIBLE_DEVICES=0 python -m green_bit_llm.sft.peft_lora --model astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit --dataset tatsu-lab/alpaca --lr-fp 1e-6
```
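As a rough illustration only, and not the green_bit_llm.sft.peft_lora implementation, attaching LoRA adapters to a GPTQ checkpoint with Hugging Face transformers + peft looks roughly like the sketch below. The rank, alpha, dropout, and target module names are assumptions chosen for a Llama-style architecture.

```python
# Hedged sketch of LoRA adapters on a GPTQ-quantized base model using peft.
# The LoRA hyperparameters and target_modules are assumptions, not values
# taken from green_bit_llm.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "astronomer/Llama-3-8B-Instruct-GPTQ-4-Bit"
base_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

lora_config = LoraConfig(
    r=16,            # adapter rank (assumed)
    lora_alpha=32,   # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # Llama-style attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable
```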

## License
