Replies: 2 comments
- closing to replace with a different question
- closing to replace with a different question
I used the Gemma3_(270M).ipynb notebook to fine-tune the Gemma3 model, then imported the fine-tuned GGUF into Ollama. Now I cannot figure out how to get Ollama to provide the final chess move. In fact, no matter what I pass to Ollama with this new model, it gets stuck in an infinite loop. Is that a sign that I messed something up during tuning, or should that be expected with this type of fine-tuning?
https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma3_(270M).ipynb
I followed the steps from the Llama3_(8B)-Ollama.ipynb notebook to figure out how to export my fine-tune from Gemma3_(270M).ipynb into Ollama (the export I ran is sketched below the link):
https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3_(8B)-Ollama.ipynb
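For reference, the export path I followed looks roughly like the sketch below. It is adapted from the Llama3_(8B)-Ollama notebook rather than copied verbatim; the "lora_model" directory and the "gemma3-chess" output name are placeholders for my own files, not values fixed by the notebooks.

```python
# Rough sketch of the GGUF export (adapted from the Llama3_(8B)-Ollama
# notebook). "lora_model" and "gemma3-chess" are placeholder names for
# my own fine-tune, not exact values from the notebooks.
from unsloth import FastModel

# Reload the fine-tuned adapter saved at the end of Gemma3_(270M).ipynb.
model, tokenizer = FastModel.from_pretrained(
    model_name="lora_model",      # folder written by model.save_pretrained(...)
    max_seq_length=2048,
    load_in_4bit=False,
)

# Merge and export to GGUF so Ollama can import it.
model.save_pretrained_gguf(
    "gemma3-chess",               # output directory (placeholder)
    tokenizer,
    quantization_method="q8_0",   # 8-bit GGUF quantization
)
```

From there I wrote a Modelfile whose FROM line points at the exported .gguf, ran `ollama create`, and the infinite generation happens as soon as I `ollama run` the model, whatever the prompt.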