Fine-Tuning with Unsloth does not create any Modelfile #2653
dave-espinosa asked this question in Q&A (Unanswered)
Hello everyone. I am following the Tutorial: How to Finetune Llama-3 and Use In Ollama, but when I run the command from the tutorial, the Modelfile is never created. The same thing happens with the alternative command, which produces no Modelfile either. I have seen that the tutorial mentioned above already includes this workaround, so I am not sure what else it could be. What could be going on here?

BTW, I am running that notebook from a GCP Vertex AI Jupyter Notebook.
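For reference, the export step I am referring to looks roughly like this; the model name, output directory, and quantization method below are placeholders for illustration, not my exact values:

```python
# Rough sketch of the tutorial's export step (names and paths here are
# placeholders, not the exact values from my notebook).
import os
from unsloth import FastLanguageModel

# Load the fine-tuned model and tokenizer saved earlier in the notebook.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lora_model",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Export to GGUF; this is the point where a Modelfile is expected to
# appear next to the .gguf file in the output directory.
model.save_pretrained_gguf("model", tokenizer, quantization_method="q8_0")

print(os.listdir("model"))  # no "Modelfile" ever shows up here
```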
Replies: 1 comment

I have been using this _ollama_modelfile method to generate a Modelfile while the model and tokenizer are loaded: print(tokenizer._ollama_modelfile)
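Roughly, with the model and tokenizer still loaded in the notebook, this is what I mean (the output file name and the model name in the comment are just examples):

```python
# Assumes `model` and `tokenizer` are already loaded from the earlier
# fine-tuning steps, so Unsloth has attached the generated Modelfile
# text to the tokenizer.
modelfile_text = tokenizer._ollama_modelfile
print(modelfile_text)

# Write it out for Ollama; check the printed contents first (e.g. that
# the FROM line points at your exported GGUF file).
with open("Modelfile", "w", encoding="utf-8") as f:
    f.write(modelfile_text)

# Then, from a shell:  ollama create my-finetune -f Modelfile
```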