Invalid model error: too old, regenerate your model files! #361
Use the new links from the README, which were updated an hour or two ago. Edit: nvm - I see you are already using them. I guess these are the old Alpaca models. You can convert them to the new format with a helper script; it's somewhere in the repo 😄
I think this script should help, but I'm not sure: #324 (comment)
Yes, the link @ggerganov gave above works. Download the script mentioned in the link above, save it, and run it on your old model file; it will produce the converted file.
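A quick way to check whether a file is still in the old format, assuming the magic values llama.cpp used around that time (0x67676d6c for unversioned files, 0x67676d66 for the versioned format; double-check against the llama.cpp source):
head -c 4 ggml-alpaca-7b-q4.bin | xxd
The magic is stored little-endian, so an old unversioned file shows the bytes 6c 6d 67 67 and a converted (versioned) one shows 66 6d 67 67; these values are assumed from that era of the code, so verify before relying on them.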
Thanks @ggerganov and @edwios
I have the same issue, but in my case the script isn't working either.
Same issue as @ThatCoffeeGuy. My model is the old style, and it doesn't work with binaries built after the breaking change. Edit: the script worked on the 7B Alpaca model I downloaded a week ago.
Does anyone have a converted Alpaca 13B model .bin file they could share, please?
Can someone share the converted Alpaca 13B model .bin file?
Use this script. An example invocation is sketched below.
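A sketch of one possible invocation, assuming the script takes an input model and an output path, as in the command shown further down the thread; the file names here are illustrative, so check the script's own usage message first:
python3 convert-unversioned-ggml-to-ggml.py ggml-model-q4_1.bin ggml-model-q4_1-new.bin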
Thank you.
And just to confirm, the file I should be converting is this one, right? (https://huggingface.co/Pi3141/alpaca-native-13B-ggml/blob/main/ggml-model-q4_1.bin)
That I don't know; I had my models converted from their original files.
Hi, I am just not able to do the conversion. I keep getting an error with the command below:
cduser@CDPL17-QA:~/dalai/llama$ python3 convert-pth-to-ggml.py models/13B/ggml-model-q4_1.bin models/13B/ggml-model-q4_0.bin
I even tried what is mentioned in the instructions at https://github.com/ggerganov/llama.cpp
Hi, I only have this script in llama.cpp after installation: convert-pth-to-ggml.py
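As a side note, convert-pth-to-ggml.py converts the original PyTorch checkpoints (consolidated.*.pth), not ggml .bin files, so passing a quantized .bin as in the command above will fail. The llama.cpp README of that era showed roughly this usage, where the trailing 1 selects f16 output and the 13B path is illustrative:
python3 convert-pth-to-ggml.py models/13B/ 1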
I was able to locate the script, but stumbled upon another roadblock that I am trying to solve. Any help appreciated:
:~/llama.cpp$ python3 convert-unversioned-ggml-to-ggml.py ../ggml-model-q4_1.bin ../dalai/alpaca/models/13B/ggml-model-q4_1.bin
Downloaded the Alpaca 7B model successfully using the following command, as mentioned in README.md:
curl -o ./models/ggml-alpaca-7b-q4.bin -C - https://gateway.estuary.tech/gw/ipfs/QmUp1UGeQFDqJKvtjbSYPBiZZKRjLp8shVP9hT8ZB9Ynv1
When I try to execute the command:
main -m ./models/ggml-alpaca-7b-q4.bin --color -f ./prompts/alpaca.txt -ins
This is the error output:
main: seed = 1679417098
llama_model_load: loading model from './models/ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: invalid model file './models/ggml-alpaca-7b-q4.bin' (too old, regenerate your model files!)
main: failed to load model from './models/ggml-alpaca-7b-q4.bin'
How do I fix this? Is the downloaded model corrupted, and should I download it again? What is the SHA-1 hash of the model, so that I can verify whether my download is corrupted?
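On the checksum question: if a reference hash gets posted, verifying the download locally is a one-liner (use shasum instead of sha1sum on macOS); the expected value below is a placeholder, not a real hash:
sha1sum ./models/ggml-alpaca-7b-q4.bin
# then compare against the published value, e.g.:
#   echo "<expected-sha1>  ./models/ggml-alpaca-7b-q4.bin" | sha1sum -c -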