
Invalid model error : too old, regenerate your model files! #361

Closed
strfic opened this issue Mar 21, 2023 · 14 comments
Labels
documentation Improvements or additions to documentation model Model specific

Comments

strfic commented Mar 21, 2023

Downloaded Alpaca 7B model successfully using the following command as mentioned in README.md:
curl -o ./models/ggml-alpaca-7b-q4.bin -C - https://gateway.estuary.tech/gw/ipfs/QmUp1UGeQFDqJKvtjbSYPBiZZKRjLp8shVP9hT8ZB9Ynv1

When I try to execute the command:
main -m ./models/ggml-alpaca-7b-q4.bin --color -f ./prompts/alpaca.txt -ins

This is the error output:
main: seed = 1679417098
llama_model_load: loading model from './models/ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: invalid model file './models/ggml-alpaca-7b-q4.bin' (too old, regenerate your model files!)
main: failed to load model from './models/ggml-alpaca-7b-q4.bin'

How do I fix this? Is the downloaded model corrupted, and should I download it again? What is the SHA-1 hash of the model, so that I can verify whether my download is corrupted?
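(If a checksum is ever published for the file, you can verify the download locally before digging further. A minimal sketch; `file_sha1` is a hypothetical helper, not something in the repo, and the expected hash must come from whoever hosts the file:)

```python
import hashlib

def file_sha1(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-1 of a file in chunks, so multi-GB models
    are hashed without loading them fully into memory."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (path from the issue); compare against a published checksum --
# the right-hand side here is a placeholder:
# assert file_sha1("./models/ggml-alpaca-7b-q4.bin") == "<published sha1>"
```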

ggerganov (Member) commented Mar 21, 2023

Use the new links from the README which were updated like an hour or two ago

Edit: nvm - I see you are using them. I guess these are the old Alpaca models. You can convert them to the new format with a helper script. It's somewhere in the repo 😄

@gjmulder gjmulder added documentation Improvements or additions to documentation model Model specific labels Mar 21, 2023
ggerganov (Member) commented:

I think this script should help, but not sure: #324 (comment)

edwios commented Mar 21, 2023

Yes, the link @ggerganov gave above works. Just use the same tokenizer.model that comes with the LLaMA models.

Download the script mentioned in the link above and save it as, for example, convert.py in the same directory as main, then just run:
python convert.py models/Alpaca/7B models/tokenizer.model (adjust the paths to the model directory and to the tokenizer as needed)

You will find a file called ggml-alpaca-7b-q4.bin.tmp in the same directory as your 7B model. Move the original file somewhere else, rename the .tmp file to ggml-alpaca-7b-q4.bin, and you are good to go.

@strfic strfic closed this as completed Mar 22, 2023
strfic commented Mar 22, 2023

Thanks @ggerganov and @edwios

ThatCoffeeGuy commented:

I have the same issue, but in my case the script isn't working either:

python3 convert.py models/alpaca-7B-ggml/ models/tokenizer.model:
[...]
    raise Exception('Invalid file magic. Must be an old style ggml file.')
Exception: Invalid file magic. Must be an old style ggml file.
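(For what it's worth, the "Invalid file magic" check just reads the first four bytes of the file, so you can inspect them yourself to see which generation of file you have. A hypothetical diagnostic helper, not code from the repo; the magic values are the ones I believe llama.cpp used around this time:)

```python
import struct

# Assumed magic values for the successive llama.cpp file formats.
MAGICS = {
    0x67676D6C: "ggml (old, unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (mmap-able)",
}

def identify_magic(path: str) -> str:
    """Read the first 4 bytes as a little-endian uint32 and map them
    to a known ggml-family magic, if any."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
```

If this reports anything other than the old unversioned magic, the convert-unversioned script is not the right tool for that file.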

jessejohnson (Contributor) commented Mar 28, 2023

Same issue as @ThatCoffeeGuy. My model is the old style; it doesn't work with binaries built after the breaking change.

Edit: This script worked on the 7B alpaca model I downloaded a week ago.

sachinspanicker commented:

Does anyone have a converted Alpaca 13B model .bin file they could share so I could download it, please?
I am getting the error below no matter what. I tried converting.

Output:

main: seed = 1681003839
llama_model_load: loading model from 'models/13B/ggml-model-q4_0.bin' - please wait ...
llama_model_load: invalid model file 'models/13B/ggml-model-q4_0.bin' (bad magic)
main: failed to load model from 'models/13B/ggml-model-q4_0.bin'

sachinspanicker commented:

Can someone share the converted Alpaca 13B model .bin file?

edwios commented Apr 9, 2023

Use the convert-unversioned-ggml-to-ggml.py script that comes with llama.cpp.

For example:

python convert-unversioned-ggml-to-ggml.py models/13B/oldggml/ggml-model-q4_0.bin models/13B/ggml-model-q4_0.bin

sachinspanicker commented:

Thank you.

sachinspanicker commented:

> Use this script convert-unversioned-ggml-to-ggml.py that comes with the llama.cpp.
>
> For example:
>
> python convert-unversioned-ggml-to-ggml.py models/13B/oldggml/ggml-model-q4_0.bin models/13B/ggml-model-q4_0.bin

And just to confirm, the file I should be converting is this one, right? (https://huggingface.co/Pi3141/alpaca-native-13B-ggml/blob/main/ggml-model-q4_1.bin)

edwios commented Apr 9, 2023

> > Use this script convert-unversioned-ggml-to-ggml.py that comes with the llama.cpp.
> > For example:
> > python convert-unversioned-ggml-to-ggml.py models/13B/oldggml/ggml-model-q4_0.bin models/13B/ggml-model-q4_0.bin
>
> And just to confirm, the file that I should be converting is this right ? (https://huggingface.co/Pi3141/alpaca-native-13B-ggml/blob/main/ggml-model-q4_1.bin)

That I don't know; I converted my models from their .pth files in the beginning. But if you got the message saying the file magic is wrong, then the above script will likely help.

sachinspanicker commented:

> > Use this script convert-unversioned-ggml-to-ggml.py that comes with the llama.cpp.
> > For example:
> > python convert-unversioned-ggml-to-ggml.py models/13B/oldggml/ggml-model-q4_0.bin models/13B/ggml-model-q4_0.bin
> >
> > And just to confirm, the file that I should be converting is this right ? (https://huggingface.co/Pi3141/alpaca-native-13B-ggml/blob/main/ggml-model-q4_1.bin)
>
> That I don't know. I have had my models converted from their .pth files in the beginning. But if you've got the message telling you the file magic is wrong, then likely the above script would help.

Hi, I'm just not able to do the conversion. I keep getting the error below:

cduser@CDPL17-QA:~/dalai/llama$ python3 convert-pth-to-ggml.py models/13B/ggml-model-q4_1.bin models/13B/ggml-model-q4_0.bin
usage: convert-pth-to-ggml.py [-h] dir_model {0,1} [vocab_only]
convert-pth-to-ggml.py: error: argument ftype: invalid int value: 'models/13B/ggml-model-q4_0.bin'
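(That usage error is because convert-pth-to-ggml.py takes a model directory plus an integer ftype ({0,1}), not two .bin paths. A sketch of an argparse parser mirroring the usage line above shows why the second path is rejected; the help strings are my assumptions:)

```python
import argparse

# Mirrors the usage string: convert-pth-to-ggml.py [-h] dir_model {0,1} [vocab_only]
parser = argparse.ArgumentParser(prog="convert-pth-to-ggml.py")
parser.add_argument("dir_model", help="directory containing the .pth model files")
parser.add_argument("ftype", type=int, choices=[0, 1],
                    help="0 = float32, 1 = float16 (assumed meaning)")
parser.add_argument("vocab_only", nargs="?", default=0, type=int,
                    help="only write the vocabulary")

# Passing a .bin path where the integer ftype belongs fails exactly as above:
#   parser.parse_args(["models/13B/ggml-model-q4_1.bin",
#                      "models/13B/ggml-model-q4_0.bin"])  # SystemExit
args = parser.parse_args(["models/13B", "1"])
```

In other words, this script converts a .pth checkpoint directory, not an existing ggml .bin file; the .bin-to-.bin case needs the convert-unversioned script instead.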

I even tried what is mentioned in the instructions at https://github.com/ggerganov/llama.cpp


I only have convert-pth-to-ggml.py in llama.cpp after installation.
Where can I get convert-unversioned-ggml-to-ggml.py?

sachinspanicker commented:


I was able to locate the script, but stumbled upon another roadblock. Trying to solve it; any help appreciated.

:~/llama.cpp$ python3 convert-unversioned-ggml-to-ggml.py ../ggml-model-q4_1.bin ../dalai/alpaca/models/13B/ggml-model-q4_1.bin
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
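(That std::bad_alloc is what you'd expect if the input isn't actually an old-style ggml file; the q4_1 file linked above may already be in a newer format. The converter then reads garbage header fields and tries to allocate an enormous buffer. A hedged sketch of the kind of sanity check that turns this into a readable error, assuming little-endian uint32 header fields; `read_header_field` is a hypothetical helper, not code from the repo:)

```python
import struct

# Assumption: no plausible header field (dims, vocab size, etc.) exceeds this.
MAX_REASONABLE_VALUE = 1 << 24

def read_header_field(f, name: str) -> int:
    """Read one little-endian uint32 and reject implausible values,
    instead of letting a later allocation die with bad_alloc."""
    raw = f.read(4)
    if len(raw) != 4:
        raise ValueError(f"truncated file while reading {name}")
    (value,) = struct.unpack("<I", raw)
    if value > MAX_REASONABLE_VALUE:
        raise ValueError(f"implausible {name}={value}; wrong input format?")
    return value
```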


7 participants