Cannot read properties of undefined (reading 'split') #164

I followed the exact instructions to install the models. I am able to get the Web UI up and running, but when I try to submit a prompt, I get this error:

Cannot read properties of undefined (reading 'split')
You probably don't have the models installed. What does it print when you just start the webui? It should show the installed model path and folders.
Having the same error here. Will take a look at that and share any findings @rupakhetibinit
Maybe it's obvious, but it took me a bit: if you installed the model to a custom path, you also have to serve it from that custom path.
I have the same issue. I installed the model to a custom path and am serving it from that same path. The logs clearly show it has detected the models, but I'm getting the same error.
Same problem without using a custom path. The model is in the mentioned folder.
Which version are you guys using? What does
I was already on 0.3.1, but I cleared the cache and reinstalled as you said, and the issue still persists.
Check one thing for me. Does the directory dalai/alpaca/build/Release contain three files: ggml.lib, main.exe, and quantize.exe? Also use Node LTS version 18.15.0.
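If you'd rather script that check than click through Explorer, here's a minimal Node.js sketch. It assumes the default dalai layout named above and is run from the dalai home directory; adjust the path if you installed somewhere custom:

```js
// Check that the three build artifacts mentioned above actually exist.
// Run from the dalai home directory; the path is based on the directory
// named in this thread, not anything verified against the installer.
const fs = require("fs");
const path = require("path");

const releaseDir = path.join("alpaca", "build", "Release");
for (const file of ["ggml.lib", "main.exe", "quantize.exe"]) {
  const ok = fs.existsSync(path.join(releaseDir, file));
  console.log(`${file}: ${ok ? "found" : "MISSING"}`);
}
```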
Well, I have 0.3.1 + 18.15.0, and my folder structure looks exactly like this instead on Windows 10 (I don't see any "build" folder): https://raw.githubusercontent.com/cocktailpeanut/dalai/main/docs/alpaca_7b.png
I'll try and get back to you after I do a clean install from scratch.
Did everyone follow the steps for the Visual Studio installation if you're on Windows? This needs Visual Studio and the necessary components installed. Also, run this in Command Prompt instead of PowerShell.
Yes, I did all the steps. Actually, you get a different error if they weren't installed.
I deleted Visual Studio. I deleted everything I had and just ran a totally clean install on my laptop again, and it works fine for me. I have no idea what causes this error.
Yes, all three are present.
Will try.
Yes, I had not installed it initially and it was throwing some other error. However, I have only installed the Desktop development with C++ workload from VS. I use Scoop as my package manager on Windows, so I have Python and Node.js installed from there.
I had the same issue and I was using Firefox. I switched to Internet Explorer and it worked after that.
Wow, that actually solved the issue. I was using Firefox too.
Actually yeah, if I open it in Edge or a Firefox private window, it works. But now the responses are weird, like
Also able to replicate the same issue.
Had the same issue. I changed index.js line 219 from:
to:
Works for me after this small hack, on both Node.js 19.8.1 and 18.15.0.
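The before/after snippets didn't survive the paste above, so treat the following as an illustrative guard rather than the project's actual line 219. The idea is just to stop `.split(".")` from being called on an undefined model:

```js
// Illustrative only: the crash happens when a request arrives without a
// model name (e.g. "alpaca.7B"), so guard before splitting. The fallback
// value here is an assumption, not dalai's real default.
const modelName = req.model || "alpaca.7B";
const [core, model] = modelName.split(".");
```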
Normal text, yes; intelligent text, not quite.
I fixed it by adding the following line of code after line 303 in the file bin/web/views/index.ejs: `config.model = document.querySelector('#model').value;`
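For anyone wondering why that one line helps, here's a rough sketch of the failure mode, assuming the page restores a `config` object from localStorage before submitting (the names here are illustrative, not the actual index.ejs code):

```js
// If the cached config has no `model` key, this throws
// "Cannot read properties of undefined (reading 'split')":
//   const [core, model] = config.model.split(".");
// Re-reading the dropdown right before submitting guarantees it is set:
config.model = document.querySelector('#model').value; // e.g. "alpaca.7B"
const [core, model] = config.model.split(".");          // safe now
```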
Having this same issue on WSL, accessing the server through Chrome on Windows. Anyone know how I can browse to the index.js file in bash? `cd home/USER/.npm` doesn't work. Edit: accessing it in Edge works fine.
Strange, all my attempts have failed with one error or another. Switching browsers doesn't seem to work for some reason, and any attempt to fix the error results in more ridiculous errors. I guess I'll have to wait until it's fixed. If you're interested, here is one of the attempts to fix the code, and the error messages:
Result is:
I switched to Firefox and I am no longer getting this issue. I was using Chrome previously.
I did this multiple times, to no avail. Still getting
Switching to incognito in the Edge browser seems to work. Will have to try other browsers as well.
How curious... only the alpaca 7B model seems to work properly, and only if the browser wasn't previously tainted by llama 7B. Running alpaca 7B in an incognito/fresh browser session fixes the error.
So it seems like the problem with llama 7B was that it wasn't converted into ggml or whatever. I dug into the program files, found a directory for llama.cpp, and followed the usage directions after rerunning
Hm... even being able to load llama 7B, I can't get any prompts to process... but that might be a completely different issue.
Thank you!
I switched to Firefox and it solved the problem.
Switching to Chrome (from Edge) in incognito mode works for me, but only the Alpaca 7B model shows up in the dropdown, though it shows:
Wonder why the Llama model doesn't show up even though it's listed, and found, no? Or is it that it doesn't find it in the llama folder, as it doesn't say "exists 7B" below? Here's the content of the folder:
The .pth file is 13.5 GB.
If you want to get it to show up, you have to run through the usage process in the readme file under the dalai/llama directory. I got it to show up by moving the folder into WSL, going through the instructions, and then moving it back. Getting it to work is a different question though, because my prompts wouldn't process no matter how long I waited. No idea why.
Had the same issue and solved it. Here's how I did it: open the webUI, go to inspect element, then to the Storage tab and Local Storage. There should be an entry called 'config'. It should have a field called 'models'. If it's empty, that's the reason why 'models' doesn't show anything and why you get an error. It should be an array containing your model in the format core.model; for example, it could be ['llama.7B'], so change it to your core and model. 'config' may also have a field called 'model'; change it to core.model as well. Then try running your prompt again. If it's still not working no matter how long you wait, check that you have the models downloaded. As soon as you click 'Go' it executes:
As you see, it (in my case) needs the file models/7B/ggml-model-q4_0.bin. It should be something similar in your case; you'll find this command in the terminal after clicking 'Go', so check what the file is called and whether it exists in your llama folder. If not, search for it on the internet, download it, and put it in its place. If it's still not working after making sure you have the file, it may be because your model is too old (at least that was the case for me); in that case, you'll find a solution here: ggml-org/llama.cpp#361. You can also try executing the command from your terminal instead of using the webUI, to see what errors you get. Hope this helps!
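If you'd rather script the fix above than edit the value by hand, here's a small snippet you can paste into the browser devtools console on the webUI page. It assumes the UI stores its state as JSON under the 'config' localStorage key, as described above; replace "llama.7B" with your own core.model:

```js
// Repair the webUI's cached config in localStorage, then reload.
// "llama.7B" is an example; use whichever core.model you installed.
const config = JSON.parse(localStorage.getItem("config") || "{}");
config.models = ["llama.7B"]; // the list the model dropdown is built from
config.model = "llama.7B";    // the currently selected model
localStorage.setItem("config", JSON.stringify(config));
location.reload();
```

This would also explain why switching browsers or going incognito "fixes" the error: a fresh browser profile simply has no stale 'config' entry yet.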
I was getting the same issue. Switching to Firefox worked for me!
I was getting the same issue. Switching to Firefox did not work for me!