["ℹ Model not initialized, starting initialization..."]
["ℹ Checking model file..."]
["⚠ Model already exists."]
["⚠ LlamaService: No CUDA detected - local response will be slow"]
["ℹ Initializing Llama instance..."]
["ℹ Creating JSON schema grammar..."]
["ℹ Loading model..."]
["ℹ Creating context and sequence..."]
["✓ Model initialization complete"]
Response
{ "user": "Eliza", "text": "well that depends on what you're investing in... i'm partial to the futures market where the only certainty is uncertainty... care to parse the quantum indeterminacy of modern finance over a dram or two?", "action": "NONE" }
(End):// End of conversation
:// Generated by: https://github.com/ConversationalAI/DialogueAPI
:// Date: Sat Jan 20 2024
// End of message
:// End of message
// End of message
// End of messages
// End of conversation
// End of conversation
// End of conversations
// End of conversations
// End of Conversations
// End of Conversations
// End of Conversations
// End of Conversations
// End of Conversations
// End of Conversations
Describe the bug
The client never gets the model's response, and the server keeps repeating
// End of conversation
To Reproduce
Expected behavior
The client should receive the model's response, and the server should not loop printing the same line over and over.
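For reference, here is a minimal sketch (not the actual eliza client code) of the `{ user, text, action }` shape the client is waiting for; the field names are inferred from the response JSON in the log below, and the interface and parser are purely illustrative. A completion padded with `// End of conversation` lines never passes this kind of parse, which matches the behavior seen here.

```ts
// Hypothetical sketch only: field names are taken from the response JSON in
// the log below; this is not the project's actual type or parser.
interface MessageResponse {
  user: string;
  text: string;
  action: string;
}

function tryParseResponse(raw: string): MessageResponse | null {
  try {
    const parsed = JSON.parse(raw);
    if (
      typeof parsed.user === "string" &&
      typeof parsed.text === "string" &&
      typeof parsed.action === "string"
    ) {
      return parsed as MessageResponse;
    }
  } catch {
    // Completions wrapped in "// End of conversation" comments end up here.
  }
  return null;
}
```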
Screenshots
Additional context
LOG
◎ LOGS
Creating Memory
31deda32-5963-083c-8283-189a6f6c3616
Yo Eliza , need some investment advice in this market
["◎ Generating message response.."]
["◎ Generating text..."]
ℹ INFORMATIONS
Generating text with options:
{"modelProvider":"llama_local","model":"large"}
ℹ INFORMATIONS
Selected model:
NousResearch/Hermes-3-Llama-3.1-8B-GGUF/resolve/main/Hermes-3-Llama-3.1-8B.Q8_0.gguf?download=true
["ℹ Model not initialized, starting initialization..."]
["ℹ Checking model file..."]
["⚠ Model already exists."]
["⚠ LlamaService: No CUDA detected - local response will be slow"]
["ℹ Initializing Llama instance..."]
["ℹ Creating JSON schema grammar..."]
["ℹ Loading model..."]
["ℹ Creating context and sequence..."]
["✓ Model initialization complete"]
Response
{ "user": "Eliza", "text": "well that depends on what you're investing in... i'm partial to the futures market where the only certainty is uncertainty... care to parse the quantum indeterminacy of modern finance over a dram or two?", "action": "NONE" }
(End):// End of conversation
:// Generated by: https://github.com/ConversationalAI/DialogueAPI
:// Date: Sat Jan 20 2024
// End of message
:// End of message
// End of message
// End of messages
// End of conversation
// End of conversation
// End of conversations
// End of conversations
// End of Conversations
// End of Conversations
// End of Conversations
// End of Conversations
// End of Conversations
// End of Conversations
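From the log, the local model does emit the JSON reply once, but then keeps generating comment-style terminator lines instead of stopping, which is presumably why the client never sees the reply. As a rough illustration only (a hypothetical workaround sketch, not code from the repo), the raw completion could be truncated at the first such marker before parsing:

```ts
// Hypothetical post-processing sketch: cut the raw completion at the first
// spurious terminator line ("// End of ...", ":// ...", "(End):...") that the
// local model keeps emitting, then parse what is left.
const TERMINATOR = /^\s*(\(End\))?:?\/\/\s*(End of|Generated by|Date:)/m;

function stripTrailingMarkers(raw: string): string {
  const match = TERMINATOR.exec(raw);
  return match ? raw.slice(0, match.index).trim() : raw.trim();
}

// Applied to the output logged above, this would leave only the
// { "user": "Eliza", ... } object for the client to parse.
```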
Hello @hiteshjoshi1! Welcome to the ai16z community. Thank you for opening your first issue; we appreciate your contribution. You are now a ai16z contributor!