Hello, I was curious whether it is possible to run models locally via vLLM. The README mentions HF TGI for running local models, and looking through the experimental dspy branch, it seems that HF TGI is chosen for the model whenever an OpenAI model is not provided. Should I modify the experimental branch to add vLLM support, or is there another way to run local models with vLLM?
Ideally DSPy handles all of this, and IReRa just uses whatever LM you supply. To run with vLLM for now, it is indeed best to change how the models are created in the irera branch of DSPy. Longer term, I'd need to think of a more scalable way of handling model providers.
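For reference, here is a minimal sketch of what that change could look like, assuming the DSPy version on the irera branch exposes `dspy.HFClientVLLM` and that you have a vLLM server running locally; the model name, host, and port below are placeholders for your own setup.

```python
# Sketch: swap the HF TGI client for a vLLM client in the model-creation code.
# Assumes a vLLM server is already serving the model locally, e.g. started with
# something like `python -m vllm.entrypoints.openai.api_server --model <model> --port 8000`.
import dspy

student_lm = dspy.HFClientVLLM(
    model="meta-llama/Llama-2-7b-chat-hf",  # hypothetical local model
    url="http://localhost",
    port=8000,
)

# Make the vLLM-backed LM the default so IReRa's DSPy programs pick it up
# without further changes.
dspy.settings.configure(lm=student_lm)
```

On more recent DSPy releases, the rough equivalent would be pointing the OpenAI-style client at vLLM's OpenAI-compatible endpoint instead, but the exact API depends on the DSPy version pinned by the branch.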