Fix the logic for creating LLM adapters. #97
As implemented, the adapter-creation logic required devs to install all optional LLM dependencies (google, anthropic, openai) as pip requirements, regardless of which subset they actually needed.
The reason was that attempting to `import` from `pipecat.services.<llm>` would actually raise a different exception (`Exception(f"Missing module: {e}")`) than the one being handled (`ImportError`).
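To make the mismatch concrete, here's a minimal sketch of the pattern (the module chosen and the exact messages are illustrative, not the verbatim pipecat source): the service module swallows the failed optional import and re-raises a plain `Exception`, so an `except ImportError` around the service import never fires.

```python
# Inside a service module such as pipecat/services/anthropic.py (illustrative):
try:
    from anthropic import AsyncAnthropic  # optional dependency
except ModuleNotFoundError as e:
    # The original ImportError/ModuleNotFoundError is replaced with a plain
    # Exception, which is not what the adapter-creation code expected.
    raise Exception(f"Missing module: {e}")

# In the adapter-creation code (the handler that never matched):
try:
    from pipecat.services.anthropic import AnthropicLLMService
except ImportError:
    # Never reached: the failure above is a plain Exception, not an
    # ImportError, so it propagates and effectively makes every optional
    # LLM dependency mandatory.
    AnthropicLLMService = None
```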
But something else was amiss here. By even attempting to `import` from an LLM service module we didn't need (say, `pipecat.services.anthropic`), we would inadvertently trigger some scary logger errors saying that the module and the corresponding API key were missing, even though they aren't needed.

It's worth noting there's no safety-related reason to import any LLM service modules here: if the dev has forgotten to install the appropriate dependency, they'll hit the error when trying to create the LLM service (e.g. `AnthropicLLMService`) in the first place.
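A rough sketch of the direction this implies (not the exact diff; the adapter class names are stand-ins, and only `AnthropicLLMService` among the service names comes from this description): decide which adapter to create from the LLM service instance the dev has already constructed, and never import `pipecat.services.*` modules speculatively.

```python
# Illustrative sketch: OpenAIAdapter, AnthropicAdapter, and GoogleAdapter stand
# in for whatever adapter classes the project actually defines, and the service
# class names in the mapping are assumed to follow pipecat's naming.
class OpenAIAdapter: ...
class AnthropicAdapter: ...
class GoogleAdapter: ...


def create_adapter(llm):
    """Pick an adapter based on the LLM service the dev already constructed.

    No pipecat.services.* module is imported here, so optional dependencies
    for providers the dev never asked for are never touched, and no spurious
    "missing module / missing API key" errors get logged.
    """
    by_service_name = {
        "OpenAILLMService": OpenAIAdapter,
        "AnthropicLLMService": AnthropicAdapter,
        "GoogleLLMService": GoogleAdapter,
    }
    try:
        return by_service_name[type(llm).__name__]()
    except KeyError:
        raise ValueError(f"Unsupported LLM service: {type(llm).__name__}") from None
```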