feat: add together ai plugin (Py SDK) #1363
base: master
Conversation
```python
if isinstance(response, t.Iterator):
    chunk: ChatCompletionChunk = next(response)
    completion: ChatCompletionResponse = ChatCompletionResponse(
        **chunk.model_dump()
    )
```
The `handle_tool_calls()` method only processes the first chunk when given an iterator, discarding remaining chunks that could contain additional tool calls. It should iterate through all chunks.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
 if isinstance(response, t.Iterator):
-    chunk: ChatCompletionChunk = next(response)
-    completion: ChatCompletionResponse = ChatCompletionResponse(
-        **chunk.model_dump()
-    )
+    completion = None
+    for chunk in response:
+        if completion is None:
+            completion = ChatCompletionResponse(**chunk.model_dump())
+        else:
+            completion.merge_chunk(chunk)
```
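For reference, a minimal sketch of exhaustive stream handling, assuming the Together chunks follow the OpenAI-style layout where each chunk exposes `choices[0].delta.tool_calls`; the attribute names and the helper itself are assumptions for illustration, not taken from this PR:

```python
import typing as t

def collect_tool_call_chunks(
    response: t.Iterator["ChatCompletionChunk"],
) -> t.List["ChatCompletionChunk"]:
    """Walk the whole stream and keep every chunk that carries tool calls."""
    relevant = []
    for chunk in response:
        # Defensive attribute access, since the exact chunk model should be
        # verified against the Together SDK before relying on this.
        choices = getattr(chunk, "choices", None) or []
        delta = getattr(choices[0], "delta", None) if choices else None
        if delta is not None and getattr(delta, "tool_calls", None):
            relevant.append(chunk)
    return relevant
```

However the chunks are ultimately merged, the key point of the suggestion stands: the stream must be consumed fully so that tool calls arriving after the first chunk are not silently dropped.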
```python
client = Together(
    api_key="59114908afa42e6b013964084e5b5fe52d375b8b2418e848d3f16bcee96f905a"
)
```
API key is hardcoded in the source code, creating a security vulnerability. Store sensitive credentials in environment variables or a secure configuration file.
📝 Committable Code Suggestion
‼️ Ensure you review the code suggestion before committing it to the branch. Make sure it replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```diff
 client = Together(
-    api_key="59114908afa42e6b013964084e5b5fe52d375b8b2418e848d3f16bcee96f905a"
+    api_key=os.getenv('TOGETHER_API_KEY')
 )
```
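As a minimal sketch of the suggested fix (the variable name `TOGETHER_API_KEY` is a convention, not something mandated by the PR):

```python
import os

from together import Together

# Read the key from the environment and fail fast with a clear error
# instead of constructing the client with None.
api_key = os.environ.get("TOGETHER_API_KEY")
if not api_key:
    raise RuntimeError("Set the TOGETHER_API_KEY environment variable first.")

client = Together(api_key=api_key)
```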
❌ Changes requested. Reviewed everything up to 350c61c in 2 minutes and 42 seconds.

More details

- Looked at 319 lines of code in 5 files
- Skipped 0 files when reviewing
- Skipped posting 6 drafted comments based on config settings
1. `python/plugins/togetherai/README.md:7`
   - Draft comment: Objective in README (line 7) mentions automating GitHub repo starring while later examples (e.g. in the demo) send an email. Update for consistency.
   - Reason this comment was not posted: Comment looked like it was already resolved.
2. `python/plugins/togetherai/README.md:72`
   - Draft comment: The code snippet uses `pprint(result)` without importing pprint. Consider adding `from pprint import pprint` to avoid a NameError.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%.
     This is a documentation file showing example code. While technically correct that pprint needs to be imported, this is a minor detail in what's meant to be a high-level example. The focus is on demonstrating the Together AI integration flow, not Python import specifics. Documentation often omits obvious imports for brevity.
     The comment is technically accurate: the code would fail without the import, and missing imports could confuse beginners trying to follow the example.
     While true, README examples commonly omit basic imports for readability. The pprint import is basic Python knowledge and its omission doesn't obscure the main concepts being demonstrated.
     Delete the comment. The missing pprint import is too minor an issue for a README example where the focus is on demonstrating the Together AI integration flow.
3. `python/plugins/togetherai/composio_togetherai/toolset.py:42`
   - Draft comment: Typo in the error message: 'handelling' should be 'handling'.
   - Reason this comment was not posted: Marked as duplicate.
4. `python/plugins/togetherai/togetherai_demo.py:6`
   - Draft comment: Avoid hardcoding API keys in source code. Instead, use environment variables or a secure configuration mechanism to prevent accidental exposure.
   - Reason this comment was not posted: Marked as duplicate.
5. `python/plugins/togetherai/README.md:56`
   - Draft comment: Typographical issue: The model name is written as 'gpt-4o'. If this was meant to be 'gpt-4', please correct it to avoid confusion.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%.
     This is a Together AI integration example, not OpenAI. Together AI likely has its own model naming conventions. Without knowing Together AI's model names, we can't be sure "gpt-4o" is incorrect. The comment makes assumptions based on OpenAI naming that may not apply here.
     I could be wrong if "gpt-4o" is definitely a typo and Together AI only uses "gpt-4" naming. Maybe there's a standard naming convention I'm unaware of.
     Without clear documentation of Together AI's model names, we shouldn't assume this is a typo. The burden of proof is on the comment to show it's wrong.
     Delete the comment since we don't have strong evidence that "gpt-4o" is incorrect in the Together AI context.
6. `python/plugins/togetherai/composio_togetherai/toolset.py:42`
   - Draft comment: Typographical error: the word 'handelling' in the error message should be corrected to 'handling'.
   - Reason this comment was not posted: Marked as duplicate.
Workflow ID: wflow_2GmpJtFNSTSGoqkw

Want Ellipsis to fix these issues? Tag @ellipsis-dev in a comment. You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
```python
):
    raise InvalidEntityIdError(
        "separate `entity_id` can not be provided during "
        "initialization and handelling tool calls"
```
Typo in error message: 'handelling' should be 'handling'.
"initialization and handelling tool calls" | |
"initialization and handling tool calls" |
```python
client = Together(
    api_key="59114908afa42e6b013964084e5b5fe52d375b8b2418e848d3f16bcee96f905a"
```
Avoid hardcoding API keys in code; use environment variables or configuration files for security.
```python
def handle_tool_calls(
    self,
    response: ChatCompletionResponse | t.Iterator[ChatCompletionChunk],
```
Using the union operator '|' in type annotations limits compatibility to Python 3.10+. Since python_requires is '>=3.9', consider using Union[...] for broader compatibility.
```diff
-    response: ChatCompletionResponse | t.Iterator[ChatCompletionChunk],
+    response: t.Union[ChatCompletionResponse, t.Iterator[ChatCompletionChunk]],
```
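For context, a sketch of the same signature written against Python 3.9 with `typing.Union`; the class stub below is illustrative only, and the extra parameters mirror the snippets quoted elsewhere in this review:

```python
import typing as t

ResponseLike = t.Union["ChatCompletionResponse", t.Iterator["ChatCompletionChunk"]]

class ComposioToolSet:  # illustrative stub, not the plugin's real class body
    def handle_tool_calls(
        self,
        response: ResponseLike,
        entity_id: t.Optional[str] = None,
        check_requested_actions: bool = True,
    ) -> t.List[t.Dict]:
        ...
```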
""" | ||
entity_id = self.validate_entity_id(entity_id or self.entity_id) | ||
outputs = [] | ||
if isinstance(response, t.Iterator): |
When handling streaming responses, only the first chunk is processed using next(response). Additionally, using isinstance(response, t.Iterator) may not work as expected since t.Iterator is not runtime-checkable. Consider using collections.abc.Iterator and iterating over all chunks if multiple are expected.
```diff
-    if isinstance(response, t.Iterator):
+    if isinstance(response, collections.abc.Iterator):
```
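A small helper sketching that runtime check (the helper name is hypothetical, not part of the PR):

```python
import collections.abc
import typing as t

def is_streaming_response(response: t.Any) -> bool:
    """Return True when the response is a chunk iterator rather than a full completion.

    Checking against the ABC keeps the streaming/non-streaming branch explicit
    and independent of how typing aliases behave under isinstance.
    """
    return isinstance(response, collections.abc.Iterator)
```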
```python
client = Together(
    api_key="59114908afa42e6b013964084e5b5fe52d375b8b2418e848d3f16bcee96f905a"
```
Critical security issue: API key is exposed in the code. This should be moved to an environment variable or secure configuration.

```python
api_key="59114908afa42e6b013964084e5b5fe52d375b8b2418e848d3f16bcee96f905a"
```

Please update to use environment variables:

```python
import os

api_key = os.getenv("TOGETHER_API_KEY")
```
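For local development, one common pattern is to keep the key in a `.env` file; here is a sketch assuming the python-dotenv package, which is not a dependency of this PR:

```python
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # picks up TOGETHER_API_KEY from a local .env file, if present
api_key = os.getenv("TOGETHER_API_KEY")
```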
```python
my_task = "Star a repo composiohq/composio on GitHub"

# Create a chat completion request to decide on the action
response = client.chat.completions.create(model="gpt-4o",
```
The README.md references the incorrect model name `gpt-4o`. This should be updated to the correct Together AI model name.
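For illustration, the same call written against a model id that Together serves; the specific id, and the assumption that `client`, `my_task`, and `tools` are defined earlier in the README example, are not taken from this PR:

```python
# Continuation of the README snippet above; the model id is only an example
# of a Together-hosted chat model, not a recommendation from this PR.
response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[{"role": "user", "content": my_task}],
    tools=tools,
)
```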
```python
        _check_requested_actions=check_requested_actions,
    )

def handle_tool_calls(
```
The `handle_tool_calls` method lacks proper error handling for API failures and retries. Consider adding:

- A try/except block around API calls
- A retry mechanism for failed requests
- Logging for debugging purposes

Example:

```python
def handle_tool_calls(self, response: ChatCompletionResponse | t.Iterator[ChatCompletionChunk],
                      entity_id: t.Optional[str] = None,
                      check_requested_actions: bool = True,
                      max_retries: int = 3) -> t.List[t.Dict]:
    try:
        entity_id = self.validate_entity_id(entity_id or self.entity_id)
        outputs = []
        # ... rest of the code ...
    except Exception as e:
        logger.error(f"Error handling tool calls: {str(e)}")
        raise
```
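To make the retry point concrete, a hedged sketch of one way to wrap a single tool call with retries and logging; the wrapper name `execute_with_retries` is hypothetical, while `execute_tool_call` and its `entity_id` parameter mirror the method discussed further down in this review:

```python
import logging
import time

logger = logging.getLogger(__name__)

# Hypothetical wrapper (not part of the PR) showing the retry-and-log pattern
# around a single tool-call execution.
def execute_with_retries(toolset, tool_call, entity_id=None, max_retries=3):
    for attempt in range(1, max_retries + 1):
        try:
            return toolset.execute_tool_call(tool_call, entity_id=entity_id)
        except Exception as exc:  # narrow to the SDK's error types in practice
            logger.warning(
                "Tool call failed (attempt %d/%d): %s", attempt, max_retries, exc
            )
            if attempt == max_retries:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```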
```python
    )
]

def execute_tool_call(
```
The docstring for the `execute_tool_call` method is incomplete. Consider adding:

- Return type information
- Parameter descriptions for `check_requested_actions`
- Examples of usage
- Possible exceptions that could be raised

Example:

```python
def execute_tool_call(
    self,
    tool_call: t.Union[ChatCompletionMessageToolCall, ToolCalls],
    entity_id: t.Optional[str] = None,
    check_requested_actions: bool = True,
) -> t.Dict:
    """Execute a tool call from Together AI.

    Args:
        tool_call: Tool call metadata from Together AI response
        entity_id: Entity ID to use for executing the function call
        check_requested_actions: Whether to validate against requested actions

    Returns:
        Dict containing output data from the tool call

    Raises:
        ValueError: If tool call function is None
        InvalidEntityIdError: If entity ID validation fails
    """
```
Code Review Summary

Critical Issues

Recommendations

The PR adds valuable functionality but needs these improvements before it can be safely merged. Please address the security issue as a priority.
Important

Add Together AI plugin for Composio to automate GitHub interactions and demonstrate email sending.

- New plugin package `composio_togetherai`.
- `README.md` provides installation and usage instructions.
- `setup.py` configures the plugin package.
- `ComposioToolSet` class in `toolset.py` for managing tool calls.
- `togetherai_demo.py` demonstrates sending an email using the plugin.

This description was created by Ellipsis for 350c61c. It will automatically update as commits are pushed.