feat: new LLM binding #74
Merged
Conversation
This code introduces a new LLM binding for Deepcode and provides a good starting point. Here's a breakdown of improvements and suggestions for next steps:

**Improvements made:**

* **Output Format Handling:** The introduction of `OutputFormat` and the `WithOutputFormat` option provides a clean way to manage the desired output format. The default is now `MarkDown`, and the code handles invalid formats gracefully by sticking with the default.
* **Testability:** The tests cover various aspects of the binding, including default values, option setting, and constant validation, which is good practice.
* **Flexibility:** The use of functional options for configuration (`WithHTTPClient`, `WithLogger`, `WithOutputChannel`) makes the binding more flexible and easier to extend in the future.
* **Interface `SnykLLMBindings`:** Introducing an interface provides a clear contract for LLM bindings, improving code structure and enabling potential support for different LLMs in the future.

**Suggestions for next steps:**

* **Implement `Explain` and `PublishIssues`:** These are placeholder methods that panic. You'll need to implement the actual logic to interact with the Deepcode LLM. This will likely involve making HTTP requests to the Deepcode API. Consider using a library for managing API interactions to simplify error handling and authentication.
* **Error Handling:** The current implementation of `PublishIssues` and `Explain` should return errors instead of panicking. This will allow calling code to handle errors gracefully. Define specific error types for better error handling and logging.
* **Contextual Information for `Explain`:** The `Explain` method could benefit from additional context, like the code snippet relevant to the explanation. Consider adding parameters to provide this context to the LLM.
* **Input Validation:** Add validation for the `input` parameter in `Explain` to prevent issues with empty or malformed input.
* **Concurrency Control:** If you anticipate concurrent usage of the binding, consider adding appropriate synchronization mechanisms (e.g., mutexes) to protect shared resources.
* **Output Streaming:** The `output` channel in `Explain` suggests streaming. Ensure the implementation handles streaming correctly and efficiently. You might want to buffer output or implement backpressure mechanisms to prevent overwhelming the consumer.
* **Retry Logic:** Network requests can fail. Implement retry logic with exponential backoff in the `Explain` and `PublishIssues` methods to handle transient errors.

**Example of Implementing `Explain` (Conceptual):**

```go
func (d *DeepcodeLLMBinding) Explain(input string, format OutputFormat, output chan<- string) error {
	// 1. Construct the request to the Deepcode API
	reqBody := map[string]interface{}{
		"input":  input,
		"format": string(format), // Convert OutputFormat to string
		// ... any other required parameters ...
	}

	// 2. Make the HTTP request
	resp, err := d.httpClientFunc().Do(...) // Construct the request using reqBody
	if err != nil {
		return fmt.Errorf("request failed: %w", err)
	}
	defer resp.Body.Close()

	// 3. Handle the response and stream the output
	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body) // Read the error response body for logging
		return fmt.Errorf("Deepcode API returned %s: %s", resp.Status, string(body))
	}

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		output <- scanner.Text()
	}
	if err := scanner.Err(); err != nil {
		return fmt.Errorf("error reading response: %w", err)
	}

	close(output) // Close the channel to signal completion
	return nil
}
```

This example shows the general flow. The actual implementation will depend on the specifics of the Deepcode API. Remember to replace the placeholders with actual API endpoints, request construction, and authentication details. Error handling and streaming should be robust to handle various scenarios.
mostly inspired & copied over from snyk/snyk-ls#754
The provided diff shows that the code for the `Option` functions (like `WithHTTPClient`, `WithEndpoint`, etc.) and the related type definition have been moved from `llm/binding.go` to a new file `llm/options.go`. This is a good refactoring as it separates the concerns of options management from the core LLM binding logic. The change itself is straightforward and doesn't introduce any logical modifications, only structural ones.
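For readers unfamiliar with the functional options pattern that `llm/options.go` now isolates, here is a minimal sketch. The struct fields, defaults, and constructor name are simplified assumptions for illustration, not the PR's exact code.

```go
package main

import (
	"fmt"
	"net/http"
)

// OutputFormat mirrors the binding's format type; the concrete
// values here are assumed for the example.
type OutputFormat string

const (
	MarkDown OutputFormat = "markdown"
	HTML     OutputFormat = "html"
)

// binding is a simplified stand-in for DeepcodeLLMBinding.
type binding struct {
	httpClientFunc func() *http.Client
	outputFormat   OutputFormat
}

// Option mutates the binding during construction.
type Option func(*binding)

// WithHTTPClient injects a custom HTTP client factory.
func WithHTTPClient(f func() *http.Client) Option {
	return func(b *binding) { b.httpClientFunc = f }
}

// WithOutputFormat sets the format, silently keeping the default
// when the value is not recognized (as the review describes).
func WithOutputFormat(f OutputFormat) Option {
	return func(b *binding) {
		if f == MarkDown || f == HTML {
			b.outputFormat = f
		}
	}
}

// NewBinding applies the options over sane defaults.
func NewBinding(opts ...Option) *binding {
	b := &binding{
		httpClientFunc: func() *http.Client { return http.DefaultClient },
		outputFormat:   MarkDown, // default per the review comment
	}
	for _, o := range opts {
		o(b)
	}
	return b
}

func main() {
	b := NewBinding(WithOutputFormat("bogus"))
	fmt.Println(b.outputFormat) // invalid value keeps the default: markdown
}
```

Keeping the `Option` constructors in their own file, as this diff does, means new options can be added without touching the core binding logic.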
The main changes are:

* **Removed `outputChannel`:** This is now handled by the `output` channel passed directly to the `Explain` method.
* **`Explain` now takes a string and unmarshals to `ExplainOptions`:** This allows for more flexible input.
* **`runExplain` now returns a string:** This simplifies the return type and aligns with the expected output format.
* **`ExplainWithOptions` added:** This method takes `ExplainOptions` and returns the explanation string directly.
* **Simplified `explainRequestBody`:** Removed unnecessary conditional logic; the request is now constructed directly from the provided options.
* **Updated tests:** Adjusted tests to reflect the changed function signatures and return types. Fixed the incorrect use of the `http.RoundTripper` interface: it needed to be a function that returns an `http.RoundTripper`. Also removed an unnecessary test case (the error-getting-a-response path is already covered by another test) and added a test for `ExplainWithOptions`.

This revised approach makes the code cleaner and more aligned with the intended usage pattern. Removing the `outputChannel` field simplifies `DeepcodeLLMBinding` and makes the flow of explanation data more direct. `Explain` now clearly takes its input and delivers output through the provided channel, which is a more conventional way of handling streaming/asynchronous responses.
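The unmarshal-then-delegate split between `Explain` and `ExplainWithOptions` described above can be sketched as follows. The field names on the options struct are hypothetical; the real ones live in the PR.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// explainOptions guesses at the shape of the PR's ExplainOptions;
// the actual field names are assumptions for this sketch.
type explainOptions struct {
	RuleKey string `json:"rule_key"`
	Diff    string `json:"diff"`
}

// explain unmarshals the raw string input and delegates to the
// options-based variant, mirroring the Explain/ExplainWithOptions
// split described in the review.
func explain(input string) (string, error) {
	var opts explainOptions
	if err := json.Unmarshal([]byte(input), &opts); err != nil {
		return "", fmt.Errorf("invalid explain input: %w", err)
	}
	return explainWithOptions(opts)
}

// explainWithOptions stands in for the method that would call the
// Deepcode API and return the explanation string directly.
func explainWithOptions(opts explainOptions) (string, error) {
	return "explanation for " + opts.RuleKey, nil
}

func main() {
	out, err := explain(`{"rule_key":"Sqli"}`)
	fmt.Println(out, err)
}
```

The benefit of this shape is that callers with structured data skip the JSON round-trip and call `explainWithOptions` directly, while string-based callers get input validation for free at the unmarshal step.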
ShawkyZ approved these changes on Feb 12, 2025.
Is this an experiment or meant to be there?

It is not an experiment. See: https://snyksec.atlassian.net/wiki/spaces/IDE/pages/2749301102/IDE+Ai+Fix+in+IDE+improvements

Any reason why this is not implemented in GAF?

Yes, because it is deepcode/code specific, not general.
Description
This provides an API client for the Explain API.
Checklist
🚨 After merging, please update the snyk-ls and CLI go.mod files to pull in the latest client.