
feat: subgraphs #41

Merged
merged 28 commits into from
Feb 17, 2025

Conversation

lgalabru
Member

@lgalabru lgalabru commented Feb 17, 2025

Summary by CodeRabbit

  • New Features

    • Introduced a comprehensive GraphQL API with query, mutation, and subscription endpoints for real-time interaction.
    • Added a CLI watch mode for dynamic monitoring during execution.
    • Launched new RPC and plugin management interfaces.
    • Enabled subgraph integration for live event and account update notifications.
    • Enhanced error handling and user feedback during runbook execution.
    • Added new metadata and dependencies for improved project configuration.
  • Refactor

    • Consolidated dependency management and restructured modules for improved stability and clearer communication.
    • Enhanced transaction processing with detailed feedback.
  • Tests / Chores

    • Updated test configurations and package metadata.
    • Added new configuration files to support the GraphQL and subgraph modules.


coderabbitai bot commented Feb 17, 2025

Warning

Rate limit exceeded

@lgalabru has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 0 minutes and 28 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between c30bc7a and 033a4ce.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • Cargo.toml (2 hunks)
  • crates/cli/src/http/mod.rs (4 hunks)
  • crates/core/src/lib.rs (1 hunks)
  • crates/core/src/rpc/full.rs (4 hunks)
  • crates/core/src/simnet/mod.rs (15 hunks)
  • crates/core/src/test_helpers.rs (5 hunks)
  • crates/core/src/types.rs (5 hunks)
  • crates/core/tests/integration.rs (4 hunks)
  • crates/gql/src/subscription.rs (1 hunks)
  • crates/subgraph/src/lib.rs (1 hunks)

Walkthrough

This pull request introduces extensive modifications across the project. Workspace metadata and dependency definitions are updated in multiple Cargo.toml files, adding new packages such as surfpool-gql and surfpool-subgraph. The CLI has been enhanced with a new watch flag and refined event handling, while the HTTP module now includes additional asynchronous GraphQL endpoints and routes. Core modules have been refactored to use workspace dependencies, update RPC interfaces, and restructure simnet command/event handling. New GraphQL types and resolvers have been added, and a subgraph plugin implementing the GeyserPlugin trait is introduced along with its configuration file.

Changes

File(s) Changes Summary
Cargo.toml (root), crates/cli/Cargo.toml, crates/core/Cargo.toml, crates/gql/Cargo.toml, crates/subgraph/Cargo.toml Updated workspace metadata and dependency configurations; added new members (crates/gql, crates/subgraph) and numerous dependencies (e.g., surfpool-gql, solana-*, ipc-channel, serde, etc.).
crates/cli/src/cli/mod.rs Added a new watch flag to the StartSimnet struct enabling a command-line option for monitoring programs.
crates/cli/src/cli/simnet/mod.rs Augmented subgraph configuration, enhanced event handling, and integrated file watching with refined transaction event processing.
crates/cli/src/http/mod.rs Modified the start_server signature to include subgraph channels and added asynchronous functions (post_graphql, get_graphql, subscriptions, playground, graphiql) for GraphQL request handling.
crates/cli/src/runbook/mod.rs Updated execute_runbook for improved snapshot state handling and feedback; added display_snapshot_diffing for detailed change reporting.
crates/cli/src/tui/simnet.rs Moved event types, renamed transaction events (from TransactionReceived to TransactionSimulated), and added a new PluginLoaded event.
crates/core/src/lib.rs, crates/core/src/rpc/, crates/core/src/simnet/mod.rs, crates/core/src/types.rs* Revised dependency management to workspace references; expanded RPC interfaces (added AdminRpc, modified Full RPC) and restructured simnet command/event handling; introduced new types for subgraph, simnet, and plugin manager commands.
crates/gql/src/ Introduced a new GraphQL module including context, schema, mutations, queries, and subscriptions with API versioning and real-time update capabilities.
crates/subgraph/src/lib.rs, surfpool_subgraph_plugin.json Added a new subgraph plugin implementing the GeyserPlugin trait with methods for plugin lifecycle and event notifications; provided corresponding JSON configuration for integration.

Sequence Diagram(s)

sequenceDiagram
    participant C as Client
    participant HS as HTTP Server (start_server)
    participant GR as GraphQL Resolver
    participant CTX as GqlContext/SchemaDatasource
    C->>HS: HTTP Request (/gql/v1/...)
    HS->>CTX: Initialize context and schema
    HS->>GR: Route request via post_graphql/get_graphql/subscriptions etc.
    GR->>CTX: Resolve query/mutation
    CTX-->>GR: Return data or error
    GR-->>HS: Return GraphQL response
    HS-->>C: Send HTTP response
sequenceDiagram
    participant U as User/CLI
    participant LS as Core (start_simnet)
    participant SM as Simnet Module
    participant SC as Subgraph Channel
    participant PM as Plugin Manager
    U->>LS: Invoke start_simnet with config & command channels
    LS->>SM: Forward simnet_commands & subgraph_commands
    SM->>SC: Process subgraph events (e.g., TransactionSimulated, PluginLoaded)
    SM->>PM: Relay plugin management commands
    PM-->>SM: Acknowledge updates
    SM-->>LS: Emit simulation events and status
    LS->>U: Return simulation status

Possibly related PRs

  • feat: implement tui #2: Both PRs modify the root Cargo.toml; in particular, both add the surfpool-gql dependency.
  • Unit tests for minimal and full RPCs #40: Both PRs add new dependencies to Cargo.toml.
  • getTransaction #18: Like this PR, it updates transaction handling and introduces new types in crates/core/src/rpc/full.rs and crates/core/src/simnet/mod.rs.

Poem

Hop, hop, I’m a rabbit, with code oh so fleet,
Dependencies updated, making our project complete.
GraphQL queries and simnet commands now twirl in delight,
With plugins and subgraphs taking flight.
I twitch my nose at every clever commit and tweak,
Celebrating change with each byte I sneak.
Happy hops to progress, in code we trust! 🐰✨



@lgalabru lgalabru marked this pull request as ready for review February 17, 2025 13:30

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (34)
crates/gql/src/subscription.rs (3)

20-27: Consider using &str instead of &String for parameters.
In Rust, a string reference (&str) is generally preferable to &String, which adds an unnecessary level of indirection.

- pub fn new(name: &String, entry: &Entry) -> Self {
+ pub fn new(name: &str, entry: &Entry) -> Self {

80-86: Minor style improvement for parameters.
Similar to EntryData::new, consider using &str instead of &String here as well, unless there's a specific reason to require &String.


118-137: Duplicate implementation logic.
DynamicSubscription appears to replicate the Subscription::entries_event pattern entirely. If different behavior isn’t planned, consider reducing duplication by reusing shared logic or consolidating into one subscription type.

crates/core/src/types.rs (3)

48-52: Spelling error in the variant Rountrip.
Consider correcting to Roundtrip for clarity.

- Rountrip(Uuid),
+ Roundtrip(Uuid),

51-51: Unused comment in variant ApplyEntry.
The commented-out portion hints at an incomplete or planned parameter. Remove or complete it if necessary.

- ApplyEntry(Uuid, String), //, SubgraphRequest, u64),
+ ApplyEntry(Uuid, String),

130-131: SubgraphConfig struct is empty.
If more fields are planned, consider marking them as TODO or removing the struct until needed.

crates/gql/src/query.rs (1)

40-49: Use of uuid.clone()
Cloning a Uuid is inexpensive, but since Uuid implements Copy, you can simply pass it by value. Consider this if performance or cleanliness is a concern.

- pub fn new(uuid: &Uuid, name: &str) -> Self {
-     Self {
-         name: name.to_case(Case::Pascal),
-         subgraph_uuid: uuid.clone(),
-         description: None,
-         fields: vec![],
-     }
+ pub fn new(uuid: Uuid, name: &str) -> Self {
+     Self {
+         name: name.to_case(Case::Pascal),
+         subgraph_uuid: uuid,
+         description: None,
+         fields: vec![],
+     }
 }
crates/subgraph/src/lib.rs (4)

18-24: Consider adding doc comments for the new struct
Documenting each field's purpose helps improve maintainability and clarity.


30-41: Improve error handling in on_load
Currently, serde_json::from_str(...).unwrap() may panic on invalid config. Consider returning a descriptive PluginResult error instead.

-fn on_load(&mut self, config_file: &str, _is_reload: bool) -> PluginResult<()> {
-    let config = serde_json::from_str::<SubgraphPluginConfig>(&config_file).unwrap();
+fn on_load(&mut self, config_file: &str, _is_reload: bool) -> PluginResult<()> {
+    let config = serde_json::from_str::<SubgraphPluginConfig>(&config_file)
+        .map_err(|e| GeyserPluginError::ConfigFileError(e.to_string()))?;

42-43: Empty on_unload method
Please confirm if resource cleanup or final logging is needed to avoid leaks or orphaned resources.


48-67: Unreachable code in update_account
These arms for ReplicaAccountInfoVersions::V0_0_1 and V0_0_2 remain unused. If future support is not planned, consider removing them.

crates/core/src/rpc/admin.rs (2)

16-119: Many unimplemented trait methods
Ensure they’re handled or stubbed out before production to prevent runtime issues.


121-247: load_plugin method uses unwrap
This can panic if config_file is invalid JSON. Also, consider making the 10-second timeout configurable.

crates/cli/src/runbook/mod.rs (1)

1-192: Enhancements in execute_runbook

  1. The runbook state loading logic is thorough.
  2. Consider robust error handling around dialoguer’s Confirm::interact() to avoid potential panics.
crates/cli/src/http/mod.rs (4)

14-21: Possible naming confusion.
The event name SchemaDatasourceingEvent looks like a typo or a leftover. Consider renaming it to something like SchemaDataSourcingEvent or another more descriptive name.


32-36: Unused _ctx parameter.
The _ctx parameter is defined but not used in the function body. If not required, consider removing or renaming it.


39-47: Consider removing the Option wrapper around the schema.
Currently, it is always set to Some(...). If None is never expected, you can simplify the logic by removing the Option.

- let schema = RwLock::new(Some(new_dynamic_schema(schema_datasource.clone())));
+ let schema = RwLock::new(new_dynamic_schema(schema_datasource.clone()));

228-245: Configurable keep-alive interval.
You're hardcoding a 15-second interval for WebSocket keep-alive. Expose this setting for production flexibility if necessary.
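A minimal std-only sketch of exposing that interval through a config struct instead of hardcoding it at the call site (names are hypothetical, not the crate's actual types):

```rust
use std::time::Duration;

// Hypothetical configuration struct: the WebSocket keep-alive interval
// becomes a tunable field, with the current 15 seconds as the default.
pub struct WsConfig {
    pub keep_alive_interval: Duration,
}

impl Default for WsConfig {
    fn default() -> Self {
        Self {
            keep_alive_interval: Duration::from_secs(15),
        }
    }
}
```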

crates/cli/src/cli/simnet/mod.rs (6)

21-24: File watching with notify.
Using notify is a robust approach but be mindful of potential performance overhead when watching large directories.


100-103: Unbounded channels usage.
crossbeam::channel::unbounded() risks unbounded memory growth if traffic is high. Consider a bounded channel if load might spike.
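To illustrate the backpressure a bounded channel provides, here is a std-only sketch using std::sync::mpsc::sync_channel, whose try_send semantics mirror crossbeam's bounded(cap); the function name is hypothetical:

```rust
use std::sync::mpsc::sync_channel;

// Hypothetical demo: once the buffer holds `cap` items, try_send fails with
// TrySendError::Full instead of growing memory without bound.
fn fill_bounded(cap: usize, attempts: usize) -> usize {
    // Keep the receiver alive so sends are not rejected as disconnected.
    let (tx, _rx) = sync_channel::<u64>(cap);
    let mut accepted = 0;
    for i in 0..attempts {
        if tx.try_send(i as u64).is_ok() {
            accepted += 1;
        }
    }
    accepted
}
```

With a bounded channel, a traffic spike surfaces as send failures (or blocking, with `send`) that the producer can handle, rather than as silent memory growth.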


104-104: Port override.
DEFAULT_EXPLORER_PORT is hardcoded. Provide a configuration option if multiple services share ports or if user needs to override.


187-188: File watcher thread.
One-second polling might be fine for small changes, but consider a custom interval or event debouncing for heavier usage.


188-200: Comparing file contents.
with_compare_contents(true) can be expensive for big files. Evaluate if filename-based triggers suffice.


252-252: Cleanup placeholder.
Line comment indicates state cleanup. Ensure finalizing steps actually occur, if needed.

crates/core/src/simnet/mod.rs (4)

270-303: Plugin config path logic.
Much code is commented out. If you plan to parse from JSON, handle all edge cases thoroughly.


486-495: TransactionReceived command.
The transaction is stored under EntryStatus::Received. Confirm if duplicates or replayed transactions should be handled differently.


541-558: Error logging.
Logging all simulation failures is good. Consider including additional detail about the failing instruction or relevant account for easier debugging.


601-604: Post-processing.
Marking transactions as Success(Confirmed) is accurate. Consider additional states if final irreversibility differs from local confirmation.

crates/core/src/rpc/full.rs (3)

378-384: Check for potential error from simnet_commands_tx.send.

The result of send(...) is ignored, which could mask channel closure errors or concurrency issues.

Consider handling the send result:

- let _ = ctx.simnet_commands_tx.send(...)
+ if let Err(e) = ctx.simnet_commands_tx.send(...) {
+     log::error!("Failed to send TransactionReceived command: {:?}", e);
+ }

387-398: Consolidate repeated error handling for SimulationFailure.

Error formatting for simulation failures (logs and message text) largely duplicates the logic in the execution failure branch. Consider extracting a helper function to maintain consistency and reduce redundancy.


399-410: Consolidate repeated error handling for ExecutionFailure.

This block closely mirrors the simulation failure handling. A shared helper could streamline both branches and ensure synchronized enhancements if error reporting evolves later.
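A possible shape for such a shared helper, sketched with std only (the name and message format are hypothetical, not the crate's actual output):

```rust
// Hypothetical helper: both the simulation-failure and execution-failure
// branches delegate their formatting here, so the two stay in sync if
// error reporting evolves later. `kind` distinguishes the branch.
fn format_transaction_failure(kind: &str, err: &str, logs: &[String]) -> String {
    if logs.is_empty() {
        format!("Transaction {kind} failure: {err}")
    } else {
        format!("Transaction {kind} failure: {err}\nLogs:\n{}", logs.join("\n"))
    }
}
```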

crates/gql/src/mutation.rs (1)

4-13: Consider referencing the version from a single constant.

Returning "1.0" directly works, but if the API version changes in multiple places, it might be easy to miss an update. Using a constant or configuration variable would improve maintainability.
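A minimal sketch of the single-constant approach (names hypothetical):

```rust
// Hypothetical single source of truth for the API version string;
// every resolver that reports the version reads from this constant.
pub const API_VERSION: &str = "1.0";

pub fn api_version() -> String {
    API_VERSION.to_string()
}
```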

crates/gql/src/lib.rs (1)

14-19: Consider making the broadcast channel size configurable.

The broadcast channel size is hardcoded to 128 in the constructor. Consider making this configurable to allow tuning based on load and requirements.

 #[derive(Clone, Debug)]
 pub struct Context {
     pub uuid_lookup: Arc<RwLock<BTreeMap<Uuid, String>>>,
     pub entries_store: Arc<RwLock<BTreeMap<String, (Uuid, Vec<EntryData>)>>>,
     pub entries_broadcaster: tokio::sync::broadcast::Sender<EntryUpdate>,
+    pub broadcast_channel_size: usize,
 }
Cargo.toml (1)

21-66: Unified Dependency Management via Workspace Dependencies

A large number of dependencies—especially the Solana-related ones—are now either pinned to version "2.1.10" or routed via workspace references. This approach centralizes dependency control and enforces version consistency across modules.
Consider reviewing whether the hard pinning to "2.1.10" is ideal for future maintainability or if a more flexible versioning strategy (or a central version variable) might be beneficial.
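The central-version pattern the review refers to can be sketched in Cargo syntax; the crate names below are illustrative, not the project's full dependency list:

```toml
# Root Cargo.toml: declare each pinned version once.
[workspace.dependencies]
solana-sdk = "2.1.10"
solana-client = "2.1.10"

# Member crate Cargo.toml: inherit the workspace pin.
[dependencies]
solana-sdk = { workspace = true }
```

Bumping the version then means editing a single line in the root manifest rather than every member crate.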

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6f189c5 and 1e95232.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (22)
  • Cargo.toml (2 hunks)
  • crates/cli/Cargo.toml (2 hunks)
  • crates/cli/src/cli/mod.rs (1 hunks)
  • crates/cli/src/cli/simnet/mod.rs (13 hunks)
  • crates/cli/src/http/mod.rs (4 hunks)
  • crates/cli/src/runbook/mod.rs (5 hunks)
  • crates/cli/src/tui/simnet.rs (3 hunks)
  • crates/core/Cargo.toml (2 hunks)
  • crates/core/src/lib.rs (1 hunks)
  • crates/core/src/rpc/admin.rs (1 hunks)
  • crates/core/src/rpc/full.rs (2 hunks)
  • crates/core/src/rpc/mod.rs (4 hunks)
  • crates/core/src/simnet/mod.rs (13 hunks)
  • crates/core/src/types.rs (4 hunks)
  • crates/gql/Cargo.toml (1 hunks)
  • crates/gql/src/lib.rs (1 hunks)
  • crates/gql/src/mutation.rs (1 hunks)
  • crates/gql/src/query.rs (1 hunks)
  • crates/gql/src/subscription.rs (1 hunks)
  • crates/subgraph/Cargo.toml (1 hunks)
  • crates/subgraph/src/lib.rs (1 hunks)
  • surfpool_subgraph_plugin.json (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • surfpool_subgraph_plugin.json
  • crates/subgraph/Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: build
🔇 Additional comments (93)
crates/gql/src/subscription.rs (5)

14-18: Struct definition looks good.
This struct straightforwardly captures the Entry and its name.


29-47: Check correctness of generated GraphQL type name and fields.
Ensuring the name(spec: &SchemaDatasourceEntry) function returns the intended entity name is crucial to avoid naming collisions in the GraphQL schema. The approach to pushing fields also looks correct for dynamic fields.


74-79: Struct definition for EntryUpdate is consistent.
It mirrors the structure of EntryData and sets up clarity for separate upstream usage.


89-95: GraphQL object for EntryUpdate is concise.
The uuid resolver matches the typical read-only accessor pattern.


96-115: Infinite stream loop can block graceful shutdown.
This loop will run indefinitely and yield new events until the service stops. Ensure there's a plan to break out or handle channel closure so the thread or task doesn’t block indefinitely.
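One std-only way to sketch the exit condition: stop when the channel reports closure instead of looping forever. tokio's broadcast receiver behaves similarly, returning RecvError::Closed once all senders are dropped; the function name here is hypothetical:

```rust
use std::sync::mpsc;

// Hypothetical event loop: drain events until the sender side is dropped,
// then break so the task can shut down cleanly instead of blocking forever.
fn drain_until_closed(rx: mpsc::Receiver<String>) -> Vec<String> {
    let mut seen = Vec::new();
    loop {
        match rx.recv() {
            Ok(event) => seen.push(event),
            // All senders are gone: exit the loop for a graceful shutdown.
            Err(_) => break,
        }
    }
    seen
}
```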

crates/core/src/types.rs (12)

25-30: Struct Collection looks solid.
Capturing multiple entries under a shared UUID and name is straightforward.


32-36: Entry struct is clear.
Maintaining a UUID plus a dynamic map of key-value pairs is a flexible design.


38-46: Event variants are well-defined.
Log events and a shutdown variant adequately represent possible subgraph events.


54-59: SubgraphCommand usage.
The command variants for create, observe, and shutdown are comprehensive.


62-76: SimnetEvent enumerations are well-structured.
They cover a variety of simulation states, logs, and updates.


78-82: TransactionStatusEvent usage is clear.
It neatly distinguishes success, simulation failure, and execution failure with relevant data.


84-95: SimnetCommand is comprehensive.
Includes slot manipulation, clock updates, and transaction handling.


97-99: PluginManagerCommand loading logic.
Straightforward design for asynchronous plugin loading via channels.


101-106: ClockCommand states are minimal but effective.
The variants Pause, Resume, Toggle, and specifying slot intervals handle typical clock scenarios.


108-111: ClockEvent enumerations are basic and sufficient.
Provides a Tick event and an expiration event for blockhash.


113-117: Adding subgraph to SurfpoolConfig.
Expanding config to store subgraph settings is aligned with the rest of these changes.


146-151: SubgraphPluginConfig
Captures essential plugin config parameters effectively.

crates/gql/src/query.rs (8)

12-13: Query struct is a straightforward placeholder.
Serves as a root query object for the GraphQL schema.


15-30: SchemaDatasource
Captures a map of entries. The add_entry method uses to_case(Case::Camel); verify this matches the naming convention used elsewhere in the schema.


32-38: SchemaDatasourceEntry definition is concise.
The fields name, subgraph_uuid, description, and fields provide a clear structure for each schema entry.


51-69: Check dynamic field creation in meta().
Iterating over spec.fields and building GraphQL fields is an effective approach for dynamic schema generation.


71-78: GraphQLValue implementation.
The type references SchemaDatasourceEntry itself, which is consistent with how the metadata is being built.


80-84: Query::new() is standard.
Creates an empty root query object, an acceptable pattern for frameworks like Juniper.


86-105: GraphQL type definition for Query.
Defining the apiVersion field is a neat addition, and pushing dynamic fields for subgraph entries is on track.


107-114: GraphQLValue for Query
Simply references the same SchemaDatasource info. The design is consistent.

crates/subgraph/src/lib.rs (11)

1-16: All imports appear consistent with plugin usage
No issues found in these lines.


25-29: Plugin name method is straightforward
No further feedback needed.


44-47: Empty notify_end_of_startup method
No concerns if this is intentionally a no-op for now.


69-76: Slot status update is minimal
This placeholder may suffice initially, but confirm if slot transitions also require logging or other processing.


154-156: Empty notify_entry
No concerns if future logic will be added later.


158-160: Empty notify_block_metadata
No concerns if future logic will be added later.


162-165: Account data notifications enabled
This aligns with the implemented update_account functionality.


166-169: Transaction notifications are disabled
Yet there's extensive logic in notify_transaction. Verify if this is the intended behavior.


170-173: Entry notifications disabled
Likely fine if not needed by this plugin.


175-177: Attributes for dynamic loading
#[no_mangle] and #[allow(improper_ctypes_definitions)] are standard for plugin exports.


178-184: Plugin creation function
Implementation appears correct for returning a valid GeyserPlugin pointer.

crates/core/src/rpc/admin.rs (1)

1-15: Imports for AdminRpc
No concerns here; these dependencies appear to be correctly declared.

crates/cli/src/runbook/mod.rs (1)

194-274: display_snapshot_diffing function
Provides a clear, categorized view of changes and potential re-executions.
Implementation is well-structured; no major issues observed.

crates/cli/src/http/mod.rs (6)

5-10: All new imports are appropriate.
No immediate issues are present.


147-151: GraphQL routing addition looks good.
Defining GET/POST/subscription routes under /gql/v1 is clear and maintains RESTful separation.


197-211: Potential .unwrap() panic in post_graphql.
schema.as_ref().unwrap() can cause a runtime panic if the Option is None. Validate or handle this scenario more gracefully.


213-226: Check .unwrap() usage in get_graphql.
Same concern as in post_graphql. If schema is None, .unwrap() will panic.


247-249: Playground route.
Having a dedicated route for interactive API exploration is helpful.


251-253: GraphiQL route.
This route mirrors the Playground approach; nothing problematic stands out.

crates/cli/src/cli/simnet/mod.rs (9)

2-2: New imports.
Adding PathBuf is standard; no problems evident.


6-6: mpsc and Arc.
Looks consistent with concurrency usage in the rest of the code.


33-35: Extended types.
Using SimnetEvent, SubgraphConfig, and SubgraphEvent is consistent with the new subgraph integration. Confirm all references are updated across the codebase.


88-88: Empty SubgraphConfig.
SubgraphConfig {} initializes with no fields. Ensure your logic doesn’t require initial fields for subgraph usage.


105-114: Server startup logic.
The error handling with .await and map_err is practical. No issues spotted.


116-118: Cloning context and config.
Straightforward duplication for thread usage; no concerns.


156-157: Deploy progress channel.
Another unbounded channel for progress messages. This is acceptable for smaller projects; just keep in mind potential queue growth.


256-262: Unified event logging.
Capturing simnet and subgraph events in the same logger is a good approach for consolidated debugging.


275-277: Stopping the explorer handle.
explorer_handle.stop(true) forcibly halts the server. Confirm you want an immediate stop if active requests exist.

crates/core/src/simnet/mod.rs (13)

5-5: Chrono usage.
Adopting Local and Utc is valid for consistent timestamps. No issues.


8-11: ipc_channel additions.
Provides advanced IPC. Check cross-platform compatibility and test thoroughly.


25-25: Transaction version.
Explicitly referencing TransactionVersion clarifies multi-version transaction support.


44-45: Re-exports for SurfpoolMiddleware.
Good approach to keep the codebase structured. No immediate issues.


46-49: Expanded references.
Linking SimnetCommand, SimnetEvent, and SubgraphCommand fosters synergy between simnet and subgraph logic.


188-188: PathBuf usage.
Ensure path references are sanitized and validated if controlled by the user.


194-194: start(...) signature extended.
Accepting subgraph-related channels is logical for plugin integration.


227-228: Plugin manager commands channel.
Ensure concurrency usage is thoroughly tested, especially with multiple plugin loads.


232-233: Crossbeam channels in middleware.
Attaching both simnet and plugin commands is a known concurrency pattern.


265-265: RouterProxy usage.
ipc_router = RouterProxy::new() is powerful for routing. Evaluate performance and potential security boundaries.


344-355: Subgraph plugin loading.
After dynamic loading, you dispatch a SubgraphCommand. Consider verifying the plugin is ready before usage.

Also applies to: 357-357


509-559: Conditional preflight checks.
Great for flexibility. If skip_preflight is on, be sure to handle or log invalid instructions that might appear later.


560-594: Transaction execution flow.
Plugins are updated on success; errors are also logged. Verify that partial plugin states don’t persist if plugin handling fails mid-transaction.

crates/core/src/rpc/full.rs (3)

2-2: Seems fine.

The new import of TransactionStatusEvent accurately reflects its usage for transaction status handling.


4-4: Importing Itertools for log message joining is acceptable.

No concerns; using join is a clear and concise approach.


412-412: Ensure no unhandled success statuses remain.

If future commitment levels or new success statuses are introduced, this loop may continue indefinitely. Verify that all success modes are covered or add a default case to avoid unexpected hangs.

Also applies to: 416-416, 420-420

crates/gql/src/mutation.rs (2)

1-2: Imports look good.

The Context import is critical for the #[graphql_object] annotations, and juniper_codegen usage is expected.


15-24: Parallel structure with Mutation is consistent.

Everything looks good here. Keeping the same api_version approach across both mutations is logical.

crates/core/src/lib.rs (2)

24-24: Imports aggregated into types are consistent.

No issues with grouping SimnetCommand, SimnetEvent, SubgraphCommand, and SurfpoolConfig.


28-28: Refined function signature for start_simnet.

Adding subgraph_commands_tx and simnet_commands_tx broadens the function’s responsibilities. Verify that channel usage is correctly orchestrated and well-tested to avoid deadlocks or dropped messages.

Also applies to: 30-30, 33-40

crates/gql/src/lib.rs (4)

12-12: Verify if the commented-out module is needed.

The types module is commented out. Please verify if this module should be uncommented or removed.


21-30: LGTM!

The Context implementation is clean and initializes all fields appropriately.


32-32: LGTM!

The empty implementation of juniper::Context is appropriate as it's a marker trait.


34-45: LGTM!

The schema type and constructor are well-implemented following Juniper's best practices.

crates/core/src/rpc/mod.rs (3)

28-33: LGTM!

The RunloopContext has been appropriately updated to use command-based communication, aligning with the broader restructuring of the codebase.


84-89: LGTM!

The SurfpoolMiddleware has been consistently updated to match the RunloopContext changes.


76-79: LGTM!

The imports have been appropriately updated to support the new command-based architecture.

crates/cli/src/cli/mod.rs (1)

109-111: LGTM!

The watch flag is well-documented and properly configured with appropriate clap attributes.

crates/cli/src/tui/simnet.rs (3)

21-21: LGTM!

The import path has been correctly updated to reflect the new module structure.


236-242: LGTM!

The PluginLoaded event handler is well-implemented with appropriate logging.


281-281: LGTM!

The event has been appropriately renamed to better reflect its purpose.

crates/gql/Cargo.toml (2)

1-10: Workspace Metadata Setup Looks Solid

All the package metadata keys (name, description, version, edition, license, repository, keywords, categories) are configured to inherit from the workspace. This consistent use of workspace settings simplifies maintenance across the project.


13-24: Clear and Consistent Dependency Declarations

The dependencies are defined with explicit versions and feature flags where needed. Using workspace references (e.g., for surfpool-core) ensures consistency with other modules. The commented-out entries (e.g., for txtx-addon-kit and an alternative juniper_codegen specification) should either be removed or kept deliberately as documented references.

crates/cli/Cargo.toml (2)

18-21: Integrating Workspace Dependencies for CLI

The addition of surfpool-gql and the update of both txtx-core and txtx-addon-network-svm to use workspace references aligns well with the modular, workspace-centric approach. Please verify that these changes integrate smoothly with the rest of the build.


46-52: New Dependencies for Enhanced Functionality

The inclusion of dependencies like notify, juniper_actix, juniper_graphql_ws, and juniper along with workspace-referenced packages such as ipc-channel and bincode is well-defined. Double-check that the specified version numbers and feature flags (e.g., "subscriptions" for juniper_actix and "graphql-transport-ws" for juniper_graphql_ws) meet your use-case requirements.

Cargo.toml (2)

1-4: Descriptive Workspace Package Information Added

The new description field in the [workspace.package] section provides a concise overview of the project. This is a helpful addition for documentation and new contributors.


11-17: Updated Workspace Members

The updated members array now includes "crates/gql" and "crates/subgraph", which accurately reflects the project’s expanded modular structure. Ensure that these new modules have been properly configured and tested within the workspace context.

crates/core/Cargo.toml (3)

17-23: Transition to Workspace Dependencies for Core Functionality

Updating dependencies such as bincode, serde, serde_derive, and serde_json to use workspace references enhances consistency in dependency management. Please ensure that the workspace-level definitions properly resolve these libraries.


41-71: Extensive Workspace Dependency Updates for Solana-Related Modules

The migration of numerous Solana-related dependencies (e.g., solana-sdk, solana-program-test, solana-client, etc.) to workspace references underscores a move toward centralized version management. Verify that all dependent modules and integration tests compile correctly against these workspace dependencies.


77-79: Addition of Further Dependencies

The inclusion of txtx-addon-network-svm, uuid, and ipc-channel with workspace references further unifies the dependency management strategy in surfpool-core. As with other modules, please confirm that these changes work harmoniously within the overall architecture.

Comment on lines 49 to 72
impl GraphQLValue<DefaultScalarValue> for EntryData {
type Context = Context;
type TypeInfo = SchemaDatasourceEntry;

fn type_name<'i>(&self, info: &'i Self::TypeInfo) -> Option<&'i str> {
<EntryData as GraphQLType>::name(info)
}

fn resolve_field(
&self,
info: &SchemaDatasourceEntry,
field_name: &str,
args: &Arguments,
executor: &Executor<Context>,
) -> ExecutionResult {
match field_name {
"uuid" => executor.resolve_with_ctx(&(), &self.entry.uuid.to_string()),
field_name => {
let value = self.entry.values.get(field_name).unwrap();
executor.resolve_with_ctx(&(), value)
}
}
}
}

⚠️ Potential issue

Avoid potential panic on missing keys.
Line 67 unwraps a value from self.entry.values without handling the case where field_name might be absent, which can cause a server panic. Consider returning a FieldError or a default value instead.

- let value = self.entry.values.get(field_name).unwrap();
+ let value = match self.entry.values.get(field_name) {
+     Some(v) => v,
+     None => return Err(FieldError::new(
+         format!("Field '{}' not found in entry values", field_name),
+         graphql_value!({ "invalid_field": field_name })
+     ))
+ };

Comment on lines +116 to +135
impl GraphQLValueAsync<DefaultScalarValue> for Query {
fn resolve_field_async(
&self,
info: &SchemaDatasource,
field_name: &str,
_arguments: &Arguments,
executor: &Executor<Context>,
) -> BoxFuture<ExecutionResult> {
let res = match field_name {
"apiVersion" => executor.resolve_with_ctx(&(), "1.0"),
subgraph_name => {
let database = executor.context();
let subgraph_db = database.entries_store.read().unwrap();
let entry = info.entries.get(subgraph_name).unwrap();
let (_, entries) = subgraph_db.get(subgraph_name).unwrap();
executor.resolve_with_ctx(entry, &entries[..])
}
};
Box::pin(async { res })
}

⚠️ Potential issue

Avoid unwrap() calls for store lookups.
Lines 129–130 could panic if the store is missing a key or if the read lock is poisoned. Consider returning an error result or logging gracefully instead of using unwrap().

- let (_, entries) = subgraph_db.get(subgraph_name).unwrap();
+ let entries = match subgraph_db.get(subgraph_name) {
+     Some((_, e)) => e,
+     None => return Err(juniper::FieldError::new(
+         "Subgraph not found",
+         graphql_value!({ "requested_subgraph": subgraph_name })
+     )),
+ };

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines 78 to 152
fn notify_transaction(
&self,
transaction: ReplicaTransactionInfoVersions,
slot: Slot,
) -> PluginResult<()> {
let Ok(tx) = self.subgraph_indexing_event_tx.lock() else {
return Ok(());
};
let tx = tx.as_ref().unwrap();
let Some(ref subgraph_request) = self.subgraph_request else {
return Ok(());
};
match transaction {
ReplicaTransactionInfoVersions::V0_0_2(data) => {
let _ = tx.send(SchemaDatasourceingEvent::ApplyEntry(
self.uuid,
data.signature.to_string(),
// subgraph_request.clone(),
// slot,
));
if data.is_vote {
return Ok(());
}
let Some(ref inner_instructions) = data.transaction_status_meta.inner_instructions
else {
return Ok(());
};
for inner_instructions in inner_instructions.iter() {
for instruction in inner_instructions.instructions.iter() {
let instruction = &instruction.instruction;
let decoded_data = bs58::decode(&instruction.data)
.into_vec()
.map_err(|e| GeyserPluginError::Custom(Box::new(e)))?;
// it's not valid cpi event data if there isn't an 8-byte signature
if decoded_data.len() < 8 {
continue;
}
let eight_bytes = decoded_data[0..8].to_vec();
let decoded_signature = bs58::decode(eight_bytes)
.into_vec()
.map_err(|e| GeyserPluginError::Custom(Box::new(e)))?;
for field in subgraph_request.fields.iter() {
match &field.data_source {
IndexedSubgraphSourceType::Instruction(
instruction_subgraph_source,
) => {
continue;
}
IndexedSubgraphSourceType::Event(event_subgraph_source) => {
if event_subgraph_source
.event
.discriminator
.eq(decoded_signature.as_slice())
{
println!(
"found event with match!!!: {:?}",
event_subgraph_source.event.name
);
let _ = tx.send(SchemaDatasourceingEvent::ApplyEntry(
self.uuid,
data.signature.to_string(),
// subgraph_request.clone(),
// slot,
));
}
}
}
}
}
}
}
ReplicaTransactionInfoVersions::V0_0_1(_) => {}
}
Ok(())
}

🛠️ Refactor suggestion

Potential mismatch & concurrency considerations in notify_transaction

  1. The method is fully implemented, but transaction_notifications_enabled() returns false.
  2. The 8-byte discriminator extracted from the already-decoded data is base58-decoded a second time (lines 108 & 116); this is likely unintentional, since those bytes are raw binary, not base58, at that point.
  3. Consider explicit handling of Mutex lock failures rather than silently returning Ok(()).
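For point 3, explicit poison handling can be sketched with a plain `std::sync::Mutex` (the `read_counter` name and `Result<u64, String>` signature are illustrative; the plugin would map the error into its `PluginResult`):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// Surface a poisoned lock to the caller instead of silently reporting success.
fn read_counter(counter: &Mutex<u64>) -> Result<u64, String> {
    match counter.lock() {
        Ok(guard) => Ok(*guard),
        // Returning an error lets the caller log or recover deliberately,
        // rather than dropping the notification on the floor.
        Err(_) => Err("counter mutex poisoned".to_string()),
    }
}

fn main() {
    let counter = Arc::new(Mutex::new(7u64));
    let c = Arc::clone(&counter);
    // Poison the mutex by panicking while the guard is held.
    let _ = thread::spawn(move || {
        let _guard = c.lock().unwrap();
        panic!("boom");
    })
    .join();
    assert_eq!(read_counter(&Mutex::new(7u64)), Ok(7));
    assert!(read_counter(&counter).is_err());
}
```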

Comment on lines 48 to 129
let _handle = hiro_system_kit::thread_named("subgraph")
.spawn(move || {
let mut observers = vec![];
loop {
let mut selector = Select::new();
let mut handles = vec![];
selector.recv(&subgraph_commands_rx);
for rx in observers.iter() {
handles.push(selector.recv(rx));
}
let oper = selector.select();
match oper.index() {
0 => match oper.recv(&subgraph_commands_rx) {
Err(_e) => {
// todo
}
Ok(cmd) => match cmd {
SubgraphCommand::CreateSubgraph(uuid, config, sender) => {
let mut gql_schema = gql_schema_copy.write().unwrap();
let subgraph_uuid = uuid;
let subgraph_name = config.subgraph_name.clone();
let mut schema =
SchemaDatasourceEntry::new(&subgraph_uuid, &subgraph_name);
for fields in config.fields.iter() {
schema.fields.push(fields.display_name.clone());
}
schema_datasource.add_entry(schema);
gql_schema.replace(new_dynamic_schema(schema_datasource.clone()));
use convert_case::{Case, Casing};

let gql_context = gql_context_copy.write().unwrap();
let mut entries_store = gql_context.entries_store.write().unwrap();
let mut lookup = gql_context.uuid_lookup.write().unwrap();
lookup.insert(
subgraph_uuid.clone(),
subgraph_name.to_case(Case::Camel),
);
entries_store.insert(
subgraph_name.to_case(Case::Camel),
(subgraph_uuid.clone(), vec![]),
);
let _ = sender.send("http://127.0.0.1:8900/graphql".into());
}
SubgraphCommand::ObserveSubgraph(subgraph_observer_rx) => {
observers.push(subgraph_observer_rx);
}
SubgraphCommand::Shutdown => {
let _ = subgraph_events_tx.send(SubgraphEvent::Shutdown);
}
},
},
i => match oper.recv(&observers[i - 1]) {
Ok(cmd) => match cmd {
SchemaDatasourceingEvent::ApplyEntry(
uuid,
value, /* , request, slot*/
) => {
let gql_context = gql_context_copy.write().unwrap();
let uuid_lookup = gql_context.uuid_lookup.read().unwrap();
let subgraph_name = uuid_lookup.get(&uuid).unwrap();
let mut entries_store = gql_context.entries_store.write().unwrap();
let (_uuid, entries) =
entries_store.get_mut(subgraph_name).unwrap();
let mut values = HashMap::new();
values.insert("message".to_string(), value);
let entry_uuid = Uuid::new_v4();
entries.push(EntryData {
name: subgraph_name.clone(),
entry: Entry {
uuid: entry_uuid,
values,
},
});
}
SchemaDatasourceingEvent::Rountrip(_uuid) => {}
},
Err(_e) => {}
},
}
}
Ok::<(), String>(())
})

⚠️ Potential issue

Thread loop continues after receiving shutdown signal.
When SubgraphCommand::Shutdown is received, you send SubgraphEvent::Shutdown but don’t break out of the loop. Consider breaking out to terminate gracefully.

- SubgraphCommand::Shutdown => {
-     let _ = subgraph_events_tx.send(SubgraphEvent::Shutdown);
- }
+ SubgraphCommand::Shutdown => {
+     let _ = subgraph_events_tx.send(SubgraphEvent::Shutdown);
+     break; // Break out of the loop to shut down gracefully
+ }
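The shutdown pattern can be sketched with a plain `std::sync::mpsc` worker (`Command` and `spawn_worker` are illustrative stand-ins for `SubgraphCommand` and the subgraph thread):

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative command enum mirroring the shutdown handling.
enum Command {
    Work(u32),
    Shutdown,
}

fn spawn_worker(rx: mpsc::Receiver<Command>) -> thread::JoinHandle<u32> {
    thread::spawn(move || {
        let mut processed = 0;
        loop {
            match rx.recv() {
                Ok(Command::Work(n)) => processed += n,
                // Break out of the loop so the thread actually terminates;
                // without the `break`, join() below would hang forever.
                Ok(Command::Shutdown) | Err(_) => break,
            }
        }
        processed
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = spawn_worker(rx);
    tx.send(Command::Work(2)).unwrap();
    tx.send(Command::Work(3)).unwrap();
    tx.send(Command::Shutdown).unwrap();
    assert_eq!(handle.join().unwrap(), 5);
}
```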

Comment on lines +246 to +247
.map_err(|e| format!("{}", e))
.unwrap();

⚠️ Potential issue

Error handling with .unwrap().
Currently calls .unwrap() after .map_err(...). This can abruptly crash. Consider logging and early return instead.

- .map_err(|e| format!("{}", e))
- .unwrap();
+ .map_err(|e| format!("Failed to spawn file watcher: {}", e))?;

Comment on lines +246 to 247
io.extend_with(rpc::admin::SurfpoolAdminRpc.to_delegate());


⚠️ Potential issue

Admin RPC extension.
Potentially grants privileged capabilities. Consider requiring authentication or config gating in production.
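One way to gate the admin surface behind configuration, as a hedged sketch (the `RpcConfig` fields, `IoRegistry`, and `extend_with_admin` names are hypothetical, not actual surfpool APIs):

```rust
// Hypothetical config for gating privileged RPC methods.
struct RpcConfig {
    enable_admin_rpc: bool,
    admin_token: Option<String>,
}

// Stand-in for the jsonrpc IoHandler registry.
struct IoRegistry {
    methods: Vec<String>,
}

impl IoRegistry {
    fn extend_with_admin(&mut self, config: &RpcConfig) {
        // Only expose the admin delegate when explicitly enabled AND a
        // shared secret is configured; default deployments get no admin surface.
        if config.enable_admin_rpc && config.admin_token.is_some() {
            self.methods.push("surfpool_admin".to_string());
        }
    }
}

fn main() {
    let mut io = IoRegistry { methods: vec![] };
    io.extend_with_admin(&RpcConfig { enable_admin_rpc: false, admin_token: None });
    assert!(io.methods.is_empty());
    io.extend_with_admin(&RpcConfig {
        enable_admin_rpc: true,
        admin_token: Some("secret".to_string()),
    });
    assert_eq!(io.methods.len(), 1);
}
```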


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
crates/core/src/rpc/full.rs (1)

941-949: Consider enhancing test error handling.

The test could benefit from more descriptive error handling. Instead of a generic panic message, consider providing more context about the expected and actual command received.

-            _ => panic!("failed to receive transaction from mempool"),
+            cmd => panic!("Expected SimnetCommand::TransactionReceived, got {:?}", cmd),
crates/core/src/test_helpers.rs (1)

30-31: Consider using bounded channels for better resource management.

The use of unbounded channels (crossbeam_channel::unbounded()) could potentially lead to memory issues if the producers send messages faster than they can be consumed. While this might be acceptable in test scenarios, consider using bounded channels (crossbeam_channel::bounded(size)) to prevent potential memory exhaustion.
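For reference, std's `sync_channel` behaves like `crossbeam_channel::bounded` and makes the backpressure observable:

```rust
use std::sync::mpsc;

fn main() {
    // A bounded channel with capacity 2: send() blocks once the queue is
    // full, applying backpressure instead of growing without limit.
    let (tx, rx) = mpsc::sync_channel::<u32>(2);
    tx.send(1).unwrap();
    tx.send(2).unwrap();
    // A third send() would block here until the consumer drains a slot;
    // try_send() makes the full condition observable without blocking.
    assert!(tx.try_send(3).is_err());
    assert_eq!(rx.recv().unwrap(), 1);
    assert!(tx.try_send(3).is_ok());
}
```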

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1e95232 and 24be23b.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • crates/core/src/rpc/full.rs (3 hunks)
  • crates/core/src/test_helpers.rs (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: build
  • GitHub Check: run_cargo_checks
🔇 Additional comments (4)
crates/core/src/rpc/full.rs (2)

377-410: LGTM! Enhanced error handling for transaction processing.

The changes improve error reporting by:

  • Separating simulation and execution failures
  • Including detailed error messages with log counts
  • Providing structured error responses through TransactionStatusEvent

412-421: LGTM! Improved transaction confirmation status handling.

The changes provide clearer status handling through structured TransactionStatusEvent::Success events with appropriate confirmation levels (Processed, Confirmed, Finalized).

crates/core/src/test_helpers.rs (2)

20-20: LGTM!

The import of SimnetCommand aligns with the shift from mempool-based to command-based communication.


45-46: LGTM!

The initialization of the new command channels is consistent with the architectural changes described in the PR summary.

Comment on lines +80 to 83
pub fn new_with_mempool(rpc: T, simnet_commands_tx: Sender<SimnetCommand>) -> Self {
let mut setup = TestSetup::new(rpc);
setup.context.mempool_tx = mempool_tx;
setup.context.simnet_commands_tx = simnet_commands_tx;
setup

🛠️ Refactor suggestion

Rename method to reflect its current purpose.

The method name new_with_mempool is misleading as it no longer deals with mempool transactions but rather with simnet commands. Consider renaming it to better reflect its current purpose.

Apply this diff to improve clarity:

-pub fn new_with_mempool(rpc: T, simnet_commands_tx: Sender<SimnetCommand>) -> Self {
+pub fn new_with_simnet_commands(rpc: T, simnet_commands_tx: Sender<SimnetCommand>) -> Self {
     let mut setup = TestSetup::new(rpc);
     setup.context.simnet_commands_tx = simnet_commands_tx;
     setup
}


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (2)
crates/cli/src/cli/simnet/mod.rs (1)

415-422: ⚠️ Potential issue

Review logging of deployment status for sensitive information.

The debug log might expose sensitive deployment information. Consider sanitizing or redacting sensitive data before logging.

crates/core/src/simnet/mod.rs (1)

494-600: 🛠️ Refactor suggestion

Add transaction validation and rate limiting.

The transaction processing logic lacks input validation and rate limiting, which could lead to resource exhaustion.

Consider:

  1. Adding input validation for transactions
  2. Implementing rate limiting
  3. Adding metrics for monitoring
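Point 2 can be sketched with a minimal token bucket (the capacity and refill numbers are placeholders, not tuned recommendations):

```rust
use std::time::Instant;

// Minimal token-bucket sketch for rate-limiting incoming transactions.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    /// Returns true if the request is admitted, false if rate-limited.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        // Refill proportionally to elapsed time, capped at capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(2.0, 0.0); // no refill: only 2 admits
    assert!(bucket.try_acquire());
    assert!(bucket.try_acquire());
    assert!(!bucket.try_acquire());
}
```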
♻️ Duplicate comments (2)
crates/cli/src/cli/simnet/mod.rs (1)

246-247: ⚠️ Potential issue

Error handling with .unwrap().

Currently calls .unwrap() after .map_err(...). This can abruptly crash. Consider logging and early return instead.

- .map_err(|e| format!("{}", e))
- .unwrap();
+ .map_err(|e| format!("Failed to spawn file watcher: {}", e))?;
crates/core/src/simnet/mod.rs (1)

255-255: ⚠️ Potential issue

Secure the Admin RPC extension.

The admin RPC extension potentially grants privileged capabilities. Consider requiring authentication or configuration gating in production.

🧹 Nitpick comments (4)
crates/core/src/types.rs (3)

28-39: Add documentation and consider using a more flexible type for Entry values.

The Collection and Entry structs would benefit from documentation explaining their purpose and usage. Additionally, consider using a more flexible type for Entry.values to support different value types.

 #[derive(Debug, Clone)]
+/// Represents a collection of entries with a unique identifier and name.
 pub struct Collection {
     pub uuid: Uuid,
     pub name: String,
     pub entries: Vec<Entry>,
 }

 #[derive(Debug, Clone)]
+/// Represents an entry with a unique identifier and key-value pairs.
 pub struct Entry {
     pub uuid: Uuid,
-    pub values: HashMap<String, String>,
+    pub values: HashMap<String, serde_json::Value>,
 }

51-55: Remove commented code and document the enum.

The SchemaDatasourceingEvent enum has commented-out parameters in the ApplyEntry variant. Either implement these parameters or remove the comment.

 #[derive(Debug, Clone, Deserialize, Serialize)]
+/// Events related to schema datasourcing operations.
 pub enum SchemaDatasourceingEvent {
     Rountrip(Uuid),
-    ApplyEntry(Uuid, String), //, SubgraphRequest, u64),
+    ApplyEntry(Uuid, String),
 }

166-171: Add documentation for SubgraphPluginConfig fields.

The SubgraphPluginConfig struct would benefit from documentation explaining the purpose of each field.

 #[derive(Debug, Clone, Deserialize, Serialize)]
+/// Configuration for a subgraph plugin.
 pub struct SubgraphPluginConfig {
+    /// Unique identifier for the plugin instance.
     pub uuid: Uuid,
+    /// IPC token for inter-process communication.
     pub ipc_token: String,
+    /// Request configuration for the subgraph.
     pub subgraph_request: SubgraphRequest,
 }
crates/cli/src/cli/simnet/mod.rs (1)

88-88: Future-proof the subgraph configuration initialization.

The empty SubgraphConfig might cause issues when fields are added in the future. Consider adding a proper initialization method or builder pattern.
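A builder along those lines might look like this (`endpoint_url` and `max_entries` are hypothetical future fields, not part of the current struct):

```rust
// Hypothetical builder so future fields get explicit defaults instead of
// silently relying on an empty struct literal.
#[derive(Debug, Clone)]
struct SubgraphConfig {
    endpoint_url: String, // illustrative future fields
    max_entries: usize,
}

#[derive(Default)]
struct SubgraphConfigBuilder {
    endpoint_url: Option<String>,
    max_entries: Option<usize>,
}

impl SubgraphConfigBuilder {
    fn endpoint_url(mut self, url: &str) -> Self {
        self.endpoint_url = Some(url.to_string());
        self
    }
    fn max_entries(mut self, n: usize) -> Self {
        self.max_entries = Some(n);
        self
    }
    fn build(self) -> SubgraphConfig {
        SubgraphConfig {
            endpoint_url: self
                .endpoint_url
                .unwrap_or_else(|| "http://127.0.0.1:8900/graphql".to_string()),
            max_entries: self.max_entries.unwrap_or(1024),
        }
    }
}

fn main() {
    let cfg = SubgraphConfigBuilder::default().max_entries(16).build();
    assert_eq!(cfg.max_entries, 16);
    assert_eq!(cfg.endpoint_url, "http://127.0.0.1:8900/graphql");
}
```

Callers always go through `build()`, so adding a field later only touches the builder, not every construction site.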

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 24be23b and c30bc7a.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
  • crates/cli/src/cli/mod.rs (1 hunks)
  • crates/cli/src/cli/simnet/mod.rs (12 hunks)
  • crates/cli/src/tui/simnet.rs (2 hunks)
  • crates/core/Cargo.toml (2 hunks)
  • crates/core/src/lib.rs (1 hunks)
  • crates/core/src/simnet/mod.rs (14 hunks)
  • crates/core/src/types.rs (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • crates/cli/src/cli/mod.rs
  • crates/cli/src/tui/simnet.rs
  • crates/core/src/lib.rs
  • crates/core/Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms (2)
  • GitHub Check: run_cargo_checks
  • GitHub Check: build
🔇 Additional comments (1)
crates/core/src/types.rs (1)

150-151: Implement or document the empty SubgraphConfig struct.

The SubgraphConfig struct is currently empty. Either implement the required fields or document why it's intentionally empty.

 #[derive(Clone, Debug, Default)]
+/// Configuration for subgraph functionality.
+/// TODO: Implement required configuration fields.
 pub struct SubgraphConfig {}

Comment on lines +309 to 313
let geyser_plugin_config_file = PathBuf::from("../../surfpool_subgraph_plugin.json");

let contents = "{\"name\": \"surfpool-subgraph\", \"libpath\": \"target/release/libsurfpool_subgraph.dylib\"}";
let result: serde_json::Value = json5::from_str(&contents).unwrap();


⚠️ Potential issue

Remove hardcoded plugin configuration.

The plugin configuration is hardcoded with fixed paths and values. This should be configurable through proper configuration files.

- let geyser_plugin_config_file = PathBuf::from("../../surfpool_subgraph_plugin.json");
- let contents = "{\"name\": \"surfpool-subgraph\", \"libpath\": \"target/release/libsurfpool_subgraph.dylib\"}";
+ let geyser_plugin_config_file = config.plugin_config_path.get(0).ok_or_else(|| {
+     GeyserPluginManagerError::InvalidPluginPath
+ })?;
+ let mut file = File::open(geyser_plugin_config_file)?;
+ let mut contents = String::new();
+ file.read_to_string(&mut contents)?;

@lgalabru lgalabru merged commit 948aa12 into main Feb 17, 2025
3 checks passed
@lgalabru lgalabru deleted the feat/subgraph branch February 17, 2025 14:30