feat: subgraphs #41
Conversation
Walkthrough
This pull request introduces extensive modifications across the project. Workspace metadata and dependency definitions are updated in multiple Cargo.toml files, adding new packages such as surfpool-gql and surfpool-subgraph. The CLI has been enhanced with a new watch flag and refined event handling, while the HTTP module now includes additional asynchronous GraphQL endpoints and routes. Core modules have been refactored to use workspace dependencies, update RPC interfaces, and restructure simnet command/event handling. New GraphQL types and resolvers have been added, and a subgraph plugin implementing the GeyserPlugin trait is introduced along with its configuration file.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant C as Client
    participant HS as HTTP Server (start_server)
    participant GR as GraphQL Resolver
    participant CTX as GqlContext/SchemaDatasource
    C->>HS: HTTP Request (/gql/v1/...)
    HS->>CTX: Initialize context and schema
    HS->>GR: Route request via post_graphql/get_graphql/subscriptions etc.
    GR->>CTX: Resolve query/mutation
    CTX-->>GR: Return data or error
    GR-->>HS: Return GraphQL response
    HS-->>C: Send HTTP response
```

```mermaid
sequenceDiagram
    participant U as User/CLI
    participant LS as Core (start_simnet)
    participant SM as Simnet Module
    participant SC as Subgraph Channel
    participant PM as Plugin Manager
    U->>LS: Invoke start_simnet with config & command channels
    LS->>SM: Forward simnet_commands & subgraph_commands
    SM->>SC: Process subgraph events (e.g., TransactionSimulated, PluginLoaded)
    SM->>PM: Relay plugin management commands
    PM-->>SM: Acknowledge updates
    SM-->>LS: Emit simulation events and status
    LS->>U: Return simulation status
```
Actionable comments posted: 6
🧹 Nitpick comments (34)
crates/gql/src/subscription.rs (3)
20-27: Consider using `&str` instead of `&String` for parameters.
In many cases in Rust, a string reference (`&str`) is preferable over `&String` to avoid unnecessary indirection.

```diff
- pub fn new(name: &String, entry: &Entry) -> Self {
+ pub fn new(name: &str, entry: &Entry) -> Self {
```
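As a stdlib-only illustration of this pattern (the `Greeter` type is hypothetical, not part of the reviewed crate), a `&str` parameter accepts string literals, `&String`, and slices alike via deref coercion:

```rust
// Sketch of the `&str` vs `&String` parameter choice; `Greeter` is a
// hypothetical type used only to demonstrate the pattern.
struct Greeter {
    name: String,
}

impl Greeter {
    // Accepting `&str` lets callers pass string literals or `&String`
    // without an extra layer of indirection.
    fn new(name: &str) -> Self {
        Self {
            name: name.to_string(),
        }
    }
}

fn main() {
    let owned = String::from("alice");
    // Both call sites work: `&String` coerces to `&str` automatically.
    let a = Greeter::new(&owned);
    let b = Greeter::new("bob");
    println!("{} {}", a.name, b.name);
}
```

A `&String` parameter would force callers holding a `&str` to allocate a `String` first, which is why clippy flags it as `ptr_arg`.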
80-86: Minor style improvement for parameters.
Similar to `EntryData::new`, consider using `&str` instead of `&String` here as well, unless there's a specific reason to require `&String`.
118-137: Duplicate implementation logic.
`DynamicSubscription` appears to replicate the `Subscription::entries_event` pattern entirely. If different behavior isn't planned, consider reducing duplication by reusing shared logic or consolidating into one subscription type.

crates/core/src/types.rs (3)
48-52: Spelling error in the variant `Rountrip`.
Consider correcting to `Roundtrip` for clarity.

```diff
- Rountrip(Uuid),
+ Roundtrip(Uuid),
```
51-51: Unused comment in variant `ApplyEntry`.
The commented-out portion hints at an incomplete or planned parameter. Remove or complete it if necessary.

```diff
- ApplyEntry(Uuid, String), //, SubgraphRequest, u64),
+ ApplyEntry(Uuid, String),
```
130-131: `SubgraphConfig` struct is empty.
If more fields are planned, consider marking them as `TODO` or removing the struct until needed.

crates/gql/src/query.rs (1)
40-49: Use of `uuid.clone()`.
Cloning a `Uuid` is inexpensive, but you can often pass it by value since `Uuid` implements `Copy`. Consider that if performance or cleanliness is a concern.

```diff
- pub fn new(uuid: &Uuid, name: &str) -> Self {
-     Self {
-         name: name.to_case(Case::Pascal),
-         subgraph_uuid: uuid.clone(),
-         description: None,
-         fields: vec![],
-     }
+ pub fn new(uuid: Uuid, name: &str) -> Self {
+     Self {
+         name: name.to_case(Case::Pascal),
+         subgraph_uuid: uuid,
+         description: None,
+         fields: vec![],
+     }
  }
```

crates/subgraph/src/lib.rs (4)
18-24: Consider adding doc comments for the new struct.
Documenting each field's purpose helps improve maintainability and clarity.

30-41: Improve error handling in `on_load`.
Currently, `serde_json::from_str(...).unwrap()` may panic on invalid config. Consider returning a descriptive `PluginResult` error instead.

```diff
 fn on_load(&mut self, config_file: &str, _is_reload: bool) -> PluginResult<()> {
-    let config = serde_json::from_str::<SubgraphPluginConfig>(&config_file).unwrap();
+    let config = serde_json::from_str::<SubgraphPluginConfig>(&config_file)
+        .map_err(|e| GeyserPluginError::ConfigFileError(e.to_string()))?;
```
42-43: Empty `on_unload` method.
Please confirm whether resource cleanup or final logging is needed to avoid leaks or orphaned resources.

48-67: Unreachable code in `update_account`.
These arms for `ReplicaAccountInfoVersions::V0_0_1` and `V0_0_2` remain unused. If future support is not planned, consider removing them.

crates/core/src/rpc/admin.rs (2)
16-119: Many unimplemented trait methods.
Ensure they're handled or stubbed out before production to prevent runtime issues.

121-247: `load_plugin` method uses `unwrap`.
This can panic if `config_file` is invalid JSON. Also, consider making the 10-second timeout configurable.

crates/cli/src/runbook/mod.rs (1)
1-192: Enhancements in `execute_runbook`.
- The runbook state loading logic is thorough.
- Consider robust error handling around dialoguer's `Confirm::interact()` to avoid potential panics.

crates/cli/src/http/mod.rs (4)
14-21: Possible naming confusion.
The event name `SchemaDatasourceingEvent` might be incomplete or a leftover spelling. Consider renaming it to something like `SchemaDataSourcingEvent` or another more descriptive name.

32-36: Unused `_ctx` parameter.
The `_ctx` parameter is defined but not used in the function body. If not required, consider removing or renaming it.

39-47: Consider removing the `Option` wrapper around the schema.
Currently, it is always set to `Some(...)`. If you do not expect `None`, you can simplify the logic by removing `Option`.

```diff
- let schema = RwLock::new(Some(new_dynamic_schema(schema_datasource.clone())));
+ let schema = RwLock::new(new_dynamic_schema(schema_datasource.clone()));
```
228-245: Configurable keep-alive interval.
You're hardcoding a 15-second interval for WebSocket keep-alive. Expose this setting for production flexibility if necessary.

crates/cli/src/cli/simnet/mod.rs (6)
21-24: File watching with `notify`.
Using `notify` is a robust approach, but be mindful of potential performance overhead when watching large directories.

100-103: Unbounded channels usage.
`crossbeam::channel::unbounded()` risks unbounded memory growth if traffic is high. Consider a bounded channel if load might spike.
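The reviewed code uses crossbeam channels; as a stdlib-only sketch of the bounded-channel idea suggested here, `std::sync::mpsc::sync_channel` plays the role of `crossbeam::channel::bounded` (the function below is illustrative, not part of the codebase):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Produce `n` items through a bounded channel of capacity `cap` and collect
// them. Sends block when the buffer is full, giving natural backpressure
// instead of unbounded memory growth.
fn produce_and_collect(cap: usize, n: u32) -> Vec<u32> {
    let (tx, rx) = sync_channel::<u32>(cap);
    let producer = thread::spawn(move || {
        for i in 0..n {
            // Blocks whenever the buffer already holds `cap` items.
            tx.send(i).expect("receiver dropped");
        }
        // `tx` is dropped here, closing the channel and ending `rx.iter()`.
    });
    let received: Vec<u32> = rx.iter().collect();
    producer.join().unwrap();
    received
}

fn main() {
    let received = produce_and_collect(2, 5);
    println!("received {:?}", received);
}
```

The trade-off is that a slow consumer now stalls producers rather than letting the queue grow, which is usually the safer failure mode under load spikes.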
104-104: Port override.
`DEFAULT_EXPLORER_PORT` is hardcoded. Provide a configuration option if multiple services share ports or if the user needs to override it.

187-188: File watcher thread.
One-second polling might be fine for small changes, but consider a custom interval or event debouncing for heavier usage.

188-200: Comparing file contents.
`with_compare_contents(true)` can be expensive for big files. Evaluate whether filename-based triggers suffice.

252-252: Cleanup placeholder.
A line comment indicates state cleanup. Ensure finalizing steps actually occur, if needed.

crates/core/src/simnet/mod.rs (4)
270-303: Plugin config path logic.
Much code is commented out. If you plan to parse from JSON, handle all edge cases thoroughly.

486-495: `TransactionReceived` command.
The transaction is stored under `EntryStatus::Received`. Confirm whether duplicates or replayed transactions should be handled differently.

541-558: Error logging.
Logging all simulation failures is good. Consider including additional detail about the failing instruction or relevant account for easier debugging.

601-604: Post-processing.
Marking transactions as `Success(Confirmed)` is accurate. Consider additional states if final irreversibility differs from local confirmation.

crates/core/src/rpc/full.rs (3)
378-384: Check for potential error from `simnet_commands_tx.send`.
The result of `send(...)` is ignored, which could mask channel closure errors or concurrency issues. Consider handling the `send` result:

```diff
- let _ = ctx.simnet_commands_tx.send(...)
+ if let Err(e) = ctx.simnet_commands_tx.send(...) {
+     log::error!("Failed to send TransactionReceived command: {:?}", e);
+ }
```

387-398: Consolidate repeated error handling for `SimulationFailure`.
Error formatting for simulation failures (logs and message text) largely duplicates the logic in the execution failure branch. Consider extracting a helper function to maintain consistency and reduce redundancy.

399-410: Consolidate repeated error handling for `ExecutionFailure`.
This block closely mirrors the simulation failure handling. A shared helper could streamline both branches and ensure synchronized enhancements if error reporting evolves later.
crates/gql/src/mutation.rs (1)
4-13: Consider referencing the version from a single constant.
Returning `"1.0"` directly works, but if the API version changes in multiple places, it might be easy to miss an update. Using a constant or configuration variable would improve maintainability.

crates/gql/src/lib.rs (1)
14-19: Consider making the broadcast channel size configurable.
The broadcast channel size is hardcoded to 128 in the constructor. Consider making this configurable to allow tuning based on load and requirements.

```diff
  #[derive(Clone, Debug)]
  pub struct Context {
      pub uuid_lookup: Arc<RwLock<BTreeMap<Uuid, String>>>,
      pub entries_store: Arc<RwLock<BTreeMap<String, (Uuid, Vec<EntryData>)>>>,
-     pub entries_broadcaster: tokio::sync::broadcast::Sender<EntryUpdate>,
+     pub entries_broadcaster: tokio::sync::broadcast::Sender<EntryUpdate>,
+     pub broadcast_channel_size: usize,
  }
```

Cargo.toml (1)
21-66: Unified Dependency Management via Workspace Dependencies
A large number of dependencies, especially the Solana-related ones, are now either pinned to version `"2.1.10"` or routed via workspace references. This approach centralizes dependency control and enforces version consistency across modules. Consider reviewing whether the hard pinning to `"2.1.10"` is ideal for future maintainability or if a more flexible versioning strategy (or a central version variable) might be beneficial.
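A sketch of the centralization idea in Cargo terms (the crate names shown are from the review above, but the exact layout is illustrative, not the project's actual manifest):

```toml
# Illustrative: declare the pinned version once in the workspace root,
# so member crates inherit it instead of repeating "2.1.10" everywhere.
[workspace.dependencies]
solana-sdk = "2.1.10"
solana-client = "2.1.10"

# In a member crate's Cargo.toml, reference the workspace entry:
# [dependencies]
# solana-sdk = { workspace = true }
```

Bumping the Solana toolchain then becomes a one-line change in the root manifest.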
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (22)
- `Cargo.toml` (2 hunks)
- `crates/cli/Cargo.toml` (2 hunks)
- `crates/cli/src/cli/mod.rs` (1 hunks)
- `crates/cli/src/cli/simnet/mod.rs` (13 hunks)
- `crates/cli/src/http/mod.rs` (4 hunks)
- `crates/cli/src/runbook/mod.rs` (5 hunks)
- `crates/cli/src/tui/simnet.rs` (3 hunks)
- `crates/core/Cargo.toml` (2 hunks)
- `crates/core/src/lib.rs` (1 hunks)
- `crates/core/src/rpc/admin.rs` (1 hunks)
- `crates/core/src/rpc/full.rs` (2 hunks)
- `crates/core/src/rpc/mod.rs` (4 hunks)
- `crates/core/src/simnet/mod.rs` (13 hunks)
- `crates/core/src/types.rs` (4 hunks)
- `crates/gql/Cargo.toml` (1 hunks)
- `crates/gql/src/lib.rs` (1 hunks)
- `crates/gql/src/mutation.rs` (1 hunks)
- `crates/gql/src/query.rs` (1 hunks)
- `crates/gql/src/subscription.rs` (1 hunks)
- `crates/subgraph/Cargo.toml` (1 hunks)
- `crates/subgraph/src/lib.rs` (1 hunks)
- `surfpool_subgraph_plugin.json` (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- surfpool_subgraph_plugin.json
- crates/subgraph/Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: build
🔇 Additional comments (93)
crates/gql/src/subscription.rs (5)
14-18: Struct definition looks good.
This struct straightforwardly captures the `Entry` and its name.

29-47: Check correctness of generated GraphQL type name and fields.
Ensuring the `name(spec: &SchemaDatasourceEntry)` function returns the intended entity name is crucial to avoid naming collisions in the GraphQL schema. The approach to pushing fields also looks correct for dynamic fields.

74-79: Struct definition for `EntryUpdate` is consistent.
It mirrors the structure of `EntryData` and sets up clarity for separate upstream usage.

89-95: GraphQL object for `EntryUpdate` is concise.
The `uuid` resolver matches the typical read-only accessor pattern.

96-115: Infinite stream loop can block graceful shutdown.
This loop will run indefinitely and yield new events until the service stops. Ensure there's a plan to break out or handle channel closure so the thread or task doesn't block indefinitely.

crates/core/src/types.rs (12)
25-30: Struct `Collection` looks solid.
Capturing multiple entries under a shared UUID and name is straightforward.

32-36: `Entry` struct is clear.
Maintaining a UUID plus a dynamic map of key-value pairs is a flexible design.

38-46: Event variants are well-defined.
Log events and a shutdown variant adequately represent possible subgraph events.

54-59: SubgraphCommand usage.
The command variants for create, observe, and shutdown are comprehensive.

62-76: `SimnetEvent` enumerations are well-structured.
They cover a variety of simulation states, logs, and updates.

78-82: `TransactionStatusEvent` usage is clear.
It neatly distinguishes success, simulation failure, and execution failure with relevant data.

84-95: `SimnetCommand` is comprehensive.
Includes slot manipulation, clock updates, and transaction handling.

97-99: `PluginManagerCommand` loading logic.
Straightforward design for asynchronous plugin loading via channels.

101-106: `ClockCommand` states are minimal but effective.
The variants `Pause`, `Resume`, `Toggle`, and specifying slot intervals handle typical clock scenarios.

108-111: `ClockEvent` enumerations are basic and sufficient.
Provides a `Tick` event and an expiration event for blockhash.

113-117: Adding `subgraph` to `SurfpoolConfig`.
Expanding the config to store subgraph settings is aligned with the rest of these changes.

146-151: `SubgraphPluginConfig`.
Captures essential plugin config parameters effectively.

crates/gql/src/query.rs (8)
12-13: `Query` struct is a straightforward placeholder.
Serves as a root query object for the GraphQL schema.

15-30: `SchemaDatasource`.
Captures a map of entries. The `add_entry` method uses `to_case(Case::Camel)`, which might be worth verifying if you intend consistent naming everywhere.

32-38: `SchemaDatasourceEntry` definition is concise.
The fields `name`, `subgraph_uuid`, `description`, and `fields` provide a clear structure for each schema entry.

51-69: Check dynamic field creation in `meta()`.
Iterating over `spec.fields` and building GraphQL fields is an effective approach for dynamic schema generation.

71-78: `GraphQLValue` implementation.
The type references `SchemaDatasourceEntry` itself, which is consistent with how the metadata is being built.

80-84: `Query::new()` is standard.
Creates an empty root query object, an acceptable pattern for frameworks like Juniper.

86-105: GraphQL type definition for `Query`.
Defining the `apiVersion` field is a neat addition, and pushing dynamic fields for subgraph entries is on track.

107-114: `GraphQLValue` for `Query`.
Simply references the same `SchemaDatasource` info. The design is consistent.

crates/subgraph/src/lib.rs (11)
1-16: All imports appear consistent with plugin usage.
No issues found in these lines.

25-29: Plugin name method is straightforward.
No further feedback needed.

44-47: Empty `notify_end_of_startup` method.
No concerns if this is intentionally a no-op for now.

69-76: Slot status update is minimal.
This placeholder may suffice initially, but confirm whether slot transitions also require logging or other processing.

154-156: Empty `notify_entry`.
No concerns if future logic will be added later.

158-160: Empty `notify_block_metadata`.
No concerns if future logic will be added later.

162-165: Account data notifications enabled.
This aligns with the implemented `update_account` functionality.

166-169: Transaction notifications are disabled.
Yet there's extensive logic in `notify_transaction`. Verify whether this is the intended behavior.

170-173: Entry notifications disabled.
Likely fine if not needed by this plugin.

175-177: Attributes for dynamic loading.
`#[no_mangle]` and `#[allow(improper_ctypes_definitions)]` are standard for plugin exports.

178-184: Plugin creation function.
Implementation appears correct for returning a valid `GeyserPlugin` pointer.

crates/core/src/rpc/admin.rs (1)
1-15: Imports for AdminRpc.
No concerns here; these dependencies appear to be correctly declared.

crates/cli/src/runbook/mod.rs (1)
194-274: `display_snapshot_diffing` function.
Provides a clear, categorized view of changes and potential re-executions. Implementation is well-structured; no major issues observed.

crates/cli/src/http/mod.rs (6)
5-10: All new imports are appropriate.
No immediate issues are present.

147-151: GraphQL routing addition looks good.
Defining GET/POST/subscription routes under `/gql/v1` is clear and maintains RESTful separation.

197-211: Potential `.unwrap()` panic in `post_graphql`.
`schema.as_ref().unwrap()` can cause a runtime panic if the `Option` is `None`. Validate or handle this scenario more gracefully.

213-226: Check `.unwrap()` usage in `get_graphql`.
Same concern as in `post_graphql`. If `schema` is `None`, `.unwrap()` will panic.

247-249: Playground route.
Having a dedicated route for interactive API exploration is helpful.

251-253: GraphiQL route.
This route mirrors the Playground approach; nothing problematic stands out.

crates/cli/src/cli/simnet/mod.rs (9)
2-2: New imports.
Adding `PathBuf` is standard; no problems evident.

6-6: mpsc and Arc.
Looks consistent with concurrency usage in the rest of the code.

33-35: Extended types.
Using `SimnetEvent`, `SubgraphConfig`, and `SubgraphEvent` is consistent with the new subgraph integration. Confirm all references are updated across the codebase.

88-88: Empty `SubgraphConfig`.
`SubgraphConfig {}` initializes with no fields. Ensure your logic doesn't require initial fields for subgraph usage.

105-114: Server startup logic.
The error handling with `.await` and `map_err` is practical. No issues spotted.

116-118: Cloning context and config.
Straightforward duplication for thread usage; no concerns.

156-157: Deploy progress channel.
Another unbounded channel for progress messages. This is acceptable for smaller projects; just keep in mind potential queue growth.

256-262: Unified event logging.
Capturing simnet and subgraph events in the same logger is a good approach for consolidated debugging.

275-277: Stopping the explorer handle.
`explorer_handle.stop(true)` forcibly halts the server. Confirm you want an immediate stop if active requests exist.

crates/core/src/simnet/mod.rs (13)
5-5: Chrono usage.
Adopting `Local` and `Utc` is valid for consistent timestamps. No issues.

8-11: ipc_channel additions.
Provides advanced IPC. Check cross-platform compatibility and test thoroughly.

25-25: Transaction version.
Explicitly referencing `TransactionVersion` clarifies multi-version transaction support.

44-45: Re-exports for `SurfpoolMiddleware`.
Good approach to keep the codebase structured. No immediate issues.

46-49: Expanded references.
Linking `SimnetCommand`, `SimnetEvent`, and `SubgraphCommand` fosters synergy between simnet and subgraph logic.

188-188: PathBuf usage.
Ensure path references are sanitized and validated if controlled by the user.

194-194: `start(...)` signature extended.
Accepting subgraph-related channels is logical for plugin integration.

227-228: Plugin manager commands channel.
Ensure concurrency usage is thoroughly tested, especially with multiple plugin loads.

232-233: Crossbeam channels in middleware.
Attaching both simnet and plugin commands is a known concurrency pattern.

265-265: RouterProxy usage.
`ipc_router = RouterProxy::new()` is powerful for routing. Evaluate performance and potential security boundaries.

344-355: Subgraph plugin loading.
After dynamic loading, you dispatch a `SubgraphCommand`. Consider verifying the plugin is ready before usage. Also applies to: 357-357

509-559: Conditional preflight checks.
Great for flexibility. If `skip_preflight` is on, be sure to handle or log invalid instructions that might appear later.

560-594: Transaction execution flow.
Plugins are updated on success; errors are also logged. Verify that partial plugin states don't persist if plugin handling fails mid-transaction.

crates/core/src/rpc/full.rs (3)
2-2: Seems fine.
The new import of `TransactionStatusEvent` accurately reflects its usage for transaction status handling.

4-4: Importing `Itertools` for log message joining is acceptable.
No concerns; using `join` is a clear and concise approach.

412-412: Ensure no unhandled success statuses remain.
If future commitment levels or new success statuses are introduced, this loop may continue indefinitely. Verify that all success modes are covered or add a default case to avoid unexpected hangs. Also applies to: 416-416, 420-420
crates/gql/src/mutation.rs (2)
1-2: Imports look good.
The `Context` import is critical for the `#[graphql_object]` annotations, and `juniper_codegen` usage is expected.

15-24: Parallel structure with `Mutation` is consistent.
Everything looks good here. Keeping the same `api_version` approach across both mutations is logical.

crates/core/src/lib.rs (2)
24-24: Imports aggregated into `types` are consistent.
No issues with grouping `SimnetCommand`, `SimnetEvent`, `SubgraphCommand`, and `SurfpoolConfig`.

28-28: Refined function signature for `start_simnet`.
Adding `subgraph_commands_tx` and `simnet_commands_tx` broadens the function's responsibilities. Verify that channel usage is correctly orchestrated and well-tested to avoid deadlocks or dropped messages. Also applies to: 30-30, 33-40
crates/gql/src/lib.rs (4)
12-12: Verify if the commented-out module is needed.
The `types` module is commented out. Please verify whether this module should be uncommented or removed.

21-30: LGTM! The Context implementation is clean and initializes all fields appropriately.

32-32: LGTM! The empty implementation of `juniper::Context` is appropriate as it's a marker trait.

34-45: LGTM! The schema type and constructor are well-implemented following Juniper's best practices.
crates/core/src/rpc/mod.rs (3)
28-33: LGTM! The RunloopContext has been appropriately updated to use command-based communication, aligning with the broader restructuring of the codebase.

84-89: LGTM! The SurfpoolMiddleware has been consistently updated to match the RunloopContext changes.

76-79: LGTM! The imports have been appropriately updated to support the new command-based architecture.
crates/cli/src/cli/mod.rs (1)
109-111: LGTM! The watch flag is well-documented and properly configured with appropriate clap attributes.
crates/cli/src/tui/simnet.rs (3)
21-21: LGTM! The import path has been correctly updated to reflect the new module structure.

236-242: LGTM! The PluginLoaded event handler is well-implemented with appropriate logging.

281-281: LGTM! The event has been appropriately renamed to better reflect its purpose.

crates/gql/Cargo.toml (2)
crates/gql/Cargo.toml (2)
1-10
: Workspace Metadata Setup Looks SolidAll the package metadata keys (name, description, version, edition, license, repository, keywords, categories) are configured to inherit from the workspace. This consistent use of workspace settings simplifies maintenance across the project.
13-24
: Clear and Consistent Dependency DeclarationsThe dependencies are defined with explicit versions and feature flags where needed. Using workspace references (e.g., for
surfpool-core
) ensures consistency with other modules. The commented-out entries (e.g., fortxtx-addon-kit
and an alternativejuniper_codegen
specification) are noted—make sure they are kept only as references if not needed.crates/cli/Cargo.toml (2)
18-21: Integrating Workspace Dependencies for CLI
The addition of `surfpool-gql` and the update of both `txtx-core` and `txtx-addon-network-svm` to use workspace references aligns well with the modular, workspace-centric approach. Please verify that these changes integrate smoothly with the rest of the build.

46-52: New Dependencies for Enhanced Functionality
The inclusion of dependencies like `notify`, `juniper_actix`, `juniper_graphql_ws`, and `juniper`, along with workspace-referenced packages such as `ipc-channel` and `bincode`, is well-defined. Double-check that the specified version numbers and feature flags (e.g., `"subscriptions"` for `juniper_actix` and `"graphql-transport-ws"` for `juniper_graphql_ws`) meet your use-case requirements.

Cargo.toml (2)
1-4: Descriptive Workspace Package Information Added
The new `description` field in the `[workspace.package]` section provides a concise overview of the project. This is a helpful addition for documentation and new contributors.

11-17: Updated Workspace Members
The updated `members` array now includes `"crates/gql"` and `"crates/subgraph"`, which accurately reflects the project's expanded modular structure. Ensure that these new modules have been properly configured and tested within the workspace context.

crates/core/Cargo.toml (3)
17-23: Transition to Workspace Dependencies for Core Functionality
Updating dependencies such as `bincode`, `serde`, `serde_derive`, and `serde_json` to use workspace references enhances consistency in dependency management. Please ensure that the workspace-level definitions properly resolve these libraries.

41-71: Extensive Workspace Dependency Updates for Solana-Related Modules
The migration of numerous Solana-related dependencies (e.g., `solana-sdk`, `solana-program-test`, `solana-client`, etc.) to workspace references underscores a move toward centralized version management. Verify that all dependent modules and integration tests compile correctly against these workspace dependencies.

77-79: Addition of Further Dependencies
The inclusion of `txtx-addon-network-svm`, `uuid`, and `ipc-channel` with workspace references further unifies the dependency management strategy in `surfpool-core`. As with other modules, please confirm that these changes work harmoniously within the overall architecture.
crates/gql/src/subscription.rs (outdated)

```rust
impl GraphQLValue<DefaultScalarValue> for EntryData {
    type Context = Context;
    type TypeInfo = SchemaDatasourceEntry;

    fn type_name<'i>(&self, info: &'i Self::TypeInfo) -> Option<&'i str> {
        <EntryData as GraphQLType>::name(info)
    }

    fn resolve_field(
        &self,
        info: &SchemaDatasourceEntry,
        field_name: &str,
        args: &Arguments,
        executor: &Executor<Context>,
    ) -> ExecutionResult {
        match field_name {
            "uuid" => executor.resolve_with_ctx(&(), &self.entry.uuid.to_string()),
            field_name => {
                let value = self.entry.values.get(field_name).unwrap();
                executor.resolve_with_ctx(&(), value)
            }
        }
    }
}
```
Avoid potential panic on missing keys.
Line 67 unwraps a value from `self.entry.values` without handling the case where `field_name` might be absent, which can cause a server panic. Consider returning a `FieldError` or a default value instead.

```diff
- let value = self.entry.values.get(field_name).unwrap();
+ let value = match self.entry.values.get(field_name) {
+     Some(v) => v,
+     None => return Err(FieldError::new(
+         format!("Field '{}' not found in entry values", field_name),
+         graphql_value!({ "invalid_field": field_name })
+     ))
+ };
```
```rust
impl GraphQLValueAsync<DefaultScalarValue> for Query {
    fn resolve_field_async(
        &self,
        info: &SchemaDatasource,
        field_name: &str,
        _arguments: &Arguments,
        executor: &Executor<Context>,
    ) -> BoxFuture<ExecutionResult> {
        let res = match field_name {
            "apiVersion" => executor.resolve_with_ctx(&(), "1.0"),
            subgraph_name => {
                let database = executor.context();
                let subgraph_db = database.entries_store.read().unwrap();
                let entry = info.entries.get(subgraph_name).unwrap();
                let (_, entries) = subgraph_db.get(subgraph_name).unwrap();
                executor.resolve_with_ctx(entry, &entries[..])
            }
        };
        Box::pin(async { res })
    }
```
Avoid `unwrap()` calls for store lookups.
Lines 129–130 could panic if the store is missing a key or if the read lock is poisoned. Consider returning an error result or logging gracefully instead of using `unwrap()`.

```diff
- let (_, entries) = subgraph_db.get(subgraph_name).unwrap();
+ let entries = match subgraph_db.get(subgraph_name) {
+     Some((_, e)) => e,
+     None => return Err(juniper::FieldError::new(
+         "Subgraph not found",
+         graphql_value!({ "requested_subgraph": subgraph_name })
+     )),
+ };
```

Committable suggestion skipped: line range outside the PR's diff.
crates/subgraph/src/lib.rs
Outdated
fn notify_transaction(
    &self,
    transaction: ReplicaTransactionInfoVersions,
    slot: Slot,
) -> PluginResult<()> {
    let Ok(tx) = self.subgraph_indexing_event_tx.lock() else {
        return Ok(());
    };
    let tx = tx.as_ref().unwrap();
    let Some(ref subgraph_request) = self.subgraph_request else {
        return Ok(());
    };
    match transaction {
        ReplicaTransactionInfoVersions::V0_0_2(data) => {
            let _ = tx.send(SchemaDatasourceingEvent::ApplyEntry(
                self.uuid,
                data.signature.to_string(),
                // subgraph_request.clone(),
                // slot,
            ));
            if data.is_vote {
                return Ok(());
            }
            let Some(ref inner_instructions) = data.transaction_status_meta.inner_instructions
            else {
                return Ok(());
            };
            for inner_instructions in inner_instructions.iter() {
                for instruction in inner_instructions.instructions.iter() {
                    let instruction = &instruction.instruction;
                    let decoded_data = bs58::decode(&instruction.data)
                        .into_vec()
                        .map_err(|e| GeyserPluginError::Custom(Box::new(e)))?;
                    // it's not valid cpi event data if there isn't an 8-byte signature
                    if decoded_data.len() < 8 {
                        continue;
                    }
                    let eight_bytes = decoded_data[0..8].to_vec();
                    let decoded_signature = bs58::decode(eight_bytes)
                        .into_vec()
                        .map_err(|e| GeyserPluginError::Custom(Box::new(e)))?;
                    for field in subgraph_request.fields.iter() {
                        match &field.data_source {
                            IndexedSubgraphSourceType::Instruction(
                                instruction_subgraph_source,
                            ) => {
                                continue;
                            }
                            IndexedSubgraphSourceType::Event(event_subgraph_source) => {
                                if event_subgraph_source
                                    .event
                                    .discriminator
                                    .eq(decoded_signature.as_slice())
                                {
                                    println!(
                                        "found event with match!!!: {:?}",
                                        event_subgraph_source.event.name
                                    );
                                    let _ = tx.send(SchemaDatasourceingEvent::ApplyEntry(
                                        self.uuid,
                                        data.signature.to_string(),
                                        // subgraph_request.clone(),
                                        // slot,
                                    ));
                                }
                            }
                        }
                    }
                }
            }
        }
        ReplicaTransactionInfoVersions::V0_0_1(_) => {}
    }
    Ok(())
}
🛠️ Refactor suggestion
Potential mismatch & concurrency considerations in `notify_transaction`
- The method is fully implemented, but `transaction_notifications_enabled()` returns false.
- There's a double base58 decode (lines 108 & 116) which could be unintentional.
- Consider explicit handling of `Mutex` lock failures rather than silently returning `Ok(())`.
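A sketch of the explicit lock handling suggested above, using std's `Mutex` and a placeholder sender type; the real field is an `Option` of a channel sender behind a `Mutex`, so the names here are illustrative only:

```rust
use std::sync::Mutex;

// Placeholder stand-in for the plugin's optional event sender; the actual
// field holds a channel Sender rather than a String.
struct Plugin {
    event_tx: Mutex<Option<String>>,
}

impl Plugin {
    // Surfaces a poisoned lock as an error instead of silently returning Ok(()).
    fn notify(&self) -> Result<(), String> {
        let guard = self
            .event_tx
            .lock()
            .map_err(|e| format!("subgraph event lock poisoned: {e}"))?;
        let Some(tx) = guard.as_ref() else {
            // No sender configured yet: nothing to do, and not an error.
            return Ok(());
        };
        println!("would send event via {tx}");
        Ok(())
    }
}

fn main() {
    let plugin = Plugin {
        event_tx: Mutex::new(Some("tx".to_string())),
    };
    plugin.notify().unwrap();
}
```

The same `map_err` pattern also removes the `tx.as_ref().unwrap()` on the line below the lock, since the missing-sender case is handled by the `let else`.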
crates/cli/src/http/mod.rs
Outdated
let _handle = hiro_system_kit::thread_named("subgraph")
    .spawn(move || {
        let mut observers = vec![];
        loop {
            let mut selector = Select::new();
            let mut handles = vec![];
            selector.recv(&subgraph_commands_rx);
            for rx in observers.iter() {
                handles.push(selector.recv(rx));
            }
            let oper = selector.select();
            match oper.index() {
                0 => match oper.recv(&subgraph_commands_rx) {
                    Err(_e) => {
                        // todo
                    }
                    Ok(cmd) => match cmd {
                        SubgraphCommand::CreateSubgraph(uuid, config, sender) => {
                            let mut gql_schema = gql_schema_copy.write().unwrap();
                            let subgraph_uuid = uuid;
                            let subgraph_name = config.subgraph_name.clone();
                            let mut schema =
                                SchemaDatasourceEntry::new(&subgraph_uuid, &subgraph_name);
                            for fields in config.fields.iter() {
                                schema.fields.push(fields.display_name.clone());
                            }
                            schema_datasource.add_entry(schema);
                            gql_schema.replace(new_dynamic_schema(schema_datasource.clone()));
                            use convert_case::{Case, Casing};
                            let gql_context = gql_context_copy.write().unwrap();
                            let mut entries_store = gql_context.entries_store.write().unwrap();
                            let mut lookup = gql_context.uuid_lookup.write().unwrap();
                            lookup.insert(
                                subgraph_uuid.clone(),
                                subgraph_name.to_case(Case::Camel),
                            );
                            entries_store.insert(
                                subgraph_name.to_case(Case::Camel),
                                (subgraph_uuid.clone(), vec![]),
                            );
                            let _ = sender.send("http://127.0.0.1:8900/graphql".into());
                        }
                        SubgraphCommand::ObserveSubgraph(subgraph_observer_rx) => {
                            observers.push(subgraph_observer_rx);
                        }
                        SubgraphCommand::Shutdown => {
                            let _ = subgraph_events_tx.send(SubgraphEvent::Shutdown);
                        }
                    },
                },
                i => match oper.recv(&observers[i - 1]) {
                    Ok(cmd) => match cmd {
                        SchemaDatasourceingEvent::ApplyEntry(
                            uuid,
                            value, /* , request, slot*/
                        ) => {
                            let gql_context = gql_context_copy.write().unwrap();
                            let uuid_lookup = gql_context.uuid_lookup.read().unwrap();
                            let subgraph_name = uuid_lookup.get(&uuid).unwrap();
                            let mut entries_store = gql_context.entries_store.write().unwrap();
                            let (_uuid, entries) =
                                entries_store.get_mut(subgraph_name).unwrap();
                            let mut values = HashMap::new();
                            values.insert("message".to_string(), value);
                            let entry_uuid = Uuid::new_v4();
                            entries.push(EntryData {
                                name: subgraph_name.clone(),
                                entry: Entry {
                                    uuid: entry_uuid,
                                    values,
                                },
                            });
                        }
                        SchemaDatasourceingEvent::Rountrip(_uuid) => {}
                    },
                    Err(_e) => {}
                },
            }
        }
        Ok::<(), String>(())
    })
Thread loop continues after receiving shutdown signal.
When `SubgraphCommand::Shutdown` is received, you send `SubgraphEvent::Shutdown` but don't break out of the loop. Consider breaking out to terminate gracefully.
- SubgraphCommand::Shutdown => {
- let _ = subgraph_events_tx.send(SubgraphEvent::Shutdown);
- }
+ SubgraphCommand::Shutdown => {
+ let _ = subgraph_events_tx.send(SubgraphEvent::Shutdown);
+ break; // Break out of the loop to shut down gracefully
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let _handle = hiro_system_kit::thread_named("subgraph")
    .spawn(move || {
        let mut observers = vec![];
        loop {
            let mut selector = Select::new();
            let mut handles = vec![];
            selector.recv(&subgraph_commands_rx);
            for rx in observers.iter() {
                handles.push(selector.recv(rx));
            }
            let oper = selector.select();
            match oper.index() {
                0 => match oper.recv(&subgraph_commands_rx) {
                    Err(_e) => {
                        // todo
                    }
                    Ok(cmd) => match cmd {
                        SubgraphCommand::CreateSubgraph(uuid, config, sender) => {
                            let mut gql_schema = gql_schema_copy.write().unwrap();
                            let subgraph_uuid = uuid;
                            let subgraph_name = config.subgraph_name.clone();
                            let mut schema =
                                SchemaDatasourceEntry::new(&subgraph_uuid, &subgraph_name);
                            for fields in config.fields.iter() {
                                schema.fields.push(fields.display_name.clone());
                            }
                            schema_datasource.add_entry(schema);
                            gql_schema.replace(new_dynamic_schema(schema_datasource.clone()));
                            use convert_case::{Case, Casing};
                            let gql_context = gql_context_copy.write().unwrap();
                            let mut entries_store = gql_context.entries_store.write().unwrap();
                            let mut lookup = gql_context.uuid_lookup.write().unwrap();
                            lookup.insert(
                                subgraph_uuid.clone(),
                                subgraph_name.to_case(Case::Camel),
                            );
                            entries_store.insert(
                                subgraph_name.to_case(Case::Camel),
                                (subgraph_uuid.clone(), vec![]),
                            );
                            let _ = sender.send("http://127.0.0.1:8900/graphql".into());
                        }
                        SubgraphCommand::ObserveSubgraph(subgraph_observer_rx) => {
                            observers.push(subgraph_observer_rx);
                        }
                        SubgraphCommand::Shutdown => {
                            let _ = subgraph_events_tx.send(SubgraphEvent::Shutdown);
                            break; // Break out of the loop to shut down gracefully
                        }
                    },
                },
                i => match oper.recv(&observers[i - 1]) {
                    Ok(cmd) => match cmd {
                        SchemaDatasourceingEvent::ApplyEntry(
                            uuid,
                            value, /* , request, slot*/
                        ) => {
                            let gql_context = gql_context_copy.write().unwrap();
                            let uuid_lookup = gql_context.uuid_lookup.read().unwrap();
                            let subgraph_name = uuid_lookup.get(&uuid).unwrap();
                            let mut entries_store = gql_context.entries_store.write().unwrap();
                            let (_uuid, entries) =
                                entries_store.get_mut(subgraph_name).unwrap();
                            let mut values = HashMap::new();
                            values.insert("message".to_string(), value);
                            let entry_uuid = Uuid::new_v4();
                            entries.push(EntryData {
                                name: subgraph_name.clone(),
                                entry: Entry {
                                    uuid: entry_uuid,
                                    values,
                                },
                            });
                        }
                        SchemaDatasourceingEvent::Rountrip(_uuid) => {}
                    },
                    Err(_e) => {}
                },
            }
        }
        Ok::<(), String>(())
    })
.map_err(|e| format!("{}", e))
.unwrap();
Error handling with `.unwrap()`.
Currently calls `.unwrap()` after `.map_err(...)`. This can abruptly crash. Consider logging and early return instead.
- .map_err(|e| format!("{}", e))
- .unwrap();
+ .map_err(|e| format!("Failed to spawn file watcher: {}", e))?;
📝 Committable suggestion
.map_err(|e| format!("Failed to spawn file watcher: {}", e))?;
io.extend_with(rpc::admin::SurfpoolAdminRpc.to_delegate());
|
Admin RPC extension.
Potentially grants privileged capabilities. Consider requiring authentication or config gating in production.
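A sketch of the config gating suggested here; the `enable_admin_rpc` flag and `register_delegates` helper are hypothetical illustrations, not part of surfpool's actual API:

```rust
// Hypothetical config flag; surfpool's real config type is not shown here.
struct RpcConfig {
    enable_admin_rpc: bool,
}

// Returns the names of the RPC delegates that would be registered on `io`.
fn register_delegates(config: &RpcConfig) -> Vec<&'static str> {
    let mut delegates = vec!["full", "minimal"];
    // Only expose privileged admin methods when explicitly enabled,
    // so a default/production config never serves them.
    if config.enable_admin_rpc {
        delegates.push("admin");
    }
    delegates
}

fn main() {
    let dev = RpcConfig { enable_admin_rpc: true };
    let prod = RpcConfig { enable_admin_rpc: false };
    println!("dev:  {:?}", register_delegates(&dev));
    println!("prod: {:?}", register_delegates(&prod));
}
```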
Actionable comments posted: 1
🧹 Nitpick comments (2)
crates/core/src/rpc/full.rs (1)
941-949: Consider enhancing test error handling.
The test could benefit from more descriptive error handling. Instead of a generic panic message, consider providing more context about the expected and actual command received.
- _ => panic!("failed to receive transaction from mempool"),
+ cmd => panic!("Expected SimnetCommand::TransactionReceived, got {:?}", cmd),
crates/core/src/test_helpers.rs (1)
30-31: Consider using bounded channels for better resource management.
The use of unbounded channels (`crossbeam_channel::unbounded()`) could potentially lead to memory issues if the producers send messages faster than they can be consumed. While this might be acceptable in test scenarios, consider using bounded channels (`crossbeam_channel::bounded(size)`) to prevent potential memory exhaustion.
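The bounded-channel behavior looks like this; the sketch below uses std's `sync_channel`, which has the same blocking backpressure as `crossbeam_channel::bounded` (substituted here so the example needs no external crate):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Send `n` messages through a capacity-2 bounded channel and collect them.
// The producer blocks whenever 2 messages are already in flight, so a fast
// producer can never grow the buffer without bound.
fn run(n: u32) -> Vec<u32> {
    let (tx, rx) = sync_channel::<u32>(2);
    let producer = thread::spawn(move || {
        for i in 0..n {
            tx.send(i).expect("receiver dropped"); // blocks when buffer is full
        }
    });
    // Iteration ends once the producer finishes and drops its sender.
    let received: Vec<u32> = rx.iter().collect();
    producer.join().unwrap();
    received
}

fn main() {
    println!("{:?}", run(5));
}
```

Delivery order is preserved, so every message still arrives; the capacity only bounds how far the producer can run ahead.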
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- crates/core/src/rpc/full.rs (3 hunks)
- crates/core/src/test_helpers.rs (4 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: build
- GitHub Check: run_cargo_checks
🔇 Additional comments (4)
crates/core/src/rpc/full.rs (2)
377-410: LGTM! Enhanced error handling for transaction processing.
The changes improve error reporting by:
- Separating simulation and execution failures
- Including detailed error messages with log counts
- Providing structured error responses through `TransactionStatusEvent`
412-421: LGTM! Improved transaction confirmation status handling.
The changes provide clearer status handling through structured `TransactionStatusEvent::Success` events with appropriate confirmation levels (Processed, Confirmed, Finalized).
crates/core/src/test_helpers.rs (2)
20-20: LGTM!
The import of `SimnetCommand` aligns with the shift from mempool-based to command-based communication.
45-46: LGTM!
The initialization of the new command channels is consistent with the architectural changes described in the PR summary.
pub fn new_with_mempool(rpc: T, simnet_commands_tx: Sender<SimnetCommand>) -> Self { | ||
let mut setup = TestSetup::new(rpc); | ||
setup.context.mempool_tx = mempool_tx; | ||
setup.context.simnet_commands_tx = simnet_commands_tx; | ||
setup |
🛠️ Refactor suggestion
Rename method to reflect its current purpose.
The method name `new_with_mempool` is misleading as it no longer deals with mempool transactions but rather with simnet commands. Consider renaming it to better reflect its current purpose.
Apply this diff to improve clarity:
-pub fn new_with_mempool(rpc: T, simnet_commands_tx: Sender<SimnetCommand>) -> Self {
+pub fn new_with_simnet_commands(rpc: T, simnet_commands_tx: Sender<SimnetCommand>) -> Self {
let mut setup = TestSetup::new(rpc);
setup.context.simnet_commands_tx = simnet_commands_tx;
setup
}
📝 Committable suggestion
pub fn new_with_simnet_commands(rpc: T, simnet_commands_tx: Sender<SimnetCommand>) -> Self {
    let mut setup = TestSetup::new(rpc);
    setup.context.simnet_commands_tx = simnet_commands_tx;
    setup
}
Actionable comments posted: 1
🔭 Outside diff range comments (2)
crates/cli/src/cli/simnet/mod.rs (1)
415-422: ⚠️ Potential issue
Review logging of deployment status for sensitive information.
The debug log might expose sensitive deployment information. Consider sanitizing or redacting sensitive data before logging.
crates/core/src/simnet/mod.rs (1)
494-600: 🛠️ Refactor suggestion
Add transaction validation and rate limiting.
The transaction processing logic lacks input validation and rate limiting, which could lead to resource exhaustion.
Consider:
- Adding input validation for transactions
- Implementing rate limiting
- Adding metrics for monitoring
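One way to sketch the rate-limiting point above is a small token bucket over `std::time`; the capacity and refill rate are illustrative values, not ones proposed in this PR:

```rust
use std::time::Instant;

// Minimal token bucket: `capacity` is the allowed burst size, and tokens
// refill continuously at `refill_per_sec`.
struct RateLimiter {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl RateLimiter {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    // Returns true if one more transaction may be processed now.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        // Refill proportionally to elapsed time, capped at the burst capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Burst capacity 2, refilling one token per second: the third
    // back-to-back request is rejected.
    let mut limiter = RateLimiter::new(2.0, 1.0);
    let results: Vec<bool> = (0..3).map(|_| limiter.allow()).collect();
    println!("{results:?}");
}
```

A per-sender map of such buckets would bound how fast any one client can push transactions into the simnet loop.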
♻️ Duplicate comments (2)
crates/cli/src/cli/simnet/mod.rs (1)
246-247: ⚠️ Potential issue
Error handling with `.unwrap()`.
Currently calls `.unwrap()` after `.map_err(...)`. This can abruptly crash. Consider logging and early return instead.
- .map_err(|e| format!("{}", e))
- .unwrap();
+ .map_err(|e| format!("Failed to spawn file watcher: {}", e))?;
crates/core/src/simnet/mod.rs (1)
255-255: ⚠️ Potential issue
Secure the Admin RPC extension.
The admin RPC extension potentially grants privileged capabilities. Consider requiring authentication or configuration gating in production.
🧹 Nitpick comments (4)
crates/core/src/types.rs (3)
28-39: Add documentation and consider using a more flexible type for Entry values.
The `Collection` and `Entry` structs would benefit from documentation explaining their purpose and usage. Additionally, consider using a more flexible type for `Entry.values` to support different value types.
#[derive(Debug, Clone)]
+/// Represents a collection of entries with a unique identifier and name.
pub struct Collection {
    pub uuid: Uuid,
    pub name: String,
    pub entries: Vec<Entry>,
}
#[derive(Debug, Clone)]
+/// Represents an entry with a unique identifier and key-value pairs.
pub struct Entry {
    pub uuid: Uuid,
-   pub values: HashMap<String, String>,
+   pub values: HashMap<String, serde_json::Value>,
}
51-55: Remove commented code and document the enum.
The `SchemaDatasourceingEvent` enum has commented-out parameters in the `ApplyEntry` variant. Either implement these parameters or remove the comment.
#[derive(Debug, Clone, Deserialize, Serialize)]
+/// Events related to schema datasourcing operations.
pub enum SchemaDatasourceingEvent {
    Rountrip(Uuid),
-   ApplyEntry(Uuid, String), //, SubgraphRequest, u64),
+   ApplyEntry(Uuid, String),
}
166-171: Add documentation for SubgraphPluginConfig fields.
The `SubgraphPluginConfig` struct would benefit from documentation explaining the purpose of each field.
#[derive(Debug, Clone, Deserialize, Serialize)]
+/// Configuration for a subgraph plugin.
pub struct SubgraphPluginConfig {
+   /// Unique identifier for the plugin instance.
    pub uuid: Uuid,
+   /// IPC token for inter-process communication.
    pub ipc_token: String,
+   /// Request configuration for the subgraph.
    pub subgraph_request: SubgraphRequest,
}
crates/cli/src/cli/simnet/mod.rs (1)
88-88: Future-proof the subgraph configuration initialization.
The empty `SubgraphConfig` might cause issues when fields are added in the future. Consider adding a proper initialization method or builder pattern.
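A hedged sketch of the builder idea; the `subgraph_name` and `fields` members are hypothetical placeholders, since `SubgraphConfig` in this PR has no fields yet:

```rust
// Hypothetical fields: the real SubgraphConfig in this PR is empty.
#[derive(Clone, Debug, Default)]
struct SubgraphConfig {
    subgraph_name: String,
    fields: Vec<String>,
}

#[derive(Default)]
struct SubgraphConfigBuilder {
    subgraph_name: Option<String>,
    fields: Vec<String>,
}

impl SubgraphConfigBuilder {
    fn name(mut self, name: &str) -> Self {
        self.subgraph_name = Some(name.to_string());
        self
    }

    fn field(mut self, field: &str) -> Self {
        self.fields.push(field.to_string());
        self
    }

    // Validates required fields at build time instead of letting callers
    // construct a half-initialized config.
    fn build(self) -> Result<SubgraphConfig, String> {
        Ok(SubgraphConfig {
            subgraph_name: self.subgraph_name.ok_or("subgraph_name is required")?,
            fields: self.fields,
        })
    }
}

fn main() {
    let config = SubgraphConfigBuilder::default()
        .name("transfers")
        .field("amount")
        .build()
        .unwrap();
    println!("{config:?}");
}
```

New optional fields can then be added to the builder without breaking existing call sites.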
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (7)
- crates/cli/src/cli/mod.rs (1 hunks)
- crates/cli/src/cli/simnet/mod.rs (12 hunks)
- crates/cli/src/tui/simnet.rs (2 hunks)
- crates/core/Cargo.toml (2 hunks)
- crates/core/src/lib.rs (1 hunks)
- crates/core/src/simnet/mod.rs (14 hunks)
- crates/core/src/types.rs (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- crates/cli/src/cli/mod.rs
- crates/cli/src/tui/simnet.rs
- crates/core/src/lib.rs
- crates/core/Cargo.toml
⏰ Context from checks skipped due to timeout of 90000ms (2)
- GitHub Check: run_cargo_checks
- GitHub Check: build
🔇 Additional comments (1)
crates/core/src/types.rs (1)
150-151: Implement or document the empty SubgraphConfig struct.
The `SubgraphConfig` struct is currently empty. Either implement the required fields or document why it's intentionally empty.
#[derive(Clone, Debug, Default)]
+/// Configuration for subgraph functionality.
+/// TODO: Implement required configuration fields.
pub struct SubgraphConfig {}
let geyser_plugin_config_file = PathBuf::from("../../surfpool_subgraph_plugin.json");
let contents = "{\"name\": \"surfpool-subgraph\", \"libpath\": \"target/release/libsurfpool_subgraph.dylib\"}";
let result: serde_json::Value = json5::from_str(&contents).unwrap();
Remove hardcoded plugin configuration.
The plugin configuration is hardcoded with fixed paths and values. This should be configurable through proper configuration files.
- let geyser_plugin_config_file = PathBuf::from("../../surfpool_subgraph_plugin.json");
- let contents = "{\"name\": \"surfpool-subgraph\", \"libpath\": \"target/release/libsurfpool_subgraph.dylib\"}";
+ let geyser_plugin_config_file = config.plugin_config_path.get(0).ok_or_else(|| {
+ GeyserPluginManagerError::InvalidPluginPath
+ })?;
+ let mut file = File::open(geyser_plugin_config_file)?;
+ let mut contents = String::new();
+ file.read_to_string(&mut contents)?;
📝 Committable suggestion
let geyser_plugin_config_file = config.plugin_config_path.get(0).ok_or_else(|| {
    GeyserPluginManagerError::InvalidPluginPath
})?;
let mut file = File::open(geyser_plugin_config_file)?;
let mut contents = String::new();
file.read_to_string(&mut contents)?;
let result: serde_json::Value = json5::from_str(&contents).unwrap();
Summary by CodeRabbit
New Features
Refactor
Tests / Chores