Implement new Solana watcher #248
It appears that (2) cannot be implemented safely - there's no way to determine which program originated a log, and the caller could spoof the Wormhole program.
leoluk pushed a commit that referenced this issue on Jul 29, 2021:

> CPI part is untested. Commitment level is hardcoded to "finalized", but can be refactored to use both "committed" and "finalized" later. #248
> Change-Id: I5ae7711c306b33650367e6f7a417ab9d88753612

leoluk pushed a commit that referenced this issue on Jul 29, 2021:

> #248 Change-Id: Iae4b4d187e8d6728de9087e43c5f8a7b4d821540

leoluk pushed a commit that referenced this issue on Jul 29, 2021:

> #248 Change-Id: I093d619cb82b35b963447cf4a5dc18ef6be1a0f5

leoluk pushed a commit that referenced this issue on Jul 29, 2021:

> #248 Change-Id: I98abc6b4e635b8b5679fcda5342c90b0e5c96077

leoluk pushed a commit that referenced this issue on Jul 29, 2021:

> #248 Change-Id: Ib40b6016bda19e17c4700db6b39dbf340dfc0f4c
We implemented (1).
In Wormhole v1, we were able to get away with using program accounts as a queue: unprocessed lockups were added as new program accounts and deleted after they were either executed or stored on Solana.
This was a very nice primitive, since it allowed us to do a cheap getProgramAccounts call (at least with `--account-index program-id` on the RPC node) to get all pending accounts: https://github.com/certusone/wormhole/blob/ddf180a5eb467f1d0162f081d09c4abbe6b441e7/bridge/pkg/solana/client.go#L115
Unfortunately, this is not a workable approach for v2: we no longer use Solana for data availability, so the Solana contract won't know when a transfer is complete, and the queue approach therefore no longer works. getProgramAccounts is an O(n) operation, with n = the number of program accounts, or even n = all accounts on nodes without `--account-index`.

We discussed a couple of possible approaches:

1. Switch to a slot-based approach, using the getConfirmedBlock API to fetch every single slot, filter for Wormhole transactions client-side, and then request all the individual accounts. This is what Solana themselves recommend exchanges do. It requires the expensive `--enable-rpc-transaction-history` flag on the RPC node. Requesting a mainnet-beta block currently takes ~300ms and returns ~800KiB of output, which isn't great but also not terrible, and is in the same ballpark latency-wise as full scans.

2. Do (1), but emit log messages instead of creating accounts. Requires `--enable-cpi-and-log-storage` in addition to `--enable-rpc-transaction-history`. This may or may not be cheaper than using accounts, but most importantly, it makes the messages part of the transaction: they show up in getConfirmedBlock output, which avoids extra RPC calls, and they can be requested and replayed for any historic slot, depending on the node's retention config. Account state isn't currently persisted (see "Ability to query historic account state", solana-labs/solana#18197).

3. Keep using getProgramAccounts, but limit n by reducing rent, effectively implementing a sliding window. We can request a subset of all accounts within the window by doing a memcmp on a slot or timestamp prefix. This has the advantage of relying only on account state, which all nodes are guaranteed to hold, and requires only `--account-index program-id`; but once we're beyond the sliding window, it won't be possible to retrieve state at all. What's nice about this option is that it makes fewer RPC calls and operates only on Wormhole state, at the expense of potentially having to re-process a lot of txs within the sliding window.

CC @hendrikhofstadt @Reisen @calebrate
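For the slot-based approaches, the client-side filtering step is simple once a block has been fetched: walk the block's transactions and keep those whose account keys include the bridge program. A hedged Go sketch against the JSON shape returned by getConfirmedBlock with "json" encoding (struct fields trimmed to the minimum needed; the program IDs below are synthetic placeholders):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// confirmedBlock mirrors only the fields of a getConfirmedBlock ("json"
// encoding) response that the filter needs.
type confirmedBlock struct {
	Result struct {
		Transactions []struct {
			Transaction struct {
				Message struct {
					AccountKeys []string `json:"accountKeys"`
				} `json:"message"`
			} `json:"transaction"`
		} `json:"transactions"`
	} `json:"result"`
}

// filterProgramTxs returns the indices of transactions in the block whose
// account keys reference programID - the client-side filter from approach (1).
func filterProgramTxs(blockJSON []byte, programID string) ([]int, error) {
	var block confirmedBlock
	if err := json.Unmarshal(blockJSON, &block); err != nil {
		return nil, err
	}
	var matches []int
	for i, tx := range block.Result.Transactions {
		for _, key := range tx.Transaction.Message.AccountKeys {
			if key == programID {
				matches = append(matches, i)
				break
			}
		}
	}
	return matches, nil
}

func main() {
	// Tiny synthetic block: one matching and one unrelated transaction.
	sample := []byte(`{"result":{"transactions":[
		{"transaction":{"message":{"accountKeys":["Bridge111","Payer111"]}}},
		{"transaction":{"message":{"accountKeys":["Other111"]}}}]}}`)
	matches, err := filterProgramTxs(sample, "Bridge111")
	if err != nil {
		panic(err)
	}
	fmt.Println(matches) // prints "[0]"
}
```

In the real watcher, each matching transaction would then be decoded further (or, under approach (2), its log messages parsed) instead of just counted.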