
perf(tx-pool): reuse write lock to insert txs batch #12806

Merged (1 commit) on Dec 6, 2024

Conversation

@hai-rise (Contributor) commented Nov 23, 2024

Currently, each transaction insertion competes for a write lock on the pool. This can become heavily contended on performance chains that expect 100k+ new transactions every second. It's even more challenging when the block time is small -- canonical state changes also compete for the write lock several times a second.

Ideally, the RPC server could buffer transactions and insert them as a batch when the pool's write lock is unavailable, instead of competing for the write lock (and blocking anyway) on every insertion request.

I guess this PR is a baby step towards that 😅: it starts reusing a single write lock in PoolInner::add_transactions.
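The idea can be sketched with a toy pool. The type and method names below mirror the PR's description but are illustrative stand-ins, not reth's actual PoolInner:

```rust
use std::sync::RwLock;

// Hypothetical minimal pool; the type and field are illustrative,
// not reth's actual `PoolInner`.
struct PoolInner {
    txs: RwLock<Vec<u64>>,
}

impl PoolInner {
    // Before: every transaction competes for the write lock separately.
    fn add_transactions_per_tx(&self, txs: Vec<u64>) {
        for tx in txs {
            self.txs.write().unwrap().push(tx); // one lock acquisition per tx
        }
    }

    // After: acquire the write lock once and insert the whole batch.
    fn add_transactions(&self, txs: Vec<u64>) {
        let mut pool = self.txs.write().unwrap(); // single acquisition
        for tx in txs {
            pool.push(tx);
        }
    }
}

fn main() {
    let pool = PoolInner { txs: RwLock::new(Vec::new()) };
    pool.add_transactions_per_tx(vec![1, 2]);
    pool.add_transactions(vec![3, 4, 5]);
    println!("{}", pool.txs.read().unwrap().len()); // 5
}
```

Holding the guard across the loop trades per-item lock fairness for far fewer acquisitions, which matters most when many writers contend.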

@mattsse (Collaborator) left a comment


thanks, I think this makes sense,
even inserting a large number of transactions shouldn't block for too long; I can't remember what the numbers are, but it should only be in the µs range

pedantic doc nit

@mattsse (Collaborator) commented Nov 23, 2024

For the RPC part, this could be a dedicated task that receives all new txs, batches them, and submits them in batches.
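A minimal sketch of such a batching task, using a plain std channel and thread. All names here are hypothetical; reth's RPC layer is async and would use something like a tokio channel instead:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical batcher task: blocks for the first tx, then drains every
// other queued tx into the same batch before submitting, so the pool's
// write lock would be taken once per batch rather than once per tx.
fn run_batcher(n_txs: u64) -> u64 {
    let (tx, rx) = mpsc::channel::<u64>();

    let batcher = thread::spawn(move || {
        let mut total = 0u64;
        while let Ok(first) = rx.recv() {
            let mut batch = vec![first];
            // Opportunistically drain whatever else is already queued.
            while let Ok(next) = rx.try_recv() {
                batch.push(next);
            }
            // A real implementation would call pool.add_transactions(batch) here.
            total += batch.len() as u64;
        }
        total
    });

    for i in 0..n_txs {
        tx.send(i).unwrap();
    }
    drop(tx); // close the channel so the batcher exits

    batcher.join().unwrap()
}

fn main() {
    println!("{}", run_batcher(100)); // 100
}
```

The batch sizes depend on timing, but every submitted tx ends up in exactly one batch.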

Review comment on the diff:

        sidecar: sidecar.clone(),
    },
    propagate: true,
    test_pool.add_transactions(
@hai-rise (Contributor, Author) commented Nov 23, 2024

I changed the test to use the pub function so it verifies that the pool enforces the size limit in "real usage". The previous test exercised two private functions in conjunction, which gave a weaker guarantee: removing the discard step from add_transactions would still have let the test pass.

@hai-rise (Contributor, Author):

> for the rpc part, this could be a dedicated task that receives all new txs, batches them and submits them in batches

I can't work on this immediately, so I created #12811. Maybe someone familiar with the RPC codebase can tackle it better!

@hai-rise (Contributor, Author):

@mattsse Kind reminder that this is available for review again 🙏.

Review comment on the diff:

    if discarded.is_empty() {
        return added
    }
    if !discarded.is_empty() {
Collaborator:

why this change?

Contributor (Author):

This is just a refactor; the semantics are unchanged.

Previously:

  • If discarded is empty, return added early
  • Otherwise, make some changes to added
  • Return added at the end

Now:

  • If discarded is not empty, make those changes to added
  • Return added at the end

It also groups the discard handling in a self-explanatory if scope 🙏.
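The before/after shapes can be sketched as two equivalent functions. The names and the discard step below are illustrative stand-ins, not reth's actual code:

```rust
// Before: early return when nothing was discarded.
fn finalize_before(mut added: Vec<u64>, discarded: &[u64]) -> Vec<u64> {
    if discarded.is_empty() {
        return added;
    }
    // Stand-in for the real discard handling.
    added.retain(|tx| !discarded.contains(tx));
    added
}

// After: the discard handling is grouped in one `if` scope,
// with a single return point at the end.
fn finalize_after(mut added: Vec<u64>, discarded: &[u64]) -> Vec<u64> {
    if !discarded.is_empty() {
        added.retain(|tx| !discarded.contains(tx));
    }
    added
}

fn main() {
    let added = vec![1, 2, 3];
    let discarded = vec![2];
    assert_eq!(
        finalize_before(added.clone(), &discarded),
        finalize_after(added, &discarded)
    );
    println!("equivalent"); // equivalent
}
```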

Comment on lines -886 to -894:

    // delete any blobs associated with discarded blob transactions
    self.delete_discarded_blobs(discarded.iter());
Collaborator:

this logic has been removed now, right?

Contributor (Author):

It's been moved to its only call site (add_transactions), inside the if !discarded.is_empty() check.
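A toy sketch of that move, with the blob cleanup now living inside the call site's if !discarded.is_empty() branch. The types and fields are illustrative, not reth's:

```rust
use std::collections::HashSet;

// Hypothetical pool that tracks blob sidecars keyed by tx id.
struct Pool {
    blobs: HashSet<u64>,
}

impl Pool {
    fn delete_discarded_blobs<'a>(&mut self, discarded: impl Iterator<Item = &'a u64>) {
        for tx in discarded {
            self.blobs.remove(tx);
        }
    }

    // The single call site: blob cleanup only runs when something was discarded.
    fn add_transactions(&mut self, discarded: Vec<u64>) {
        if !discarded.is_empty() {
            // delete any blobs associated with discarded blob transactions
            self.delete_discarded_blobs(discarded.iter());
        }
    }
}

fn main() {
    let mut pool = Pool { blobs: [1u64, 2, 3].into_iter().collect() };
    pool.add_transactions(vec![2]);
    println!("{}", pool.blobs.len()); // 2
}
```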

@mattsse (Collaborator) left a comment

gotcha, thanks for walking me through this,

makes sense; the only location where we have larger batches is in p2p, and inserting + discarding should be quite fast even for multiple txs

@mattsse (Collaborator) commented Dec 6, 2024

If you want me to take care of conflicts, please open PRs from non-org forks :)

pending conflicts

@hai-rise (Contributor, Author) commented Dec 6, 2024

@mattsse No worries, let us fix all the conflicts (done for this PR) 🙏.

@mattsse mattsse added this pull request to the merge queue Dec 6, 2024
@mattsse mattsse added the A-tx-pool Related to the transaction mempool label Dec 6, 2024
Merged via the queue into paradigmxyz:main with commit 634db30 Dec 6, 2024
41 checks passed
@hai-rise hai-rise deleted the reuse-write-lock branch December 6, 2024 13:43