EPIC: CacheKV Improvements #14377
I have an idea for the other two items, which is just to copy the cache store itself.

Context

The current implementation of nested cache stores has most operations scaling with the depth of nesting. Nested performance matters because, first of all, there are at least two layers of nesting in tx execution. Another important use case for nested caches is better integration with the EVM: the EVM supports message calls that can revert on exception. Currently ethermint supports this by adopting the journal log approach from go-ethereum itself, but it prevents a more natural integration with cosmos native functionality in the form of precompiled contracts [1]. In this case we need an efficient cache store with an arbitrary depth of nesting, so it's important to make nested cache stores operate cheaply.

Copy-On-Write BTree

We have already migrated to tidwall/btree to back the cache store, and we use its copy-on-write feature for safe iteration, but there is more potential in it. First of all, let's do this [2] to unify the cache store with a single btree. Then we can support a cheap Clone:

// Clone the cache store. This is a copy-on-write operation and is very fast because
// it only performs a shadowed copy.
func (store *Store) Clone() *Store {
	return &Store{
		cache:  store.cache.Copy(),
		parent: store.parent,
	}
}

So instead of doing:

msCache := ctx.ms.CacheMultiStore()
...
msCache.Get(key) // it recursively calls into each layer of the cache stack when not found in cache.
...
msCache.Write() // significant overhead

we can do:

cacheMS := ctx.ms.Clone()
...
cacheMS.Get(key) // it only calls the uncached parent when not found in cache.
...
ctx = ctx.WithMultiStore(cacheMS) // zero overhead

Consequences

Positives

Negatives

Footnotes
@yihuang I love this idea but I have concerns about:
The fact that we cannot write to the parent seems like a design flaw to me. I think if we can devise a way in which we can still write to the original root/parent store, then this approach would be sound. WDYT?
It seems almost all the use cases for branching out a cache store don't use the original one simultaneously.
Put it another way, we can support something like this:
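The code example from this comment didn't survive extraction. As a stand-in, here is a minimal sketch of the kind of closure-based branching being discussed; the `RunAtomic` name appears in the PR description further below, but the `BranchStore` interface and the exact signature are assumptions for illustration:

// Hypothetical sketch only; these names are not guaranteed to match the real API.
type BranchStore interface {
	Clone() BranchStore  // cheap copy-on-write snapshot of the cache
	Restore(BranchStore) // adopt the state of a previously cloned branch
}

// RunAtomic branches the store, runs fn against the branch, and adopts the branch
// only if fn succeeds, so nothing ever has to be written back into the original.
func RunAtomic(store BranchStore, fn func(BranchStore) error) error {
	branch := store.Clone()
	if err := fn(branch); err != nil {
		return err // the branch is simply dropped; the original store is untouched
	}
	store.Restore(branch) // swap in the branch's cache, another cheap copy
	return nil
}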
Seems reasonable to me :)
Closes: cosmos#14377

Solution:
- Unify cachekv's cache content with a single tidwall/btree.
- Use the copy-on-write support of tidwall/btree to implement cheap `Clone`/`Restore` methods in the cache store.
- Add a `RunAtomic` method to Context to provide more efficient store branching; it doesn't notify the tracing module because there is no longer a `Write` step as before.
- API breaking: Context always holds a `CacheMultiStore` instead of a `MultiStore`.
- Refactor runTx and other module logic to use the new `RunAtomic` method instead of `CacheContext`.
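At the call-site level, the last two points roughly swap the explicit branch-and-write-back pattern of `CacheContext` for a closure. A sketch, assuming `RunAtomic` takes a `func(sdk.Context) error` (the exact signature may differ; `doStuff` is a placeholder for arbitrary module logic):

// Before: branch explicitly, then flush the branch back with writeCache,
// which sorts and copies every dirty key into the parent store.
cacheCtx, writeCache := ctx.CacheContext()
if err := doStuff(cacheCtx); err == nil {
	writeCache()
}

// After (sketch): the branch is a copy-on-write clone that is adopted
// atomically when the closure returns nil, so there is no write-back pass.
if err := ctx.RunAtomic(func(ctx sdk.Context) error {
	return doStuff(ctx)
}); err != nil {
	return err
}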
We're rocking this in Polaris: https://github.com/berachain/polaris/blob/main/store/snapmulti/store.go
Summary
cachekv is an important component in cosmos-sdk execution; we have identified several performance and thread-safety improvement opportunities.
Problem Definition
Work Breakdown
- In the nested case, each layer of cache store caches the kv-pairs it has read.
  Solution: only cache clean pairs in the lowest layer?
- The `Write` method in nested cases: `Write` always collects the keys and sorts them before writing them into the parent store one by one, but if the parent store is also a cache store, this sorting effort is wasted.
  Solution: support a batch `Write` method in the store interface, and let the underlying store decide whether it wants to sort the change sets (see the sketch after this list).
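For the batch `Write` item, a rough sketch of what such an interface could look like; `KVPair` and `BatchWriter` are illustrative names, not an agreed-upon API:

// Illustrative only. Passing the whole change set in one call lets the receiving
// store decide whether sorting is needed: a persistent store can sort before
// committing, while a parent cache store can merge the pairs as-is.
type KVPair struct {
	Key   []byte
	Value []byte // nil means the key was deleted
}

type BatchWriter interface {
	WriteBatch(changeSet []KVPair)
}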