
Optimized HashFieldExpireExecute #2851

Conversation

nimanikoo

This PR improves the performance of the HashFieldExpireExecute method by reducing memory allocations.

Changes include:

  • Using a fixed-size array instead of List&lt;RedisValue&gt;.
  • Using ArrayPool for arrays with more than 16 elements to reduce GC pressure.
  • Keeping the code simple and clear.

These changes can reduce memory allocations and improve execution speed in high-load scenarios.
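
For reference, the described change follows roughly this shape (a minimal, self-contained sketch: the names, the `string` element type, and the helper itself are illustrative stand-ins, not the library's actual internals, which work with `RedisValue`):

```csharp
using System;
using System.Buffers;

static class HashFieldExpireSketch
{
    private const int PoolThreshold = 16;

    // Builds the command argument array. Small inputs get one exact-sized
    // allocation; larger inputs rent a buffer from the shared pool.
    public static string[] BuildArgs(string key, long ttlMs, string[] fields)
    {
        int count = fields.Length + 2;
        if (count <= PoolThreshold)
        {
            var values = new string[count]; // single exact allocation
            values[0] = key;
            values[1] = ttlMs.ToString();
            fields.CopyTo(values, 2);
            return values;
        }

        string[] buffer = ArrayPool<string>.Shared.Rent(count);
        try
        {
            buffer[0] = key;
            buffer[1] = ttlMs.ToString();
            fields.CopyTo(buffer, 2);

            // A rented array is usually oversized, so the values still have
            // to be copied into a right-sized finalValues array -- this is
            // the extra copy the review below calls out.
            var finalValues = new string[count];
            Array.Copy(buffer, finalValues, count);
            return finalValues;
        }
        finally
        {
            ArrayPool<string>.Shared.Return(buffer, clearArray: true);
        }
    }
}
```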

…ove performance

- Replace List<RedisValue> with a fixed-size array.
- Use ArrayPool for arrays larger than 16 elements to lower GC pressure.
- Simplify the code without adding extra complexity.
@atakavci
Contributor

hi @nimanikoo ,
thanks for working on this and helping us improve performance 🙏

let me get a better understanding of it: how does the array pool help here? Eventually it still needs to allocate a new array in every use of the pool. To me it looks like one more loop to place the values into the finalValues array at the end of each HashFieldExpireExecute call.

@mgravell
Collaborator

Indeed, as written this has no net gain (but pays the cost of talking to the pool); to use the pool properly we'd need to avoid allocating finalValues (L446 as-written). There is no benefit in using the pool just to copy the pool data to a new array each time - we'd need the pool use to span the overall command lifetime. My concern then, however, becomes async - we'd need to think very carefully about when we can recycle the pooled data. I think this scenario is more nuanced than this PR wants, sadly.
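
To make the lifetime concern concrete, here is a hypothetical sketch (illustrative names only, not library code) of what "pool use spanning the overall command lifetime" implies once the call is async:

```csharp
using System;
using System.Buffers;
using System.Threading.Tasks;

static class PooledLifetimeSketch
{
    // Stand-in for a pipelined command that keeps reading the buffer
    // until the server has replied.
    private static async Task<long> ExecuteAsync(string[] args)
    {
        await Task.Delay(10); // args is still live across this await
        return args.Length;
    }

    public static async Task<long> SendAsync(string key, string[] fields)
    {
        string[] buffer = ArrayPool<string>.Shared.Rent(fields.Length + 1);
        try
        {
            buffer[0] = key;
            fields.CopyTo(buffer, 1);
            // The buffer may only go back to the pool after this await
            // completes; returning it any earlier would hand a live array
            // to other renters. Pipelined or fire-and-forget commands make
            // the safe recycle point even harder to pin down.
            return await ExecuteAsync(buffer);
        }
        finally
        {
            ArrayPool<string>.Shared.Return(buffer, clearArray: true);
        }
    }
}
```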

As an aside: note that as part of unrelated work, we have some things coming "soon, pinky-promise" that will enable custom command interpreters; hopefully that should give a much more efficient way of running this scenario, with (for example) better control of the

@atakavci
Contributor

right, use of the pool comes with its own price as well! aside from pool usage, i think replacing the lists is fine; an array (expected to hold a small number of items) is our best bet for raw performance in this case. what do you think @mgravell ?

@mgravell
Collaborator

yes, a correct-sized array is the best we can do here unless we either a: figure out the async lifetime problem, or b: use per-scenario custom types so we can just use fields; both of those are much bigger than this PR; I think I'm going to close this out for now; if I've misunderstood something @nimanikoo , please prod me and I can look again!

@mgravell closed this Feb 27, 2025
@nimanikoo
Author

nimanikoo commented Feb 27, 2025

Hi everyone,

Thanks for the thoughtful feedback.

I initially explored using ArrayPool to reduce memory allocations, as reusing arrays can cut down on GC pressure (see https://learn.microsoft.com/en-us/dotnet/api/system.buffers.arraypool-1?view=net-9.0). However, I agree that if we need to copy the pooled array to a new final array on each call, we add extra overhead that might cancel out the benefits—especially when the typical array size is small.

In this scenario, a fixed-size array approach is simpler and avoids the extra copy step. It reduces allocations compared to using a List and keeps the code straightforward. Given the expected number of fields is low and considering async safety, sticking with a simple fixed-size array seems to be the best trade-off.
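
Concretely, the simpler shape is just this (again with illustrative names, not the actual library code):

```csharp
// Sketch of the fixed-size approach: one exact-sized allocation,
// no pool bookkeeping, no final copy. Names are illustrative.
static string[] BuildArgsSimple(string key, long ttlMs, string[] fields)
{
    var values = new string[fields.Length + 2];
    values[0] = key;
    values[1] = ttlMs.ToString();
    fields.CopyTo(values, 2);
    return values;
}
```

Compared with building up a List and converting it at the end, this avoids both the list's internal resizing and the final copy.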

I’m happy to revisit more advanced pooling strategies later if profiling indicates that this part becomes a performance bottleneck. Let me know your thoughts on proceeding with this simpler approach for now.

Best regards.

@mgravell @atakavci
