Performance optimizations of various statistics #201
> Make sure you account for the plan to add the median (see #181).

> After some considerations, the median seems to be implementable as follows: …

> An AVL tree does all of the operations above in … I used the …

> Note that I'm not counting the initial sorting in the performance (which is …). Maybe I'm overcomplicating things, but it seems to me that we need a tree-like structure for this; at first I've considered using the …
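For illustration only, here is a minimal sketch of the kind of tree-like structure hinted at in the snippets above, using the third-party `sortedcontainers.SortedList` (which gives O(log n) insert/remove and indexed access, much like an order-statistic AVL tree) in place of a hand-rolled tree; the class and method names are hypothetical and not part of the codebase:

```python
# Hypothetical sketch: maintain a running median while points are moved one at a time.
# SortedList (third-party `sortedcontainers` package) gives O(log n) insert/remove and
# O(log n) indexed access, similar to an order-statistic (AVL-like) tree.
from sortedcontainers import SortedList


class RunningMedian:
    def __init__(self, values):
        self._values = SortedList(values)  # one-time O(n log n) build (the initial sort)

    def replace(self, old, new):
        # Moving a point = remove its old value and insert the new one: O(log n).
        self._values.remove(old)
        self._values.add(new)

    def median(self):
        n = len(self._values)
        mid = n // 2
        if n % 2:
            return self._values[mid]
        return 0.5 * (self._values[mid - 1] + self._values[mid])
```

With something like this, moving a point costs two O(log n) operations and the median is an O(log n) lookup, so the O(n log n) sort is paid only once up front.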
Is your feature request related to a problem? Please describe.
Not really a problem, more like a potential optimization (I haven't worked out the details to see if it actually works).
So, if I understand correctly how the algorithm works under the hood, we move the points in the dataset one at a time and then, at each iteration, compute the mean, the standard deviation, and the correlation coefficient of the new dataset.
One thing that stands out performance-wise is that we currently use all of the points to compute the statistics at each step, which seems a bit wasteful.
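To make the cost concrete, here is a rough, self-contained schematic of that per-iteration recomputation (illustrative NumPy only, not the project's actual code):

```python
# Rough schematic of the current behaviour as I understand it (not the real code):
# every time one point is nudged, all statistics are recomputed from all n points.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.normal(size=500)

for _ in range(1_000):
    i = rng.integers(len(x))
    x[i] += rng.normal(scale=0.01)      # move a single point
    mean_x, std_x = x.mean(), x.std()   # O(n) pass
    corr_xy = np.corrcoef(x, y)[0, 1]   # another O(n) pass
```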
Describe the solution you'd like
Instead of computing the statistics of the whole dataset, which requires at least iterating over all $n$ points (and even more operations for the stdev/corrcoef), we can use the fact that we are only moving one point and rewrite the new statistics in terms of the old ones plus a perturbation. For instance, for the new value of the mean we get:

$$\bar{x}' = \bar{x} + \frac{\delta}{n}$$

where $\delta = x'_i - x_i$ and $n$ is the number of points in the dataset. Analogous formulas can be derived for the variance, which is the square of the stdev anyway (some tweaking of the denominators may be needed to account for the Bessel correction):

$$\sigma'^2 = \sigma^2 + \frac{\delta}{n}\left(x'_i + x_i - \bar{x} - \bar{x}'\right)$$

and probably for the correlation coefficient (or better, its square) as well. This would allow us to compute all of the statistics in basically $O(1)$ time, instead of $O(n)$ or larger.
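As a quick sanity check of the updates above, here is a minimal, self-contained sketch (population statistics; the function name is illustrative, not an existing API):

```python
# Minimal sketch of the O(1) perturbation updates described above (population
# statistics; scale the variance by n / (n - 1) if the Bessel correction is wanted).
import numpy as np


def move_point(mean, var, n, old, new):
    """Update mean and variance in O(1) when a single point changes from `old` to `new`."""
    delta = new - old
    new_mean = mean + delta / n
    new_var = var + (delta / n) * (new + old - mean - new_mean)
    return new_mean, new_var


# Compare against a full O(n) recomputation with NumPy.
rng = np.random.default_rng(0)
data = rng.normal(size=1_000)
mean, var = data.mean(), data.var()
old, new = data[42], data[42] + 0.1
mean, var = move_point(mean, var, len(data), old, new)
data[42] = new
assert np.isclose(mean, data.mean()) and np.isclose(var, data.var())
```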
There's at least one problem which I haven't worked out yet: is this numerically stable? Since numerical accuracy is paramount for the code to work properly, if the above incurs a large loss of accuracy it's not very useful; but if it is stable, it could be worth exploring an implementation.
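One cheap way to bound the damage, assuming updates like the sketch above, would be to periodically recompute the statistics from scratch and resynchronise whenever the accumulated drift gets too large; the constants and names below are purely illustrative:

```python
# Illustrative drift guard: every RESYNC_EVERY moves, recompute exactly and resync.
import numpy as np

RESYNC_EVERY = 10_000  # hypothetical tuning knob
TOL = 1e-9             # hypothetical tolerance on the accumulated rounding error


def maybe_resync(step, data, mean, var):
    if step % RESYNC_EVERY == 0:
        exact_mean, exact_var = data.mean(), data.var()
        if abs(mean - exact_mean) > TOL or abs(var - exact_var) > TOL:
            return exact_mean, exact_var  # throw away the drifted running values
    return mean, var
```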
Some references that could be of use (regarding both the computation and numerical stability):
Describe alternatives you've considered
None.
Additional context
None.