Fixing dynamic re-work and trend from elections #1968
…now tested elsewhere
Nice, I like the concept; that'll be a lot easier to manage, and the precision loss is immaterial.
… the extreme one-off difficulties
So fad8c68 fixed the issues observed on beta where the active difficulty was extremely high. Since the max multiplier can easily be up in the tens of thousands even while computing work at the base threshold, the average was biased towards these values. The median is the correct statistic here and fixed the issue.
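To illustrate why the median fixes this (values below are made up for illustration, not beta-network data), a single huge outlier multiplier drags the mean far from the typical value while the median stays put:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative only: the median of observed multipliers is robust to the
// extreme outliers that biased the average.
double median (std::vector<double> values)
{
	std::sort (values.begin (), values.end ());
	auto mid = values.size () / 2;
	return values.size () % 2 ? values[mid] : (values[mid - 1] + values[mid]) / 2.0;
}
```

For example, for multipliers `{ 0.9, 1.0, 1.05, 1.1, 20000.0 }` the mean is above 4000 while the median is 1.05.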
Closes #1963. It's been cherry-picked into this PR.
Discussion

nano::difficulty::to_multiplier and nano::difficulty::from_multiplier use the network threshold as the second argument. Where could we provide these methods using a default for that argument?

Explanation of PR
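One possible shape for this (a hypothetical sketch, not the node's actual code: the constant name is an assumption, and 0xffffffc000000000 is used here as the assumed base publish threshold):

```cpp
#include <cassert>
#include <cstdint>

// Assumed network-wide base threshold constant; the real identifier and
// where it lives in the codebase would need to be decided.
constexpr uint64_t network_threshold = 0xffffffc000000000;

// Defaulting the second argument lets most callers omit it.
double to_multiplier (uint64_t difficulty, uint64_t base = network_threshold)
{
	// (0 - x) on uint64_t computes (1 << 64) - x via unsigned wraparound.
	return static_cast<double> (0 - base) / static_cast<double> (0 - difficulty);
}
```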
Adds a nano::difficulty namespace in lib/numbers.

Difficulty arithmetic is a pain: it must be done in "multiplier space", that is, we must first transform the difficulty to a multiplier, and only then can we perform sums/divisions/etc.

To get a difficulty into multiplier space:

multiplier = ((1 << 64) - base_difficulty) / ((1 << 64) - difficulty)

In C++, (1 << 64) - value is the same as (-value) if value is a uint64_t, by taking advantage of unsigned wraparound.

Note that the difficulty being in the denominator is what makes it non-trivial to perform arithmetic in value space: the expression (a/x + a/y + a/z) cannot be simplified further.

Once in multiplier space, we can perform averages / trend analysis (like the active difficulty trend), and at the end go back to a difficulty by inverting the previous operation. Precision loss in going back and forth is minimal and irrelevant in most (all?) use cases.
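A minimal sketch of the round trip described above (an illustration of the formula, not necessarily the actual nano::difficulty implementation or its signatures):

```cpp
#include <cassert>
#include <cstdint>

// multiplier = ((1 << 64) - base) / ((1 << 64) - difficulty).
// (0 - x) on uint64_t computes (1 << 64) - x via unsigned wraparound.
double to_multiplier (uint64_t difficulty, uint64_t base)
{
	return static_cast<double> (0 - base) / static_cast<double> (0 - difficulty);
}

// Inverse of the above; any precision loss comes from the double rounding.
uint64_t from_multiplier (double multiplier, uint64_t base)
{
	// Cast to uint64_t before negating: converting a negative double
	// directly to an unsigned type would be undefined behavior.
	return 0 - static_cast<uint64_t> (static_cast<double> (0 - base) / multiplier);
}
```

A difficulty equal to the base maps to a multiplier of 1.0, and applying from_multiplier to a to_multiplier result recovers the original difficulty (up to the rounding noted above).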