Behavioural test github workflow update #957
Conversation
🔥
And it should have 12 entries for usage.timeline
And it should have at least one non-zero installs in usage.timeline
And it should have non-zero values for usage.stats
# And it should have 10 entries for maintenance.timeline
I know there is a related issue open for this (#958), but overall I am not sure we should be hardcoding these numbers. If we define a limit, it seems sufficient to check that the count is <= that limit, as in the sketch below.
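For reference, a minimal sketch of what a limit-based step could look like, modelled on the pytest-bdd step definitions elsewhere in this PR; the step wording and the parsed limit parameter are illustrative, not actual steps in the suite:

from pytest_bdd import parsers, then

@then(parsers.parse('it should have at most {limit:d} entries for usage.timeline'))
def verify_usage_timeline_within_limit(context, limit):
    timeline = context['usage']['timeline']
    # Hypothetical: accept any length up to the limit instead of a hardcoded count.
    assert len(timeline) <= limit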
I mentioned this in #958 (comment), but I think it is relevant to bring up how we should approach filling in default values for missing data going forward. Currently it is done on the backend for usage metrics and on the frontend for maintenance metrics. As we get clarity on that, I think it would address some of the confusion and expectations around these behavioral tests as well. This feedback applies to many parts of the PR, pretty much the areas @richaagarwal has already flagged, so I won't copy and paste the same comment elsewhere.
@richaagarwal, as @klai95 mentioned, the check against the limit is in place only because the current expectation is to pad the response for missing entries. We can also update this check if that expectation changes.
Note: the hardcoded value should be 12. I was testing against 10, and it got committed accidentally.
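As a rough illustration of the padding expectation described above (a sketch only; the entry shape and field names are assumptions, not the actual backend code):

def pad_timeline(entries, expected_length=12):
    # Fill the timeline out to a fixed number of entries, appending zero-install
    # placeholders for missing months, so consumers always see a full timeline.
    padded = list(entries)
    while len(padded) < expected_length:
        padded.append({'timestamp': None, 'installs': 0})
    return padded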
And it should have non-zero values for usage.stats
# And it should have 10 entries for maintenance.timeline
And it should have at least one non-zero commits in maintenance.timeline
And it should have non-zero values for maintenance.stats

Scenario: metrics api for invalid plugin
These tests make me think we should re-think what we return for nonexistent plugins. Perhaps it could be tackled as part of #947.
And it should have 5 entries for usage.timeline
And it should have at least one non-zero installs in usage.timeline
And it should have non-zero values for usage.stats
# And it should have 5 entries for maintenance.timeline
Similar feedback as this comment: https://github.com/chanzuckerberg/napari-hub/pull/957/files#r1144121074. If the limit is 5, we should check for <= 5.
And it should have 15 entries for usage.timeline
And it should have all zero installs in usage.timeline
And it should have zero values for usage.stats
# And it should have 15 entries for maintenance.timeline
Similar feedback as this comment: https://github.com/chanzuckerberg/napari-hub/pull/957/files#r1144122290.
@then(parsers.parse('it should have non-zero values for usage.stats'))
def verify_it_has_non_zero_usage_stats(context):
    stats = context['usage']['stats']
    assert stats['installs_in_last_30_days'] >= 0
Just curious: in which case would stats['installs_in_last_30_days'] >= 0 be false? It looks to me like installs would always be a number that is at least 0.
Yes, but since we are checking a specific plugin, we cannot confirm that the plugin has had installs in the last 30 days.
This highlights that we might have to revisit what we are testing; one possible relaxation is sketched below.
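A sketch of such a relaxation, in the style of the existing step definitions (the timeline entry field name is an assumption): require at least one non-zero entry anywhere in the timeline instead of installs in the last 30 days:

from pytest_bdd import parsers, then

@then(parsers.parse('it should have at least one non-zero installs in usage.timeline'))
def verify_some_installs_in_timeline(context):
    timeline = context['usage']['timeline']
    # Hypothetical: pass if any timeline entry records at least one install.
    assert any(entry.get('installs', 0) > 0 for entry in timeline)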
Correct me if I am wrong, but I thought we had decided to close issues manually after releases, instead of closing the issue as the PR gets merged in.
@klai95 You can find the instructions on linking or closing issues with PRs in the pull request template: https://raw.githubusercontent.com/chanzuckerberg/napari-hub/main/.github/pull_request_template.md. In this case, the tests are updated immediately, so no deployment is required to close the issue.
Nicely done! No more blocking feedback. The only thing I'd add is that we should document, or keep in mind, that these behavioral tests will continuously need to be updated as the implementation of the API endpoints changes.
Description
Closes #818
Covers the usage and maintenance sections of the metrics API response.