
Behavioural test github workflow update #957

Merged
merged 6 commits into main from bdd-workflow-update
Mar 24, 2023

Conversation

manasaV3
Collaborator

@manasaV3 manasaV3 commented Mar 21, 2023

Description

Closes #818

  • Fixes the broken GitHub workflow that executes the integration tests, and runs it against the dev environment
  • Updates the metrics tests to use usage and maintenance

@manasaV3 manasaV3 added the improvement Release Label: Used for categorizing improvements in automated CI release notes label Mar 21, 2023
@manasaV3 manasaV3 marked this pull request as ready for review March 21, 2023 21:14
@manasaV3 manasaV3 changed the title from "Bdd workflow update" to "Behavioural workflow update" Mar 21, 2023
@manasaV3 manasaV3 changed the title from "Behavioural workflow update" to "Behavioural test github workflow update" Mar 21, 2023
codemonkey800
codemonkey800 previously approved these changes Mar 21, 2023
Collaborator

@codemonkey800 codemonkey800 left a comment


🔥

And it should have 12 entries for usage.timeline
And it should have at least one non-zero installs in usage.timeline
And it should have non-zero values for usage.stats
# And it should have 10 entries for maintenance.timeline
Collaborator


I know there is a related issue open for this (#958), but overall I am not sure we should be hardcoding these numbers. If we define a limit, it seems sufficient to check that the count is <= that limit.
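The bounded check suggested here could look roughly like the following plain-Python sketch. The function name, the dict-shaped test context, and the limit value are illustrative assumptions, not the PR's actual step definitions:

```python
# Hypothetical sketch of checking <= a limit instead of an exact entry count.
# The context shape mirrors the pytest-bdd snippet elsewhere in this PR.

def verify_timeline_within_limit(context: dict, section: str, limit: int) -> int:
    """Assert the timeline has at most `limit` entries and return the count."""
    timeline = context[section]['timeline']
    assert len(timeline) <= limit, (
        f"{section}.timeline has {len(timeline)} entries, expected at most {limit}"
    )
    return len(timeline)

# A padded-but-short response still passes the bounded check:
sample = {'usage': {'timeline': [{'timestamp': i, 'installs': 0} for i in range(10)]}}
verify_timeline_within_limit(sample, 'usage', 12)  # returns 10
```

Unlike an exact-count assertion, this version keeps passing whether or not the backend pads the response up to the limit.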

Contributor

@klai95 klai95 Mar 22, 2023


I mentioned it in #958 (comment), but I think it is relevant to bring up how we should approach filling in default values for missing data moving forward. Currently, it is done on the backend for usage metrics, and on the frontend for maintenance metrics. As we get clarity on that, I think it would address some of the confusion / expectations for these behavioral tests as well. This feedback applies to many parts of the PR, pretty much the areas @richaagarwal has already flagged, so I won't copy and paste the same comment elsewhere.

Collaborator Author


@richaagarwal, as @klai95 mentioned, the check against the limit is in place only because the current expectation is to pad the response for missing entries. We can update this check if that expectation changes.

Note: the hardcoded value should be 12. I was testing against 10, and it got committed accidentally.

And it should have non-zero values for usage.stats
# And it should have 10 entries for maintenance.timeline
And it should have at least one non-zero commits in maintenance.timeline
And it should have non-zero values for maintenance.stats

Scenario: metrics api for invalid plugin
Collaborator

@richaagarwal richaagarwal Mar 22, 2023


These tests make me think we should re-think what we return for nonexistent plugins. Perhaps it could be tackled as part of #947.

And it should have 5 entries for usage.timeline
And it should have at least one non-zero installs in usage.timeline
And it should have non-zero values for usage.stats
# And it should have 5 entries for maintenance.timeline
Collaborator


Similar feedback as this comment - https://github.com/chanzuckerberg/napari-hub/pull/957/files#r1144121074

If the limit is 5, we should look for <= 5

And it should have 15 entries for usage.timeline
And it should have all zero installs in usage.timeline
And it should have zero values for usage.stats
# And it should have 15 entries for maintenance.timeline
Collaborator


@then(parsers.parse('it should have non-zero values for usage.stats'))
def verify_it_has_non_zero_usage_stats(context):
    stats = context['usage']['stats']
    assert stats['installs_in_last_30_days'] >= 0
Contributor

@klai95 klai95 Mar 22, 2023


Just curious, in which case would stats['installs_in_last_30_days'] >= 0 be false? It looks to me like the installs would always be a number that is at least 0.

Collaborator Author

@manasaV3 manasaV3 Mar 22, 2023


Yes, but as we are checking for a specific plugin, we cannot confirm that the plugin has had installs in the last 30 days.

This highlights that we might have to revisit what we are testing.
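A non-vacuous version of the stats check, along the lines this thread suggests, might cross-check the stat against the timeline rather than asserting `>= 0`. The sketch below is purely illustrative: the field names match the snippet above, but the consistency rule (the 30-day count not exceeding the timeline total) is an assumption about the API, not a documented guarantee.

```python
# Illustrative sketch only: a stats assertion that cannot be vacuously true
# for a non-negative count. Assumes installs_in_last_30_days never exceeds
# the sum of the monthly timeline entries.

def verify_usage_stats_consistent(context: dict) -> None:
    stats = context['usage']['stats']
    timeline = context['usage']['timeline']
    installs = stats['installs_in_last_30_days']
    assert isinstance(installs, int) and installs >= 0
    # Cross-check against the timeline instead of only checking the sign.
    assert installs <= sum(entry['installs'] for entry in timeline)

sample = {'usage': {
    'stats': {'installs_in_last_30_days': 3},
    'timeline': [{'installs': 2}, {'installs': 5}],
}}
verify_usage_stats_consistent(sample)  # passes: 3 <= 7
```

This still cannot guarantee a specific plugin has recent installs, which is the underlying problem being discussed, but it at least fails on internally inconsistent responses.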

@klai95
Contributor

klai95 commented Mar 22, 2023

Description

Closes #818

  • Fixes the broken GitHub workflow that executes the integration tests, and runs it against the dev environment
  • Updates the metrics tests to use usage and maintenance

Correct me if I am wrong, but I thought we had decided to close issues manually after releases, instead of closing the issue as the PR gets merged in.

@richaagarwal
Collaborator

Description

Closes #818

  • Fixes the broken GitHub workflow that executes the integration tests, and runs it against the dev environment
  • Updates the metrics tests to use usage and maintenance

Correct me if I am wrong, but I thought we had decided to close issues manually after releases, instead of closing the issue as the PR gets merged in.

@klai95 You can find the instructions on linking or closing issues with PRs in the pull request template: https://raw.githubusercontent.com/chanzuckerberg/napari-hub/main/.github/pull_request_template.md. In this case, tests are updated immediately so no deployment is required to close the issue.

Contributor

@klai95 klai95 left a comment


Nicely done! No more blocking feedback. The only thing I'd add is that we should document, or at least keep in mind, that these behavioral tests will continually need to be updated as the implementation of the API endpoints changes.

@manasaV3 manasaV3 merged commit 5759f05 into main Mar 24, 2023
@manasaV3 manasaV3 deleted the bdd-workflow-update branch March 24, 2023 18:10
Development

Successfully merging this pull request may close these issues.

Integration tests for backend API
4 participants