
Ci/tests #9

Merged · 8 commits · Sep 16, 2024
115 changes: 115 additions & 0 deletions .github/workflows/CI.yaml
@@ -0,0 +1,115 @@
name: CI

# Concurrency group that uses the workflow name and PR number if available
# or commit SHA as a fallback. If a new build is triggered under that
# concurrency group while a previous build is running it will be canceled.
# Repeated pushes to a PR will cancel all previous builds, while multiple
# merges to main will not cancel.
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true

env:
FORCE_COLOR: "1" # Make tools pretty

permissions: {}

on:
push:
branches:
- main
pull_request:

jobs:
# Run our test suite on various combinations of OS & Python versions
run-pytest:
strategy:
fail-fast: false
matrix:
python-version: ["3.10", "3.11", "3.12"]
    # tests fail on Windows for now
os: [ubuntu-22.04, ubuntu-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install poetry
uses: abatilo/actions-poetry@v3
- name: Setup poetry
run: |
poetry config virtualenvs.create true --local
poetry config virtualenvs.in-project true --local
- uses: actions/cache@v4
name: Define a cache for the virtual environment based on the dependencies file
with:
path: |
./.venv
poetry.lock
key: venv-${{ hashFiles('pyproject.toml') }}-${{ matrix.python-version }}-${{ matrix.os }}
- name: Install the project dependencies
run: poetry install --with test
- name: Run the test
run: |
poetry run coverage run -m pytest
poetry run coverage xml
- name: Upload Coverage to Codecov
if: matrix.python-version == '3.11' && matrix.os == 'ubuntu-latest'
uses: codecov/codecov-action@v4
with:
fail_ci_if_error: true
token: ${{ secrets.CODECOV_TOKEN }}

build-site:
name: "build docs"
Collaborator:
So we run this stage as a safety measure to detect warnings in the docs (which are not caught by RTD's builds), and publishing the docs is handled by RTD, right?

Collaborator (Author):

Yes

runs-on: ubuntu-latest
steps:
- name: "Checkout repository 🛎"
uses: actions/checkout@v4
- name: Install poetry
uses: abatilo/actions-poetry@v3
- name: "Install pandoc 📝"
uses: r-lib/actions/setup-pandoc@v2
with:
pandoc-version: "latest"
- name: Setup poetry
run: |
poetry config virtualenvs.create true --local
poetry config virtualenvs.in-project true --local
- uses: actions/cache@v4
name: Define a cache for the virtual environment based on the dependencies lock file
with:
path: |
./.venv
poetry.lock
key: venv-${{ hashFiles('pyproject.toml') }}
- name: Install the project dependencies
run: poetry install --with test
- name: "Build docs and check for warnings 📖"
shell: bash
run: |
poetry run sphinx-build docs docs/_build -W

run-pyright:
Collaborator:

Why not run pre-commit altogether, which would include pyright but also black, ruff, etc.?

Collaborator (Author):

pre-commit is already run with pre-commit.ci.

You can actually see that it's run at every commit in this PR when you show all the checks.

And pre-commit no longer runs pyright, because it's a job that needs an installed environment, unlike the other linters, hence the dedicated job here.
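The dedicated job also puts the project's `.venv/bin` on `PATH` (the `echo "$PWD/.venv/bin" >> $GITHUB_PATH` step) so pyright can resolve the installed third-party imports. As a purely illustrative, hypothetical sketch of the kind of error a type checker like pyright catches (this function is not from the codebase):

```python
def area(width: float, height: float) -> float:
    """Toy function with type annotations for pyright to check."""
    return width * height

# pyright would flag a call like area("3", 4): str is not assignable to float.
# A well-typed call passes the check:
result = area(3.0, 4.0)
```

Tools like black or ruff only need the source files, which is why they can stay in pre-commit without an installed environment.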

Collaborator:

Here's what I see in the checks 🤔 Am I missing something?
[image]

Collaborator (Author):

[image] You can scroll the list of tests inside the embedded window ;)

Collaborator (Author):

Note that pre-commit, Codecov and RTD are not GitHub Actions but webhooks; that's why they only appear in the test breakdown inside the PR but not in the Actions tab.

Collaborator:

I see, thanks! :)

runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install poetry
uses: abatilo/actions-poetry@v3
- name: Setup poetry
run: |
poetry config virtualenvs.create true --local
poetry config virtualenvs.in-project true --local
- uses: actions/cache@v4
name: Define a cache for the virtual environment based on the dependencies file
with:
path: |
./.venv
poetry.lock
key: venv-${{ hashFiles('pyproject.toml') }}
- name: Install the project dependencies
run: poetry install --with test
- run: echo "$PWD/.venv/bin" >> $GITHUB_PATH
- uses: jakebailey/pyright-action@v2
name: Run pyright
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -7,6 +7,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## Unreleased

### Added

- GitHub CI, with pytest, codecov, sphinx and pyright #9

### Changed

- Move codebase to github
19 changes: 12 additions & 7 deletions pyproject.toml
@@ -29,7 +29,7 @@ PyYAML = "^6.0.1"
imageio = "^2.31.3"
imagesize = "^1.4.1"
POT = "^0.9.1"
jsonschema-rs = "^0.16.3"
jsonschema-rs = "^0.18.3"
scikit-learn = "^1.3.0"
typing-extensions = "^4.7.1"
watchdog = "^3.0.0"
@@ -61,14 +61,15 @@ scipy= ">=1, <1.14"


[tool.poetry.group.test.dependencies]
pytest = "^7.4.2"
pycocotools = "^2.0.7"
pre-commit = "^3.4.0"
coverage = "^7.3.1"
pandas-stubs = "^2.1.4.231227"
pytest-sugar = "^0.9.7"
pytest = "^8.3.3"
pycocotools = "^2.0.8"
pre-commit = "^3.8.0"
coverage = "^7.6.1"
pandas-stubs = "^2.2.2.240909"
pytest-sugar = "^1.0.0"
ipykernel = "^6.29.0"
ipywidgets = "^8.0.3"
pytest-xdist = "^3.6.1"

[tool.poetry.group.docs.dependencies]
Sphinx = "^7.2.5"
@@ -106,3 +107,7 @@ testpaths = ["test_lours", "lours"]
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

# Only measure coverage for the lours package
[tool.coverage.run]
source = ["lours"]
4 changes: 4 additions & 0 deletions test_lours/test_dataset/test_io.py
@@ -532,6 +532,7 @@ def test_from_files():
dataset.check()


@pytest.mark.xdist_group(name="fiftyone-group")
Collaborator:

Out of curiosity, why did you group all fiftyone tests?

Collaborator (Author):

This is done to ensure they are run in the same process.

You can now launch the tests locally with multiple processes, thanks to pytest-xdist: https://pytest-xdist.readthedocs.io/en/stable/

But the tests involving fiftyone must be run sequentially, because only one MongoDB instance can be running at a time.
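To sketch the mechanism (the test names below are illustrative, not the actual suite): pytest-xdist's `xdist_group` mark sends every test sharing a group name to the same worker, provided the suite is launched with `--dist loadgroup` (e.g. `pytest -n auto --dist loadgroup`).

```python
import pytest

# Tests sharing the same xdist_group name are scheduled onto a single
# pytest-xdist worker under --dist loadgroup, so they never run in
# parallel with each other (here: anything touching the shared MongoDB).
@pytest.mark.xdist_group(name="fiftyone-group")
def test_uses_shared_db_one():
    assert True


@pytest.mark.xdist_group(name="fiftyone-group")
def test_uses_shared_db_two():
    assert True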

def test_to_fiftyone():
val_dataset = from_coco(
coco_json=DATA / "coco_dataset/annotations_valid.json",
@@ -563,6 +564,7 @@ def test_to_fiftyone():
fo.delete_dataset("dataset")


@pytest.mark.xdist_group(name="fiftyone-group")
def test_to_fiftyone_empty():
val_dataset = from_coco(
coco_json=DATA / "coco_dataset/annotations_valid.json",
@@ -578,6 +580,7 @@ def test_to_fiftyone_empty():
fo.delete_dataset("dataset")


@pytest.mark.xdist_group(name="fiftyone-group")
def test_to_fiftyone_debooleanize():
dataset = from_caipy_generic(
annotations_folder=DATA / "caipy_dataset" / "tags" / "default_schema",
@@ -596,6 +599,7 @@ def test_to_fiftyone_debooleanize():
fo.delete_dataset("dataset")


@pytest.mark.xdist_group(name="fiftyone-group")
def test_to_fiftyone_keypoint():
dataset = from_coco_keypoints(
coco_json=DATA / "coco_dataset/annotations_keypoints.json",
1 change: 1 addition & 0 deletions test_lours/test_dataset/test_tags.py
@@ -121,6 +121,7 @@ def test_caipy_tags():
assert result == initial


@pytest.mark.xdist_group(name="fiftyone-group")
def test_caipy_tags_to_fiftyone():
dataset = from_caipy_generic(
annotations_folder=DATA / "caipy_dataset" / "tags" / "default_schema",
1 change: 1 addition & 0 deletions test_lours/test_evaluation/test_io.py
@@ -16,6 +16,7 @@
DATA = HERE.parent / "test_data"


@pytest.mark.xdist_group(name="fiftyone-group")
def test_fiftyone():
dataset = from_coco(
coco_json=DATA / "coco_dataset/annotations_valid_random.json",
62 changes: 32 additions & 30 deletions test_lours/test_utils/test_grouper.py
@@ -61,6 +61,7 @@ def test_make_data_pandas_compatible():
coco_json=DATA / "coco_eval/instances_val2017.json", images_root=Path(".")
)
data = coco.annotations
np.random.seed(0)
data["fake_id"] = np.random.randint(0, 10, len(data))
data["fake_size"] = np.random.randn(len(data))
group_name, pandas_group, is_category = grouper.make_pandas_compatible(
@@ -115,6 +116,7 @@ def test_label_types():
coco_json=DATA / "coco_eval/instances_val2017.json", images_root=Path(".")
)
data = coco.annotations
np.random.seed(0)
data["fake_id"] = np.random.randint(0, 10, len(data))
data["fake_size"] = np.random.randn(len(data))

@@ -123,16 +125,16 @@
)[1]
target = np.array(
[
-3.7345,
-2.941,
-2.1515,
-1.362,
-0.5725,
0.217,
1.0065,
1.796,
2.5855,
3.375,
-3.542,
-2.74,
-1.9415,
-1.143,
-0.3448,
0.4537,
1.252,
2.0505,
2.849,
3.647,
]
)
assert_almost_equal(pandas_group.cat.categories.to_numpy(), target)
@@ -143,16 +145,16 @@

target = np.array(
[
-3.57622866,
-2.81813268,
-2.03869917,
-1.29311724,
-0.54028369,
0.20698126,
0.95833345,
1.71192913,
2.45991059,
3.2138484,
-3.3643091,
-2.5960633,
-1.8442534,
-1.0818792,
-0.3264371,
0.4304018,
1.1904581,
1.9497437,
2.7433336,
3.4620295,
]
)
assert_almost_equal(pandas_group.cat.categories.to_numpy(), target)
@@ -163,16 +165,16 @@

target = np.array(
[
-3.50361545,
-2.76894588,
-1.99502576,
-1.264642,
-0.52548114,
0.20053149,
0.9358896,
1.67847603,
2.40298958,
3.16176182,
-3.28356,
-2.5578789,
-1.8114079,
-1.0532277,
-0.3176548,
0.4207354,
1.1638769,
1.907607,
2.6944044,
3.382559,
]
)
assert_almost_equal(pandas_group.cat.categories.to_numpy(), target)
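The `np.random.seed(0)` calls added in this diff make the randomly generated `fake_id` / `fake_size` columns deterministic, which is what lets the hard-coded `target` arrays above be updated once and then asserted against reliably. A minimal sketch of the idea (the column sizes here are illustrative, not taken from the suite):

```python
import numpy as np

# Seeding the global NumPy RNG pins the sequence of draws, so
# hard-coded expected values in tests stay valid across runs.
np.random.seed(0)
first_ids = np.random.randint(0, 10, 5)
first_sizes = np.random.randn(5)

np.random.seed(0)  # re-seed: the exact same sequence is produced again
second_ids = np.random.randint(0, 10, 5)
second_sizes = np.random.randn(5)

assert (first_ids == second_ids).all()
assert np.allclose(first_sizes, second_sizes)
```

Without the seed, every run would draw different values and the quantile-bin edges checked by `assert_almost_equal` would shift.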