Clean up nf-test tests and snapshot (#1809)
Close #1807

<!--
# nf-core/sarek pull request

Many thanks for contributing to nf-core/sarek!

Please fill in the appropriate checklist below (delete whatever is not
relevant).
These are the most common things requested on pull requests (PRs).

Remember that PRs should be made against the dev branch, unless you're
preparing a pipeline release.

Learn more about contributing:
[CONTRIBUTING.md](https://github.com/nf-core/sarek/tree/master/.github/CONTRIBUTING.md)
-->

## PR checklist

- [ ] This comment contains a description of changes (with reason).
- [ ] If you've fixed a bug or added code that should be tested, add
tests!
- [ ] If you've added a new tool, have you followed the pipeline
conventions in the [contribution
docs](https://github.com/nf-core/sarek/tree/master/.github/CONTRIBUTING.md)?
- [ ] If necessary, also make a PR on the nf-core/sarek _branch_ on the
[nf-core/test-datasets](https://github.com/nf-core/test-datasets)
repository.
- [ ] Make sure your code lints (`nf-core pipelines lint`).
- [ ] Ensure the test suite passes (`nextflow run . -profile test,docker
--outdir <OUTDIR>`).
- [ ] Check for unexpected warnings in debug mode (`nextflow run .
-profile debug,test,docker --outdir <OUTDIR>`).
- [ ] Usage Documentation in `docs/usage.md` is updated.
- [ ] Output Documentation in `docs/output.md` is updated.
- [ ] `CHANGELOG.md` is updated.
- [ ] `README.md` is updated (including new tool citations and
authors/contributors).
maxulysse authored Feb 25, 2025
1 parent e705fb9 commit 0b9196a
Showing 23 changed files with 97 additions and 367 deletions.
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -15,10 +15,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
### Changed

- [1806](https://github.com/nf-core/sarek/pull/1806) - Migrate pipeline pytest vcf concatenation tests to nf-test
+- [1809](https://github.com/nf-core/sarek/pull/1809) - Replace `getReadsMD5()` with the `readsMD5` property from the `nft-bam` plugin, for consistency with the `nft-vcf` plugin

### Fixed

- [1806](https://github.com/nf-core/sarek/pull/1806) - Fix some nf-test assertions
+- [1809](https://github.com/nf-core/sarek/pull/1809) - Handle nf-test snapshotting of empty lists in a better way (https://github.com/nf-core/sarek/issues/1807)

### Removed

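Both CHANGELOG entries rest on plain Groovy behavior, which the per-file diffs then apply mechanically: property access such as `obj.readsMD5` dispatches to the getter `getReadsMD5()`, and an explicit sentinel string keeps snapshots of empty collections unambiguous. A minimal sketch of both points (the `ReadsFile` class and MD5 value are illustrative stand-ins, not the `nft-bam` API):

```groovy
// Hypothetical stand-in for an nft-bam BAM/CRAM handle (illustrative only).
class ReadsFile {
    String getReadsMD5() { 'd41d8cd98f00b204e9800998ecf8427e' }
}

def f = new ReadsFile()
// Groovy property syntax calls the getter, so both spellings are equivalent:
assert f.readsMD5 == f.getReadsMD5()

// The empty-list fix: snapshot a sentinel string instead of [],
// so the snapshot file records intent explicitly.
def stable_path = []
def snapshotValue = stable_path.isEmpty() ? 'No stable content' : stable_path
assert snapshotValue == 'No stable content'
```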
2 changes: 1 addition & 1 deletion tests/aligner-bwa-mem.nf.test
@@ -38,7 +38,7 @@ nextflow_pipeline {
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
2 changes: 1 addition & 1 deletion tests/aligner-bwa-mem2.nf.test
@@ -38,7 +38,7 @@ nextflow_pipeline {
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
2 changes: 1 addition & 1 deletion tests/aligner-dragmap.nf.test
@@ -38,7 +38,7 @@ nextflow_pipeline {
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
2 changes: 1 addition & 1 deletion tests/default.nf.test
@@ -36,7 +36,7 @@ nextflow_pipeline {
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] },
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] },
// All vcf files
vcf_files.collect{ file -> [ file.getName(), path(file.toString()).vcf.variantsMD5] }
).match() }
4 changes: 2 additions & 2 deletions tests/save_output_as_bam.nf.test
@@ -33,9 +33,9 @@ nextflow_pipeline {
// All stable path name, with a relative path
stable_name,
// All files with stable contents
-stable_path,
+stable_path.isEmpty() ? 'No stable content' : stable_path,
// All bam files
-bam_files.collect{ file -> [ file.getName(), bam(file.toString()).getReadsMD5() ] }
+bam_files.collect{ file -> [ file.getName(), bam(file.toString()).readsMD5 ] }
).match() }
)
}
12 changes: 5 additions & 7 deletions tests/save_output_as_bam.nf.test.snap
@@ -32,9 +32,7 @@
"preprocessing/mapped/test/test.sorted.bam.bai",
"reference"
],
-[
-
-],
+"No stable content",
[
[
"test.sorted.bam",
@@ -43,9 +41,9 @@
]
],
"meta": {
-"nf-test": "0.9.0",
-"nextflow": "24.09.0"
+"nf-test": "0.9.2",
+"nextflow": "24.10.4"
},
-"timestamp": "2024-10-08T11:11:44.283548"
+"timestamp": "2025-02-25T10:18:22.407985941"
}
}
}
4 changes: 2 additions & 2 deletions tests/saved_mapped.nf.test
@@ -35,9 +35,9 @@ nextflow_pipeline {
// All stable path name, with a relative path
stable_name,
// All files with stable contents
-stable_path,
+stable_path.isEmpty() ? 'No stable content' : stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
6 changes: 2 additions & 4 deletions tests/saved_mapped.nf.test.snap
@@ -32,9 +32,7 @@
"preprocessing/mapped/test/test.sorted.cram.crai",
"reference"
],
-[
-
-],
+"No stable content",
[
[
"test.sorted.cram",
@@ -46,6 +44,6 @@
"nf-test": "0.9.2",
"nextflow": "24.10.4"
},
-"timestamp": "2025-02-24T19:13:29.577785838"
+"timestamp": "2025-02-25T10:19:32.705278311"
}
}
2 changes: 1 addition & 1 deletion tests/sentieon.nf.test
@@ -40,7 +40,7 @@ nextflow_pipeline {
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
16 changes: 8 additions & 8 deletions tests/start_from_markduplicates.nf.test
@@ -5,7 +5,7 @@ nextflow_pipeline {
tag "pipeline"
tag "pipeline_sarek"

-test("Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step markduplicates --tools null ") {
+test("Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step markduplicates --tools null") {

when {
params {
@@ -37,13 +37,13 @@
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
}

-test("Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step markduplicates --skip_tools markduplicates --tools null ") {
+test("Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step markduplicates --skip_tools markduplicates --tools null") {

when {
params {
@@ -76,14 +76,14 @@
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.isEmpty() ? 'No CRAM files' : cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
}


-test("Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step markduplicates --tools null ") {
+test("Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step markduplicates --tools null") {

when {
params {
@@ -115,13 +115,13 @@
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
}

-test("Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step markduplicates --skip_tools markduplicates --tools null ") {
+test("Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step markduplicates --skip_tools markduplicates --tools null") {

when {
params {
@@ -154,7 +154,7 @@
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
20 changes: 9 additions & 11 deletions tests/start_from_markduplicates.nf.test.snap
@@ -1,5 +1,5 @@
{
-"Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step markduplicates --skip_tools markduplicates --tools null ": {
+"Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step markduplicates --skip_tools markduplicates --tools null": {
"content": [
9,
{
@@ -114,17 +114,15 @@
"test.sorted.regions.bed.gz.csi:md5,c5d0be930ffc9e562f21519a0d488d5d",
"test.sorted.cram.stats:md5,a15b3a5e59337db312d66020c7bb93ac"
],
-[
-
-]
+"No CRAM files"
],
"meta": {
-"nf-test": "0.9.0",
-"nextflow": "24.04.4"
+"nf-test": "0.9.2",
+"nextflow": "24.10.4"
},
-"timestamp": "2024-11-09T16:15:20.865586"
+"timestamp": "2025-02-25T11:06:19.105018276"
},
-"Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step markduplicates --tools null ": {
+"Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step markduplicates --tools null": {
"content": [
13,
{
@@ -303,7 +301,7 @@
},
"timestamp": "2024-11-09T16:12:33.604156"
},
-"Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step markduplicates --skip_tools markduplicates --tools null ": {
+"Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step markduplicates --skip_tools markduplicates --tools null": {
"content": [
12,
{
@@ -451,7 +449,7 @@
},
"timestamp": "2024-11-09T16:23:55.741166"
},
-"Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step markduplicates --tools null ": {
+"Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step markduplicates --tools null": {
"content": [
13,
{
@@ -630,4 +628,4 @@
},
"timestamp": "2024-11-09T16:18:25.238396"
}
}
}
24 changes: 8 additions & 16 deletions tests/start_from_preparerecalibration.nf.test
@@ -5,7 +5,7 @@ nextflow_pipeline {
tag "pipeline"
tag "pipeline_sarek"

-test("Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step prepare_recalibration --tools null ") {
+test("Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step prepare_recalibration --tools null") {

when {
params {
@@ -25,8 +25,6 @@
// cram_files: All cram files
def cram_files = getAllFilesFromDir(params.outdir, include: ['**/*.cram'])
def fasta = params.modules_testdata_base_path + 'genomics/homo_sapiens/genome/genome.fasta'
-// vcf_files: All vcf files
-def vcf_files = getAllFilesFromDir(params.outdir, include: ['**/*.vcf.gz'])
assertAll(
{ assert workflow.success},
{ assert snapshot(
@@ -39,15 +37,13 @@
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] },
-// All vcf files
-vcf_files.collect{ file -> [ file.getName(), path(file.toString()).vcf.variantsMD5] }
+cram_files.isEmpty() ? 'No CRAM files' : cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
}

-test("Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step prepare_recalibration --skip_tools baserecalibrator --tools strelka ") {
+test("Run with profile test | --input tests/csv/3.0/mapped_single_bam.csv --step prepare_recalibration --skip_tools baserecalibrator --tools strelka") {

when {
params {
@@ -82,7 +78,7 @@
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] },
+cram_files.isEmpty() ? 'No CRAM files' : cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] },
// All vcf files
vcf_files.collect{ file -> [ file.getName(), path(file.toString()).vcf.variantsMD5] }
).match() }
@@ -91,7 +87,7 @@
}


-test("Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step prepare_recalibration --tools null ") {
+test("Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step prepare_recalibration --tools null") {

when {
params {
@@ -111,8 +107,6 @@
// cram_files: All cram files
def cram_files = getAllFilesFromDir(params.outdir, include: ['**/*.cram'])
def fasta = params.modules_testdata_base_path + 'genomics/homo_sapiens/genome/genome.fasta'
-// vcf_files: All vcf files
-def vcf_files = getAllFilesFromDir(params.outdir, include: ['**/*.vcf.gz'])
assertAll(
{ assert workflow.success},
{ assert snapshot(
@@ -125,15 +119,13 @@
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] },
-// All vcf files
-vcf_files.collect{ file -> [ file.getName(), path(file.toString()).vcf.variantsMD5] }
+cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
).match() }
)
}
}

-test("Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step prepare_recalibration --skip_tools baserecalibrator --tools strelka ") {
+test("Run with profile test | --input tests/csv/3.0/mapped_single_cram.csv --step prepare_recalibration --skip_tools baserecalibrator --tools strelka") {

when {
params {
@@ -168,7 +160,7 @@
// All files with stable contents
stable_path,
// All cram files
-cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).getReadsMD5() ] },
+cram_files.isEmpty() ? 'No CRAM files' : cram_files.collect{ file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] },
// All vcf files
vcf_files.collect{ file -> [ file.getName(), path(file.toString()).vcf.variantsMD5] }
).match() }
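Across these files, the assertions converge on a single shape. A hedged sketch of the resulting `then` block (`getAllFilesFromDir`, `cram`, `snapshot`, and `assertAll` come from nf-test and its plugins; the `ignore` glob and variable layout are illustrative, not copied from any one test):

```groovy
// Sketch of the common assertion pattern (not a complete nf-test file).
def stable_path = getAllFilesFromDir(params.outdir, ignore: ['**/*.{html,json,txt}'])
def cram_files  = getAllFilesFromDir(params.outdir, include: ['**/*.cram'])
def fasta = params.modules_testdata_base_path + 'genomics/homo_sapiens/genome/genome.fasta'
assertAll(
    { assert workflow.success },
    { assert snapshot(
        // Empty collections become sentinel strings, so the snapshot is explicit.
        stable_path.isEmpty() ? 'No stable content' : stable_path,
        // `readsMD5` (property form of `getReadsMD5()`) hashes the reads rather
        // than the whole file, keeping the snapshot stable across reruns.
        cram_files.isEmpty() ? 'No CRAM files'
                             : cram_files.collect { file -> [ file.getName(), cram(file.toString(), fasta).readsMD5 ] }
    ).match() }
)
```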