Replicate RAJAPerf Experiments #599

Draft · michaelmckinsey1 wants to merge 4 commits into develop
Conversation

@michaelmckinsey1 (Collaborator) commented on Feb 8, 2025

Description

Replicate recent RAJAPerf experiments that were built with CMake.

This PR dovetails with LLNL/RAJAPerf#501. @daboehme, I couldn't tag you there, but please help us get both RAJAPerf and Benchpark into a better state so that they can work together.

  • Enable passing additional command-line arguments to the raja-perf binary (see the sketch after this list)
  • Caliper configs we need to be able to use:
    • time
    • topdown
    • gpu-side calls (name?)
    • ncu
  • Replicate Datasets:
    • V100
    • MI250x
    • MI300a
    • A100
    • GH200
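
A minimal sketch of what the first item above could look like in repo/raja-perf/application.py, reusing the "run"/"suite" names from the diff quoted further down; the additional_args variable, its default, and the template string are illustrative assumptions, not the merged definitions.

# Hypothetical Ramble directives (names and defaults are assumptions): expose an
# optional pass-through variable so extra raja-perf flags can be appended per
# experiment without hand-editing the generated execute_experiment scripts.

executable(
    "run",
    template=["{execute} --size {size} --repfact {repfact} {additional_args}"],
    use_mpi=True,
)

workload("suite", executables=["run"])

workload_variable(
    "additional_args",
    default="",  # empty by default, so no flags are injected unless explicitly set
    workloads=["suite"],
    description="Extra command-line arguments passed verbatim to raja-perf.exe",
)

With an empty default the rendered command still ends in a trailing space, but no stray --variants/--tunings flags appear when the variable is left unset.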

Adding/modifying a benchmark (docs: Adding a Benchmark)

  • Modify experiments/raja-perf/experiment.py
  • Modify repo/raja-perf/application.py (an illustrative figure-of-merit sketch follows this list)
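
For the application.py side, a typical Ramble change also declares where the benchmark output lands and which figure of merit to scrape from it. This is a hedged sketch only; the log-file path, regex, and FOM name are assumptions, not RAJAPerf's actual output format.

figure_of_merit(
    "Kernel time",
    log_file="{experiment_run_dir}/{experiment_name}.out",    # assumed output location
    fom_regex=r"Kernel time \(s\):\s+(?P<time>[0-9.eE+-]+)",  # assumed output line format
    group_name="time",
    units="s",
)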

V100 setup reproducer

benchpark system init --dest=lassen-system llnl-sierra
benchpark experiment init --dest=raja-perf-benchmark raja-perf+cuda+strong~single_node caliper=time version=2024.07.0 scaling-factor=4 scaling-iterations=2 total_size=33554432 repfact=5
benchpark setup ./raja-perf-benchmark/ ./lassen-system/ v100-raja-perf-2025/

[Figure omitted: scaling results; text values shown on the figure are raw values, not speedup]

@github-actions bot added the experiment (New or modified experiment) and application labels on Feb 8, 2025
@michaelmckinsey1 self-assigned this on Feb 8, 2025
Comment on lines +44 to +61
# executable(
# "run",
# template=["{execute}" + " --size {size}" + " --repfact {repfact}" + " {additional_args}"],
# use_mpi=True
# )

workload("suite", executables=["run"])

# workload_variable(
# "additional_args",
# default="--variants {variants} --tunings {tunings}",
# workloads=["suite"],
# description="",
# )

# workload_variable("variants", default="", workloads=["suite"], description="")

executable('run', 'raja-perf.exe', use_mpi=True)
# workload_variable("tunings", default="", workloads=["suite"], description="")
@michaelmckinsey1 (Collaborator, Author) commented on Feb 11, 2025
I would like to have the option to pass --variants and --tunings instead of having to manually edit the execute_experiment files. The problem with this code is that when no argument is provided, the flags are still injected, e.g. raja-perf.exe --repfact 5 ... --variants --tunings, when ideally they should not appear at all if not provided: raja-perf.exe --repfact 5 ...

This can be done with a check in experiment.py, but that approach results in the arguments appearing in experiment_run_dir (e.g. ...raja-perf.exe/raja-perf_suite_strong_1_33554432_5 --variants RAJA_CUDA --tunings block_256 ...), since execute is an experiment variable.
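
One way to express the conditional construction described above is a small helper in experiments/raja-perf/experiment.py; the function name and call site are hypothetical, and this does not address the experiment_run_dir naming issue mentioned in the comment.

def _optional_flags(variants: str = "", tunings: str = "") -> str:
    # Build the "--variants ..." / "--tunings ..." fragments only for non-empty
    # values, so nothing is appended to the raja-perf command line when unset.
    parts = []
    if variants:
        parts.append(f"--variants {variants}")
    if tunings:
        parts.append(f"--tunings {tunings}")
    return " ".join(parts)

# _optional_flags("RAJA_CUDA", "block_256") -> "--variants RAJA_CUDA --tunings block_256"
# _optional_flags()                         -> ""  (no stray flags)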

@pearce8 requested a review from daboehme on Feb 19, 2025