Merge dev #255

Open · wants to merge 43 commits into base: main

Commits (43)
59ed866
Fix a bug in reuse_existing_container
arjunsuresh Feb 15, 2025
3e56970
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Feb 15, 2025
30b2d06
Fix dummy measurements file generation
arjunsuresh Feb 15, 2025
829b731
[Automated Commit] Format Codebase [skip ci]
github-actions[bot] Feb 15, 2025
26b1cab
Merge branch 'mlcommons:dev' into dev
arjunsuresh Feb 15, 2025
7d8a48d
Merge pull request #227 from GATEOverflow/dev
arjunsuresh Feb 15, 2025
bb34275
Prevent errors on get-platform-details
arjunsuresh Feb 15, 2025
1e9517b
Merge pull request #228 from GATEOverflow/dev
arjunsuresh Feb 15, 2025
e1293e9
Make noinfer-scenario results the default for mlperf-inference (#230)
arjunsuresh Feb 16, 2025
e98137e
Use inference dev branch for submission preprocess (#231)
arjunsuresh Feb 16, 2025
b1272b2
Fix mlcr usage in docs and actions (#232)
arjunsuresh Feb 16, 2025
9900cb4
Fix run-mobilenet (#233)
arjunsuresh Feb 16, 2025
9bbf7ce
Fixes to run-all scripts (#234)
arjunsuresh Feb 16, 2025
fd8269b
Fix for issue #236 (#237)
anandhu-eng Feb 17, 2025
d7e33d7
Refactored pointpainting model download script (#238)
anandhu-eng Feb 17, 2025
1d421fc
Add support for downloading waymo from mlcommons checkpoint (#235)
anandhu-eng Feb 17, 2025
dd6f2a8
Added get-aocc script (#240)
arjunsuresh Feb 17, 2025
b808608
Minor fixes to improve submission generation experience (#242)
arjunsuresh Feb 18, 2025
224cf56
Make full the default variation for retinanet dataset (#241)
anandhu-eng Feb 18, 2025
b5a4fd6
Fixes for mlperf inference submissions (#243)
arjunsuresh Feb 18, 2025
4bab9ad
Update meta.yaml (#244)
arjunsuresh Feb 18, 2025
af51c74
Exit condition provided for commit (#245)
anandhu-eng Feb 19, 2025
86f7e11
Fixes for mlperf submission (#249)
arjunsuresh Feb 20, 2025
de26510
Cleaned the boolean usage in MLCFlow (#246)
Sid9993 Feb 20, 2025
e8b6d47
Update build_wheel.yml
arjunsuresh Feb 21, 2025
9934345
Map rocm and gpu to cuda (#251)
anandhu-eng Feb 21, 2025
1c6fda2
Fixes get,igbh,dataset on host (#252)
arjunsuresh Feb 21, 2025
c7af0d3
Update customize.py | Fix boolean value for --compliance (#254)
arjunsuresh Feb 21, 2025
c594c15
Merge branch 'main' into dev
arjunsuresh Feb 21, 2025
3a9fcb1
Fix for no-cache in run-mobilenets (#256)
arjunsuresh Feb 22, 2025
1879acd
Added alternative download link for imagenet-aux (#257)
arjunsuresh Feb 22, 2025
f8e76e5
Code cleanup for mobilenet runs
arjunsuresh Feb 23, 2025
d8bf122
Make low disk usage the default in mobilenet run (#264)
arjunsuresh Feb 23, 2025
d8cae3a
Add script to download waymo calibration dataset (#265)
anandhu-eng Feb 24, 2025
3ea96dd
Fixes for mobilenet run (#266)
arjunsuresh Feb 24, 2025
5df666a
Update getting-started.md
arjunsuresh Feb 24, 2025
fd34624
Support mlperf inference submission tar file generation (#267)
arjunsuresh Feb 24, 2025
b964b0b
convert relative to abs file path (#270)
anandhu-eng Feb 24, 2025
231a219
Cleanup for run-mobilenet script (#272)
arjunsuresh Feb 25, 2025
72fbb7a
Added command to untar waymo dataset files (#274)
arjunsuresh Feb 25, 2025
7ec1c43
Support min_duration (#277)
arjunsuresh Feb 26, 2025
301de96
Cleanup mobilenet runs (#279)
arjunsuresh Feb 27, 2025
c8cb2c3
Update classification.cpp (#280)
arjunsuresh Feb 27, 2025
3 changes: 2 additions & 1 deletion .github/workflows/build_wheel.yml
@@ -6,10 +6,11 @@ on:

push:
branches:
-      - dev
+      - dev_off
paths:
- VERSION


jobs:

build_wheels:
@@ -22,5 +22,5 @@ jobs:
export MLC_REPOS=$HOME/GH_MLC
pip install --upgrade mlc-scripts
mlc pull repo
- mlcr --tags=run-mlperf,inference,_all-scenarios,_full,_r4.1-dev --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=amd --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --device=rocm --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet --docker_skip_run_cmd=yes
- # mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=dev --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
+ mlcr run-mlperf,inference,_all-scenarios,_full,_r4.1-dev --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=amd --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --device=rocm --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet --docker_skip_run_cmd=yes
+ # mlcr push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=dev --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
2 changes: 1 addition & 1 deletion .github/workflows/test-image-classification-onnx.yml
@@ -38,4 +38,4 @@ jobs:
mlc pull repo ${{ github.event.pull_request.head.repo.html_url }} --branch=${{ github.event.pull_request.head.ref }}
- name: Test image classification with ONNX
run: |
- mlcr --tags=python,app,image-classification,onnx --quiet
+ mlcr python,app,image-classification,onnx --quiet
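Nearly every hunk in this PR makes the same mechanical edit: the script tags passed to `mlcr` move from a `--tags=` flag to a positional argument. As a sketch of the rewrite rule (the `mlcr` CLI itself is not needed here; the old and new strings are taken verbatim from the hunk above):

```shell
# Old-style invocation, as it appeared before this PR.
old='mlcr --tags=python,app,image-classification,onnx --quiet'

# The rewrite applied throughout these workflows: drop the literal
# "--tags=" so the comma-separated tags become a positional argument.
new=$(printf '%s\n' "$old" | sed 's/ --tags=/ /')

echo "$new"   # -> mlcr python,app,image-classification,onnx --quiet
```

The same substitution accounts for every `mlcr` change in the workflow files below.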
@@ -22,5 +22,5 @@ jobs:
export MLC_REPOS=$HOME/GH_MLC
pip install --upgrade mlc-scripts
pip install tabulate
- mlcr --tags=run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=intel --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --device=cpu --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
- mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
+ mlcr run-mlperf,inference,_all-scenarios,_submission,_full,_r4.1-dev --preprocess_submission=yes --execution_mode=valid --pull_changes=yes --pull_inference_changes=yes --model=${{ matrix.model }} --submitter="MLCommons" --hw_name=IntelSPR.24c --implementation=intel --backend=pytorch --category=datacenter --division=open --scenario=Offline --docker_dt=yes --docker_it=no --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --device=cpu --use_dataset_from_host=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean --docker --quiet
+ mlcr push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_unofficial_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from GH action on SPR.24c" --quiet --submission_dir=$HOME/gh_action_submissions --hw_name=IntelSPR.24c
26 changes: 13 additions & 13 deletions .github/workflows/test-mlc-script-features.yml
@@ -35,30 +35,30 @@ jobs:

- name: Test Python venv
run: |
- mlcr --tags=install,python-venv --name=test --quiet
+ mlcr install,python-venv --name=test --quiet
mlc search cache --tags=get,python,virtual,name-test --quiet

- name: Test variations
run: |
- mlcr --tags=get,dataset,preprocessed,imagenet,_NHWC --quiet
+ mlcr get,dataset,preprocessed,imagenet,_NHWC --quiet
mlc search cache --tags=get,dataset,preprocessed,imagenet,-_NCHW
mlc search cache --tags=get,dataset,preprocessed,imagenet,-_NHWC

- name: Test versions
continue-on-error: true
if: runner.os == 'linux'
run: |
- mlcr --tags=get,generic-python-lib,_package.scipy --version=1.9.3 --quiet
+ mlcr get,generic-python-lib,_package.scipy --version=1.9.3 --quiet
test $? -eq 0 || exit $?
- mlcr --tags=get,generic-python-lib,_package.scipy --version=1.9.2 --quiet
+ mlcr get,generic-python-lib,_package.scipy --version=1.9.2 --quiet
test $? -eq 0 || exit $?
# Need to add find cache here
- # mlcr --tags=get,generic-python-lib,_package.scipy --version=1.9.3 --quiet --only_execute_from_cache=True
+ # mlcr get,generic-python-lib,_package.scipy --version=1.9.3 --quiet --only_execute_from_cache=True
# test $? -eq 0 || exit 0

- name: Test python install from src
run: |
- mlcr --tags=python,src,install,_shared --version=3.9.10 --quiet
+ mlcr python,src,install,_shared --version=3.9.10 --quiet
mlc search cache --tags=python,src,install,_shared,version-3.9.10

test_docker:
@@ -81,11 +81,11 @@

- name: Run docker container from dockerhub on linux
run: |
- mlcr --tags=run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=cm-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=cknowledge --quiet
+ mlcr run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=cm-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=cknowledge --quiet

- name: Run docker container locally on linux
run: |
- mlcr --tags=run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=mlc-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=local --quiet
+ mlcr run,docker,container --adr.compiler.tags=gcc --docker_mlc_repo=mlcommons@mlperf-automations --docker_mlc_repo_branch=dev --image_name=mlc-script-app-image-classification-onnx-py --env.MLC_DOCKER_RUN_SCRIPT_TAGS=app,image-classification,onnx,python --env.MLC_DOCKER_IMAGE_BASE=ubuntu:22.04 --env.MLC_DOCKER_IMAGE_REPO=local --quiet

test_mlperf_retinanet_cpp_venv:
runs-on: ubuntu-latest
@@ -107,15 +107,15 @@

- name: Run MLPerf Inference Retinanet with native and virtual Python
run: |
- mlcr --tags=app,mlperf,inference,generic,_cpp,_retinanet,_onnxruntime,_cpu --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --scenario=Offline --mode=accuracy --test_query_count=10 --rerun --quiet
+ mlcr app,mlperf,inference,generic,_cpp,_retinanet,_onnxruntime,_cpu --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --scenario=Offline --mode=accuracy --test_query_count=10 --rerun --quiet

- mlcr --tags=app,mlperf,inference,generic,_cpp,_retinanet,_onnxruntime,_cpu --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --scenario=Offline --mode=performance --test_query_count=10 --rerun --quiet
+ mlcr app,mlperf,inference,generic,_cpp,_retinanet,_onnxruntime,_cpu --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --scenario=Offline --mode=performance --test_query_count=10 --rerun --quiet

- mlcr --tags=install,python-venv --version=3.10.8 --name=mlperf --quiet
+ mlcr install,python-venv --version=3.10.8 --name=mlperf --quiet

export MLC_SCRIPT_EXTRA_CMD="--adr.python.name=mlperf"

- mlcr --tags=run,mlperf,inference,_submission,_short --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --submitter=MLCommons --implementation=cpp --hw_name=default --model=retinanet --backend=onnxruntime --device=cpu --scenario=Offline --quiet
+ mlcr run,mlperf,inference,_submission,_short --adr.python.version_min=3.8 --adr.compiler.tags=gcc --adr.openimages-preprocessed.tags=_50 --submitter=MLCommons --implementation=cpp --hw_name=default --model=retinanet --backend=onnxruntime --device=cpu --scenario=Offline --quiet

# Step for Linux/MacOS
- name: Randomly Execute Step (Linux/MacOS)
@@ -160,4 +160,4 @@ jobs:
git config --global credential.https://garden.eu.org.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
- mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
+ mlcr push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
2 changes: 1 addition & 1 deletion .github/workflows/test-mlperf-inference-abtf-poc.yml
@@ -114,4 +114,4 @@ jobs:

- name: Test MLPerf Inference ABTF POC using ${{ matrix.backend }} on ${{ matrix.os }}
run: |
- mlcr --tags=run-abtf,inference,_poc-demo --test_query_count=2 --adr.cocoeval.version_max=1.5.7 --adr.cocoeval.version_max_usable=1.5.7 --quiet ${{ matrix.extra-args }} ${{ matrix.docker }} -v
+ mlcr run-abtf,inference,_poc-demo --test_query_count=2 --adr.cocoeval.version_max=1.5.7 --adr.cocoeval.version_max_usable=1.5.7 --quiet ${{ matrix.extra-args }} ${{ matrix.docker }} -v
@@ -43,11 +43,11 @@ jobs:
- name: Test MLPerf Inference Bert ${{ matrix.backend }} on ${{ matrix.os }}
if: matrix.os == 'windows-latest'
run: |
- mlcr --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --hw_name=gh_${{ matrix.os }} --model=bert-99 --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --adr.loadgen.tags=_from-pip --pip_loadgen=yes --precision=${{ matrix.precision }} --target_qps=1 -v --quiet
+ mlcr run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --hw_name=gh_${{ matrix.os }} --model=bert-99 --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --adr.loadgen.tags=_from-pip --pip_loadgen=yes --precision=${{ matrix.precision }} --target_qps=1 -v --quiet
- name: Test MLPerf Inference Bert ${{ matrix.backend }} on ${{ matrix.os }}
if: matrix.os != 'windows-latest'
run: |
- mlcr --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --hw_name=gh_${{ matrix.os }}_x86 --model=bert-99 --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --precision=${{ matrix.precision }} --target_qps=1 -v --quiet
+ mlcr run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --hw_name=gh_${{ matrix.os }}_x86 --model=bert-99 --backend=${{ matrix.backend }} --device=cpu --scenario=Offline --test_query_count=5 --precision=${{ matrix.precision }} --target_qps=1 -v --quiet
- name: Randomly Execute Step
id: random-check
run: |
@@ -77,4 +77,4 @@ jobs:
git config --global credential.https://garden.eu.org.helper "!gh auth git-credential"
git config --global credential.https://gist.github.com.helper ""
git config --global credential.https://gist.github.com.helper "!gh auth git-credential"
- mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
+ mlcr push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from R50 GH action on ${{ matrix.os }}" --quiet
4 changes: 2 additions & 2 deletions .github/workflows/test-mlperf-inference-dlrm.yml
@@ -24,7 +24,7 @@ jobs:
source gh_action/bin/activate
export MLC_REPOS=$HOME/GH_MLC
python3 -m pip install mlperf
- mlcr --tags=run-mlperf,inference,_performance-only --pull_changes=yes --pull_inference_changes=yes --submitter="MLCommons" --model=dlrm-v2-99 --implementation=reference --backend=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --clean
+ mlcr run-mlperf,inference,_performance-only --pull_changes=yes --pull_inference_changes=yes --submitter="MLCommons" --model=dlrm-v2-99 --implementation=reference --backend=pytorch --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --clean

build_intel:
if: github.repository_owner == 'gateoverflow_off'
@@ -44,4 +44,4 @@
export MLC_REPOS=$HOME/GH_MLC
python3 -m pip install mlperf
mlc pull repo
- mlcr --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --model=dlrm-v2-99 --implementation=intel --batch_size=1 --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean
+ mlcr run-mlperf,inference,_submission,_short --submitter="MLCommons" --model=dlrm-v2-99 --implementation=intel --batch_size=1 --backend=${{ matrix.backend }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=1 --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --clean
6 changes: 3 additions & 3 deletions .github/workflows/test-mlperf-inference-gptj.yml
@@ -5,7 +5,7 @@ name: MLPerf inference GPT-J

on:
schedule:
-    - cron: "15 19 * * *"
+    - cron: "15 19 1 * *"

jobs:
build:
@@ -26,6 +26,6 @@ jobs:
export MLC_REPOS=$HOME/GH_MLC
python3 -m pip install --upgrade mlc-scripts
mlc pull repo
- mlcr --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --docker --pull_changes=yes --pull_inference_changes=yes --model=gptj-99 --backend=${{ matrix.backend }} --device=cuda --scenario=Offline --test_query_count=1 --precision=${{ matrix.precision }} --target_qps=1 --quiet --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --beam_size=1 --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --get_platform_details=yes --implementation=reference --clean
- mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions
+ mlcr run-mlperf,inference,_submission,_short --submitter="MLCommons" --docker --pull_changes=yes --pull_inference_changes=yes --model=gptj-99 --backend=${{ matrix.backend }} --device=cuda --scenario=Offline --test_query_count=1 --precision=${{ matrix.precision }} --target_qps=1 --quiet --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --docker_mlc_repo_branch=dev --adr.compiler.tags=gcc --beam_size=1 --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --get_platform_details=yes --implementation=reference --clean
+ mlcr push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions - NVIDIARTX4090" --quiet --submission_dir=$HOME/gh_action_submissions
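Besides the `mlcr` flag change, the scheduled-workflow hunks move the cron trigger from a daily run to a monthly one: the third cron field is day-of-month, so `"15 19 * * *"` fires every day at 19:15 UTC while `"15 19 1 * *"` fires only on the 1st. The helper below is purely illustrative (not part of the repo) and only distinguishes these two shapes of expression:

```shell
# Hypothetical helper: describe a 5-field cron expression of the form
# "minute hour day-of-month month day-of-week", covering only the two
# cases that appear in this PR (day-of-month either "*" or a number).
describe_cron() {
  # Split on whitespace via read, which (unlike `set -- $1`) does not
  # glob-expand the "*" fields against files in the current directory.
  read -r min hr dom _rest <<EOF
$1
EOF
  if [ "$dom" = "*" ]; then
    echo "daily at $hr:$min UTC"
  else
    echo "monthly on day $dom at $hr:$min UTC"
  fi
}

describe_cron "15 19 * * *"   # -> daily at 19:15 UTC
describe_cron "15 19 1 * *"   # -> monthly on day 1 at 19:15 UTC
```

The same daily-to-monthly change is applied to the LLAMA2-70B workflow's `"59 04 * * *"` schedule below.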

6 changes: 3 additions & 3 deletions .github/workflows/test-mlperf-inference-llama2.yml
@@ -5,7 +5,7 @@ name: MLPerf inference LLAMA2-70B

on:
schedule:
-    - cron: "59 04 * * *"
+    - cron: "59 04 1 * *"

jobs:
build_reference:
@@ -31,5 +31,5 @@ jobs:
pip install "huggingface_hub[cli]"
git config --global credential.helper store
huggingface-cli login --token ${{ secrets.HF_TOKEN }} --add-to-git-credential
- mlcr --tags=run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=llama2-70b-99 --implementation=reference --backend=${{ matrix.backend }} --precision=${{ matrix.precision }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=0.001 --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --env.MLC_MLPERF_MODEL_LLAMA2_70B_DOWNLOAD_TO_HOST=yes --clean
- mlcr --tags=push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions" --quiet --submission_dir=$HOME/gh_action_submissions
+ mlcr run-mlperf,inference,_submission,_short --submitter="MLCommons" --pull_changes=yes --pull_inference_changes=yes --model=llama2-70b-99 --implementation=reference --backend=${{ matrix.backend }} --precision=${{ matrix.precision }} --category=datacenter --scenario=Offline --execution_mode=test --device=${{ matrix.device }} --docker --quiet --test_query_count=1 --target_qps=0.001 --docker_it=no --docker_mlc_repo=gateoverflow@mlperf-automations --adr.compiler.tags=gcc --hw_name=gh_action --docker_dt=yes --results_dir=$HOME/gh_action_results --submission_dir=$HOME/gh_action_submissions --env.MLC_MLPERF_MODEL_LLAMA2_70B_DOWNLOAD_TO_HOST=yes --clean
+ mlcr push,github,mlperf,inference,submission --repo_url=https://github.com/mlcommons/mlperf_inference_test_submissions_v5.0 --repo_branch=auto-update --commit_message="Results from self hosted Github actions" --quiet --submission_dir=$HOME/gh_action_submissions