Actions: vllm-project/vllm

Showing runs from all workflows
159,026 workflow runs

[core] Pass all driver env vars to ray workers unless excluded (#14099)
pre-commit #5885: Commit bf13d40 pushed by youkaichao
March 4, 2025 03:44 In progress main
Push on main
CodeQL #2165: by youkaichao
March 4, 2025 03:44 In progress main
[DRAFT] Remove linear hack outside transformers backend
Lint and Deploy Charts #8793: Pull request #14177 opened by Isotr0py
March 4, 2025 03:41 In progress Isotr0py:linear-bias
[DRAFT] Remove linear hack outside transformers backend
pre-commit #5884: Pull request #14177 opened by Isotr0py
March 4, 2025 03:41 In progress Isotr0py:linear-bias
[DRAFT] Remove linear hack outside transformers backend
Cleanup PR Body #5204: Pull request #14177 opened by Isotr0py
March 4, 2025 03:41 16s
[DRAFT] Remove linear hack outside transformers backend
PR Reminder Comment Bot #4110: Pull request #14177 opened by Isotr0py
March 4, 2025 03:41 11s
[Bugfix] Fix allowed_token_ids for v1 Sampler
Cleanup PR Body #5203: Pull request #14169 edited by houseroad
March 4, 2025 03:35 18s
[Bugfix] Fix allowed_token_ids for v1 Sampler
Cleanup PR Body #5202: Pull request #14169 edited by houseroad
March 4, 2025 03:34 16s
[Bugfix] Fix allowed_token_ids for v1 Sampler
Lint and Deploy Charts #8792: Pull request #14169 synchronize by houseroad
March 4, 2025 03:30 6m 53s houseroad:fix_v1_allowed_token_ids
Fix enabling torch profiler in openai chat backend
Lint and Deploy Charts #8791: Pull request #14162 synchronize by huaqiangwang
March 4, 2025 03:21 6m 58s huaqiangwang:openai-chat-profile
Add CUDA kernel for per_token_group_quant_fp8
Cleanup PR Body #5201: Pull request #14175 opened by mgoin
March 4, 2025 03:14 13s
Add CUDA kernel for per_token_group_quant_fp8
PR Reminder Comment Bot #4109: Pull request #14175 opened by mgoin
March 4, 2025 03:14 14s
add cutlass support for blackwell fp8 gemm
Add label on auto-merge enabled #1805: Pull request #13798 auto_merge_enabled by tlrmchlsmth
March 4, 2025 03:13 10s
[Misc] Remove lru_cache in NvmlCudaPlatform (#14156)
pre-commit #5880: Commit 989f4f4 pushed by youkaichao
March 4, 2025 03:09 4m 54s main
Push on main
CodeQL #2164: by youkaichao
March 4, 2025 03:09 2m 54s main
Fix enabling torch profiler in openai chat backend
Cleanup PR Body #5200: Pull request #14162 edited by huaqiangwang
March 4, 2025 03:04 13s
Fix enabling torch profiler in openai chat backend
Lint and Deploy Charts #8789: Pull request #14162 synchronize by huaqiangwang
March 4, 2025 03:04 7m 33s huaqiangwang:openai-chat-profile
[CI] Make UT cases in test_comm_ops.py compatible with more devices.
Cleanup PR Body #5199: Pull request #14091 edited by wwfu109
March 4, 2025 03:04 17s
Fix enabling torch profiler in openai chat backend
Cleanup PR Body #5198: Pull request #14162 edited by huaqiangwang
March 4, 2025 03:03 17s
[CI] Make UT cases in test_comm_ops.py compatible with more devices.
Cleanup PR Body #5197: Pull request #14091 edited by wwfu109
March 4, 2025 03:02 13s