Actions: NVIDIA/TensorRT

All workflows

Showing runs from all workflows
4,211 workflow runs

Failure of TensorRT 10.7 to eliminate concatenation with upstream custom layer
Blossom-CI #6984: Issue comment #4345 (comment) created by linc-nv
February 26, 2025 00:29 4s
Failure of TensorRT 10.7 to eliminate concatenation with upstream custom layer
Blossom-CI #6980: Issue comment #4345 (comment) created by jchia
February 25, 2025 03:35 5s
Unet results wrong of TensorRT 10.x when running on GPU L40s
Blossom-CI #6979: Issue comment #4351 (comment) created by Fans0014
February 25, 2025 03:20 5s
Failure of TensorRT 10.7 to eliminate concatenation with upstream custom layer
Blossom-CI #6978: Issue comment #4345 (comment) created by linc-nv
February 24, 2025 21:36 4s
[Feature request] allow uint8 output without an ICastLayer before
Blossom-CI #6977: Issue comment #4282 (comment) created by linc-nv
February 24, 2025 21:29 5s
Given an engine file, how to know what GPU model it is generated on?
Blossom-CI #6976: Issue comment #4233 (comment) created by linc-nv
February 24, 2025 20:34 6s
Tensor Parallel and Context Parallel
Blossom-CI #6975: Issue comment #4231 (comment) created by linc-nv
February 24, 2025 20:27 5s
UINT8-to-FLOAT cast after transpose breaks the graph.
Blossom-CI #6973: Issue comment #3985 (comment) created by linc-nv
February 24, 2025 20:14 5s
Unet results wrong of TensorRT 10.x when running on GPU L40s
Blossom-CI #6972: Issue comment #4351 (comment) created by Fans0014
February 24, 2025 02:25 4s
BF16 is slower than fp16 of TensorRT 9.1 when running my R50 model on A800 GPU
Blossom-CI #6971: Issue comment #3583 (comment) created by JamePeng
February 23, 2025 15:29 4s
How to infer on a Torch Tensor on GPU and return same Torch Tensor on GPU
Blossom-CI #6970: Issue comment #2506 (comment) created by finlay-hudson
February 22, 2025 16:21 4s
executeV2: Error Code 1: Cask (Cask Pooling Runner Execute Failure)
Blossom-CI #6969: Issue comment #4055 (comment) created by loryruta
February 21, 2025 22:08 5s
Why TensorRT use Convolution instead MatMul in explicit quantized model
Blossom-CI #6968: Issue comment #3266 (comment) created by vadimkantorov
February 20, 2025 10:04 6s
Why TensorRT use Convolution instead MatMul in explicit quantized model
Blossom-CI #6967: Issue comment #3266 (comment) created by phantaurus
February 20, 2025 00:21 5s
Unet results wrong of TensorRT 10.x when running on GPU L40s
Blossom-CI #6966: Issue comment #4351 (comment) created by Fans0014
February 19, 2025 11:43 5s
TensorRT 10.3 wrong results!
Blossom-CI #6962: Issue comment #4330 (comment) created by OctaAIVision
February 18, 2025 14:52 6s
Unable to load engine in c++ api
Blossom-CI #6961: Issue comment #4339 (comment) created by ninono12345
February 16, 2025 11:49 5s