It seems that in TensorRT 10.7, there is no tensorrt_bindings.tensorrt.ICudaEngine.max_batch_size API. Are there any other APIs that can replace this one?
@yansiyu550 In newer versions of TensorRT, the batch size is treated as an explicit dimension of the input tensor (this is often referred to as the "explicit batch" mode). For instance, if your network input originally has the dimensions [H, W, C] (height, width, channels), with explicit batch it becomes [B, H, W, C], where B is the batch dimension.
To retrieve the shape of any input tensor (including the batch size), you can use the engine.get_tensor_shape API. The shape it returns will include this batch dimension as the first element.
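As a rough illustration, here is a minimal sketch of inspecting input shapes on a deserialized engine. The file name "model.plan" is a placeholder; substitute your own serialized engine.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Load a serialized engine ("model.plan" is just a placeholder path).
with open("model.plan", "rb") as f:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    if engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT:
        shape = engine.get_tensor_shape(name)
        # With explicit batch, shape[0] is the batch dimension.
        # A value of -1 means that dimension is dynamic.
        print(f"{name}: shape={tuple(shape)}, batch dim={shape[0]}")
```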
When working with dynamic shapes, note that the maximum allowed input dimensions are constrained by the optimization profile configured in the engine. Each optimization profile specifies minimum, optimal, and maximum dimensions for each input. For details on setting up and using optimization profiles, refer to the TensorRT documentation on dynamic shapes and optimization profiles.
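For a dynamic-shape engine, the closest replacement for the old max_batch_size is the maximum dimension of the optimization profile. A sketch, assuming `engine` and `name` come from the snippet above and that profile 0 is the one the engine was built with:

```python
# get_tensor_profile_shape returns the (min, opt, max) shapes
# configured for this input in the given optimization profile.
min_shape, opt_shape, max_shape = engine.get_tensor_profile_shape(name, 0)

# With explicit batch, the first element of the max shape is the
# largest batch size this profile accepts.
max_batch_size = max_shape[0]
print(f"min={min_shape}, opt={opt_shape}, max={max_shape}")
print(f"maximum batch size for this profile: {max_batch_size}")
```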