Nccl ops correction changes #3387
base: main
Conversation
# transpose key deleted since not desirable to lower it to permute
to_delete = {
    key
    for key in settings_aot_autograd["decompositions"]
    if "detach" in key._name or "transpose" in key._name
}

for key in to_delete:
    del settings_aot_autograd["decompositions"][key]
We have a remove_detach lowering pass. Can that help here?
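(For background, a remove_detach-style lowering pass is usually a small FX graph rewrite along the lines of the sketch below; this is illustrative only, not the actual Torch-TensorRT implementation.)

# Minimal sketch of a detach-removal FX pass (illustrative only; the real
# remove_detach lowering pass may differ).
import torch
from torch.fx import GraphModule


def remove_detach_sketch(gm: GraphModule) -> GraphModule:
    for node in list(gm.graph.nodes):
        # aten.detach is a no-op for inference graphs, so reroute its users
        # to its input and drop the node.
        if node.op == "call_function" and node.target == torch.ops.aten.detach.default:
            node.replace_all_uses_with(node.args[0])
            gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm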
That does not help here, because the graph does not explicitly contain detach ops for the pass to remove. Instead, detach is introduced by the decomposition at https://github.com/pytorch/pytorch/blob/main/torch/_decomp/decompositions.py#L2153. This might be due to %hook_result_3 = call_function[target=torch._dynamo.variables.tensor.prim_to_local](args = (%outputs_3,), kwargs = {}), where the DTensor is converted to a local tensor and the distributed operation might need to be detached.
if aten.detach in settings_aot_autograd["decompositions"]:
    del settings_aot_autograd["decompositions"][aten.detach]
# transpose key deleted since not desirable to lower it to permute
to_delete = {
Will this apply to all cases, not just NCCL?
You mean in the non-distributed example? I am not sure about that. I added this for the llama3 example because I was seeing issues in the model lowering: it was generating graph breaks at the wrong places, leading to the complex-input error. It can be applied to all cases if we decide we never want to lower transpose to permute.
Regarding the discussion:
- detach: remove_detach does not help since the graph does not explicitly contain detach ops for the pass to remove. Instead, detach is introduced by the decomposition at https://github.com/pytorch/pytorch/blob/main/torch/_decomp/decompositions.py#L2153. This might be due to %hook_result_3 = call_function[target=torch._dynamo.variables.tensor.prim_to_local](args = (%outputs_3,), kwargs = {}), where the DTensor is moved to a local tensor and the distributed operation needs to be detached. This shows up in tensor_parallel_simple_example.py.
- transpose: removing the transpose decomposition is mainly for tensor_parallel_llama3.py. The broad modifications I made to handle the complex numbers are:
  a. Modifying the placeholder node shape and type
  b. Modifying the inputs to the reshape and slice ops with complex inputs
  c. Replacing the complex TensorRT mul
  I see that if I decompose transpose to permute, the gpu_0 graph has the output of the complex tensor mul as complex64 or complex128, which then goes as input to the acc_* graph and causes the complex-input error. Keeping transpose in the graph confines the complex-input handling to the gpu_0 graph partition only.
  Regarding whether removing transpose from the decompositions affects the result: I would think not, since unlike detach this does not remove an op; we simply do not lower it to permute. But please share more insight if you think otherwise.
- Also, if we want this to be model-specific rather than apply to all models, I think the next step could be to expose it either in the UI or, as the non-distributed models do, by including it in the torch_disabled_decomposition dictionary, which applies to all models (a rough sketch of such an opt-out follows below). I mention the UI because we are talking about model-specific disabled decompositions. For now, since this part of the code applies only to the distributed models, it should be good to go.
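(The sketch below shows one way such a per-model opt-out could look; the EXTRA_DISABLED_DECOMPOSITIONS name and the shape of the setting are assumptions for illustration, not an existing Torch-TensorRT API.)

# Hypothetical sketch: filter the aot_autograd decomposition table against a
# user-supplied set of ops that should not be decomposed.
import torch

# Assumed name; in practice this could come from the UI / compilation settings.
EXTRA_DISABLED_DECOMPOSITIONS = {
    torch.ops.aten.detach.default,
    torch.ops.aten.transpose.int,
}


def filter_decompositions(decompositions: dict) -> dict:
    # Keep every decomposition except the ones the user disabled.
    return {
        op: fn
        for op, fn in decompositions.items()
        if op not in EXTRA_DISABLED_DECOMPOSITIONS
    }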
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-02-28 04:08:09.204507+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-02-28 04:08:34.494240+00:00
@@ -23,11 +23,11 @@
from conversion.harness import DispatchTestCase
class TestGatherNcclOpsConverter(DispatchTestCase):
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops(self, linear_layer_dim):
class DistributedGatherModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
self.fc = torch.nn.Linear(input_dim, input_dim)
@@ -48,11 +48,11 @@
inputs,
use_dynamo_tracer=True,
fuse_distributed_ops=True,
)
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops_scatter(self, linear_layer_dim):
class DistributedReduceScatterModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
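(Side note on the lint change itself: in Python, (8) is just the integer 8, not a one-element tuple, so [(8)] and [8] pass the same single argument to parameterized.expand; only a trailing comma makes a tuple.)

# (8) is plain grouping, not a tuple construction.
assert (8) == 8
assert (8,) == tuple([8])

# So these two decorators are equivalent:
#   @parameterized.expand([(8)])
#   @parameterized.expand([8])
# Either way the test method receives linear_layer_dim=8.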
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-02-28 19:22:33.272875+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-02-28 19:23:00.029705+00:00
@@ -15,11 +15,11 @@
from conversion.harness import DispatchTestCase
class TestGatherNcclOpsConverter(DispatchTestCase):
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops(self, linear_layer_dim):
class DistributedGatherModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
self.fc = torch.nn.Linear(input_dim, input_dim)
@@ -40,11 +40,11 @@
inputs,
use_dynamo_tracer=True,
fuse_distributed_ops=True,
)
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops_scatter(self, linear_layer_dim):
class DistributedReduceScatterModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
Force-pushed from 091c83f to 67b970e
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-02-28 19:24:59.719676+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-02-28 19:25:27.751500+00:00
@@ -15,11 +15,11 @@
from conversion.harness import DispatchTestCase
class TestGatherNcclOpsConverter(DispatchTestCase):
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops(self, linear_layer_dim):
class DistributedGatherModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
self.fc = torch.nn.Linear(input_dim, input_dim)
@@ -40,11 +40,11 @@
inputs,
use_dynamo_tracer=True,
fuse_distributed_ops=True,
)
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops_scatter(self, linear_layer_dim):
class DistributedReduceScatterModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
@@ -364,6 +364,15 @@ def forward(self, *inputs: torch.Tensor) -> torch.Tensor | Tuple[torch.Tensor, ...]:
    (i.contiguous() if isinstance(i, torch.Tensor) else torch.tensor(i).cuda())
    for i in inputs
]

for i, contiguous_input in enumerate(contiguous_inputs):
Does the C++ API not need these changes?
I am not clear on this aspect. Could you please let me know what would be required as part of this? I am running the distributed Python example and did not encounter a need for these changes there.
os.environ["MASTER_ADDR"] = "127.0.0.1" | ||
os.environ["MASTER_PORT"] = str(port) | ||
os.environ["TRTLLM_PLUGINS_PATH"] = ( | ||
find_repo_root() + "/lib/libnvinfer_plugin_tensorrt_llm.so" |
Assume a tmp path instead of some arbitrary location in the source tree
@@ -351,6 +351,7 @@ def generate_graph(
    enable_passes: bool,
    propagate_shapes: bool = False,
    settings: CompilationSettings = CompilationSettings(),
    fuse_distributed_ops: bool = False,
Why do we need this setting in the harness? Since it only applies to distributed, you should 1. have a test for the pass specifically, 2. have a test for converting the custom op without a pass and 3. have a test that applies the pass then converts the custom op
I had the same doubt actually.
- I can look into adding a test for the pass.
- Without the pass, it is not possible to reach the custom op:
gathered_tensor = torch.ops._c10d_functional.all_gather_into_tensor(x, world_size, group_name)
gathered_tensor = torch.ops._c10d_functional.wait_tensor(gathered_tensor)
together form the nccl_ops pattern, and fusing that pair is what the pass is required for (a rough sketch of the pattern is shown after this comment).
- That is what is done right now. Maybe applying the pass in the harness is not the right place; where would you suggest it go? Since we want to test only the converters, could the harness have a specific option such as
distributed=True?
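(Rough sketch of the pattern matching the fusion pass has to perform, illustrative only; the actual fuse_distributed_ops pass replaces each matched pair with a fused custom op, which is not shown here.)

import torch
from torch.fx import GraphModule, Node
from typing import List, Tuple


def find_allgather_wait_pairs(gm: GraphModule) -> List[Tuple[Node, Node]]:
    # Collect (all_gather_into_tensor, wait_tensor) pairs in the graph.
    pairs = []
    for node in gm.graph.nodes:
        if (
            node.op == "call_function"
            and node.target == torch.ops._c10d_functional.wait_tensor.default
            and isinstance(node.args[0], Node)
            and node.args[0].target
            == torch.ops._c10d_functional.all_gather_into_tensor.default
        ):
            # This adjacent pair is the NCCL gather pattern the converter
            # expects to see as a single fused op after the pass runs.
            pairs.append((node.args[0], node))
    return pairs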
The mpirun -n 1 --allow-run-as-root python test_distributed_simple_example.py run ideally covers all of it.
Actually, just setting enable_passes=True works in the converter test. Earlier, constant folding was giving an error, but that is resolved.
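(For reference, the converter test invocation then looks roughly like the sketch below, matching the final test_nccl_ops.py diff; the input shape is an assumption for illustration.)

# Sketch of the converter test call with the lowering passes enabled.
inputs = [torch.randn(1, linear_layer_dim).to("cuda")]  # shape assumed
self.run_test(
    DistributedGatherModel(linear_layer_dim).cuda(),
    inputs,
    use_dynamo_tracer=True,
    enable_passes=True,
)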
OS="$(uname -s)" | ||
ARCH="$(uname -m)" | ||
PYTHON_VERSION="$(python3 -c 'import sys; print(f"cp{sys.version_info.major}{sys.version_info.minor}")')" |
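(For reference, the PYTHON_VERSION line only builds the standard CPython wheel tag; the Python equivalent is:)

# Equivalent of the shell one-liner above: build the CPython wheel tag.
import sys

python_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(python_tag)  # e.g. cp310 on Python 3.10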
Why does it matter what python version the system is using? Can we just use the same .so for all python versions?
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-03-03 16:25:48.020598+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-03-03 16:26:14.671617+00:00
@@ -15,11 +15,11 @@
from conversion.harness import DispatchTestCase
class TestGatherNcclOpsConverter(DispatchTestCase):
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops(self, linear_layer_dim):
class DistributedGatherModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
self.fc = torch.nn.Linear(input_dim, input_dim)
@@ -40,11 +40,11 @@
inputs,
use_dynamo_tracer=True,
fuse_distributed_ops=True,
)
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops_scatter(self, linear_layer_dim):
class DistributedReduceScatterModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
I am not sure why lint is showing this; I see no error in the local pre-commit run.
Force-pushed from 0f6966f to ca8478a
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-03-03 17:06:44.889577+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-03-03 17:07:10.979519+00:00
@@ -15,11 +15,11 @@
from conversion.harness import DispatchTestCase
class TestGatherNcclOpsConverter(DispatchTestCase):
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops(self, linear_layer_dim):
class DistributedGatherModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
self.fc = torch.nn.Linear(input_dim, input_dim)
@@ -40,11 +40,11 @@
inputs,
use_dynamo_tracer=True,
fuse_distributed_ops=True,
)
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops_scatter(self, linear_layer_dim):
class DistributedReduceScatterModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
Force-pushed from ca8478a to 4b79bfb
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-03-03 18:15:05.376989+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/distributed/test_nccl_ops.py 2025-03-03 18:15:31.109367+00:00
@@ -15,11 +15,11 @@
from conversion.harness import DispatchTestCase
class TestGatherNcclOpsConverter(DispatchTestCase):
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops(self, linear_layer_dim):
class DistributedGatherModel(nn.Module):
def __init__(self, input_dim):
super().__init__()
self.fc = torch.nn.Linear(input_dim, input_dim)
@@ -40,11 +40,11 @@
inputs,
use_dynamo_tracer=True,
enable_passes=True,
)
- @parameterized.expand([(8)])
+ @parameterized.expand([8])
def test_nccl_ops_scatter(self, linear_layer_dim):
class DistributedReduceScatterModel(nn.Module):
def __init__(self, input_dim):
super().__init__()