
Enabled configurable auto Tensor Parallelism (TP) for the inference of diverse models #6553

Open · gyou2021 wants to merge 51 commits into master from configurable_autoTP

Conversation

@gyou2021 (Contributor) commented Sep 18, 2024

Auto TP in `auto_tp.py` handles Linear-type modules in emerging complex models. Three cases need handling:

1) The output of some Linear modules in a model requires an all-reduce after running on multiple HPU/GPU cards, but the names of those modules may differ from the ones recognized in `tp_parser()`.
2) The weight of some Linear modules in a model CANNOT be split across multiple HPU/GPU cards.
3) The weight of some Linear modules in a model should NOT be split across multiple HPU/GPU cards, because the all-gather needed afterward (to gather results from all cards) would degrade performance.

In case 1) the module's Linear type should change to LinearAllreduce in DeepSpeed; in cases 2) and 3) the module should keep the Linear type. A configurable auto TP is proposed to handle these cases easily: `tp_parser()` adds the Linear modules of case 1) (their name list is stored in the environment variable `DS_ALL_REDUCE_LINEAR_ITEMS`), and `_replace()` adds the Linear modules of cases 2) and 3) (their name list is stored in the environment variable `DS_KEEP_LINEAR_ITEMS`). Both environment variables are configurable, either directly in the environment or through a configuration file.

Take the Mixtral 8x7B model as an example: we add `w2` as LinearAllreduce and keep `gate` as Linear (`o_proj` is already a default DeepSpeed LinearAllreduce layer). Add the following to the main code:

```python
import os
os.environ["DS_ALL_REDUCE_LINEAR_ITEMS"] = "{'w2':'mixtral'}"
os.environ["DS_KEEP_LINEAR_ITEMS"] = "{'gate':'mixtral'}"
```

Original Mixtral model: (screenshot of the module tree omitted)

Mixtral model with auto TP: (screenshot of the module tree omitted)

@delock (Collaborator) commented Sep 19, 2024

Hi @gyou2021, I like the goal of avoiding repetition of the same logic from L296 to L315, but I am also concerned that models enabled by these lines will not run out of the box with this PR. This may not be friendly to self-helping users without access to proper BKC documentation for the various models.

Could `allReduceLinearItems` have an initial value as a built-in list, and then be prepended with the `os.environ` entries for runtime configurability? I think that if a model enabled via the environment is a public model, it should be contributed to the built-in list to provide an out-of-box (OOB) experience, right?
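A rough sketch of the pattern being suggested here (names and default contents are illustrative, not the PR's actual code), assuming the dict-literal string format from the PR description:

```python
import ast
import os

# Built-in defaults keep public models working out of the box; the environment
# variable only extends (or overrides) them at runtime.
DEFAULT_ALL_REDUCE_LINEAR_ITEMS = {"o_proj": "llama", "down_proj": "llama", "w2": "mixtral"}

def get_all_reduce_linear_items():
    items = dict(DEFAULT_ALL_REDUCE_LINEAR_ITEMS)
    env_value = os.environ.get("DS_ALL_REDUCE_LINEAR_ITEMS", "")
    if env_value:
        items.update(ast.literal_eval(env_value))  # e.g. "{'w2': 'mixtral'}"
    return items
```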

@gyou2021 posted a comment (marked as resolved).

@loadams (Collaborator) commented Jan 8, 2025

Hi @delock and @gyou2021 - what more needs to be done to complete this PR? Just a review/approval? Any other changes?

loadams self-assigned this on Jan 13, 2025
@delock (Collaborator) commented Jan 14, 2025

@loadams let me check with gyou on this PR status.

@gyou2021 (Contributor, Author) commented

> Hi @gyou2021, I like the goal of avoiding repetition of the same logic from L296 to L315, but I am also concerned that models enabled by these lines will not run out of the box with this PR. This may not be friendly to self-helping users without access to proper BKC documentation for the various models.
>
> Could `allReduceLinearItems` have an initial value as a built-in list, and then be prepended with the `os.environ` entries for runtime configurability? I think that if a model enabled via the environment is a public model, it should be contributed to the built-in list to provide an out-of-box (OOB) experience, right?

Sure. I updated the code so it runs out of the box. Thank you for your comments.

@delock (Collaborator) commented Jan 21, 2025

@loadams my questions are all resolved and I have no further questions for @gyou2021, thanks!

Yejing-Lai and others added 7 commits February 18, 2025 08:46
Add lm_head TP support when a checkpoint is not provided to deepspeed.init_inference().

---------

Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: Ma, Guokai <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
In some scenarios, some of the optimization flags for the HPU ops compiler can cause a significant performance degradation. Remove the flags until the issue is resolved.

Signed-off-by: gyou2021 <[email protected]>
The breaking change in transformers is huggingface/transformers#35235. We need to make changes to unpin the nv-a6000 workflow.

Signed-off-by: gyou2021 <[email protected]>
Using the keep_module_on_host config var lets us control whether the checkpoint weights loaded into model parameters are moved to the device or stay on the host.

---------

Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
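For reference, a minimal sketch of how the keep_module_on_host config var described above might be passed; the model id and tp_size are illustrative assumptions:

```python
import torch
import deepspeed
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf",
                                             torch_dtype=torch.bfloat16)

# keep_module_on_host=True keeps the loaded checkpoint weights in host memory
# instead of moving them to the accelerator during initialization.
engine = deepspeed.init_inference(model,
                                  tensor_parallel={"tp_size": 2},
                                  dtype=torch.bfloat16,
                                  keep_module_on_host=True,
                                  replace_with_kernel_inject=False)
```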
`warn` is deprecated, see
https://docs.python.org/3/library/logging.html#logging.Logger.warning

```DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead```

Signed-off-by: gyou2021 <[email protected]>

**Summary**
This PR adds an `extra_repr` method to some Linear classes so that additional info is printed when printing such modules. It is useful for debugging.
Affected modules:
- LinearLayer
- LinearAllreduce
- LmHeadLinearAllreduce

The `extra_repr` method gives the following info:
- in_features
- out_features
- bias (true or false)
- dtype

**Example**
Print the llama-2-7b model on rank 0 after `init_inference` with world size = 2.
Previously we only got the class names of these modules:
```
InferenceEngine(
  (module): LlamaForCausalLM(
    (model): LlamaModel(
      (embed_tokens): Embedding(32000, 4096)
      (layers): ModuleList(
        (0-31): 32 x LlamaDecoderLayer(
          (self_attn): LlamaSdpaAttention(
            (q_proj): LinearLayer()
            (k_proj): LinearLayer()
            (v_proj): LinearLayer()
            (o_proj): LinearAllreduce()
            (rotary_emb): LlamaRotaryEmbedding()
          )
          (mlp): LlamaMLP(
            (gate_proj): LinearLayer()
            (up_proj): LinearLayer()
            (down_proj): LinearAllreduce()
            (act_fn): SiLU()
          )
          (input_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
          (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
        )
      )
      (norm): LlamaRMSNorm((4096,), eps=1e-05)
      (rotary_emb): LlamaRotaryEmbedding()
    )
    (lm_head): LmHeadLinearAllreduce()
  )
)
```
Now we get more useful info:
```
InferenceEngine(
  (module): LlamaForCausalLM(
    (model): LlamaModel(
      (embed_tokens): Embedding(32000, 4096)
      (layers): ModuleList(
        (0-31): 32 x LlamaDecoderLayer(
          (self_attn): LlamaSdpaAttention(
            (q_proj): LinearLayer(in_features=4096, out_features=2048, bias=False, dtype=torch.bfloat16)
            (k_proj): LinearLayer(in_features=4096, out_features=2048, bias=False, dtype=torch.bfloat16)
            (v_proj): LinearLayer(in_features=4096, out_features=2048, bias=False, dtype=torch.bfloat16)
            (o_proj): LinearAllreduce(in_features=2048, out_features=4096, bias=False, dtype=torch.bfloat16)
            (rotary_emb): LlamaRotaryEmbedding()
          )
          (mlp): LlamaMLP(
            (gate_proj): LinearLayer(in_features=4096, out_features=5504, bias=False, dtype=torch.bfloat16)
            (up_proj): LinearLayer(in_features=4096, out_features=5504, bias=False, dtype=torch.bfloat16)
            (down_proj): LinearAllreduce(in_features=5504, out_features=4096, bias=False, dtype=torch.bfloat16)
            (act_fn): SiLU()
          )
          (input_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
          (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
        )
      )
      (norm): LlamaRMSNorm((4096,), eps=1e-05)
      (rotary_emb): LlamaRotaryEmbedding()
    )
    (lm_head): LmHeadLinearAllreduce(in_features=2048, out_features=32000, bias=False, dtype=torch.bfloat16)
  )
)
```
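A hedged sketch of what an `extra_repr` along these lines might look like (illustrative, not the PR's exact implementation; assumes the weight is stored as `(out_features, in_features)` like `torch.nn.Linear`):

```python
import torch

class LinearLayerSketch(torch.nn.Module):
    """Illustrative stand-in for a DeepSpeed linear module holding a (sharded) weight."""

    def __init__(self, weight: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        self.weight = weight
        self.bias = bias

    def extra_repr(self) -> str:
        # Printed inside the parentheses when the module (or a parent model) is printed.
        out_features, in_features = self.weight.shape
        return "in_features={}, out_features={}, bias={}, dtype={}".format(
            in_features, out_features, self.bias is not None, self.weight.dtype)

print(LinearLayerSketch(torch.empty(2048, 4096, dtype=torch.bfloat16)))
# LinearLayerSketch(in_features=4096, out_features=2048, bias=False, dtype=torch.bfloat16)
```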

Signed-off-by: gyou2021 <[email protected]>
xylian86 and others added 19 commits February 18, 2025 08:46
…6974)

As discussed in
[PR-6918](deepspeedai#6918), padding can
occur on multiple ranks with large DP degrees.

For example, with:
- Flattened tensor size: 266240
- DP degree: 768
- Alignment: 1536
- Required padding: 1024 (1536 * 174 - 266240)
- Per-rank partition size: 348 (1536 * 174 / 768)
- The padding occurs on the last three ranks.

This PR removes the single-rank padding assumption for more general
cases.
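The arithmetic above can be reproduced with a short computation (a sketch; the names are illustrative):

```python
flat_numel = 266240
dp_degree = 768
alignment = 1536

padded_numel = -(-flat_numel // alignment) * alignment  # round up: 1536 * 174 = 267264
padding = padded_numel - flat_numel                      # 1024
partition_size = padded_numel // dp_degree               # 348

# Padding fills the tail of the flattened tensor, so it spills across the
# last ceil(padding / partition_size) ranks rather than a single rank.
ranks_with_padding = -(-padding // partition_size)       # ceil(1024 / 348) = 3
print(padding, partition_size, ranks_with_padding)       # 1024 348 3
```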

---------

Co-authored-by: Sam Foreman <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
…deepspeedai#6967)

- Issues with nv-sd updates, will follow up with a subsequent PR

Signed-off-by: gyou2021 <[email protected]>
The NVIDIA Blackwell GPU generation has number 10. The SM code and architecture should be `100`, but the current code generates `1.` because it expects a 2-character string.

This change modifies the logic to treat the value as a string containing a `.`: the string is split on the dot and the resulting parts are used.
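A minimal sketch of the described parsing approach (illustrative names, not the repository's actual code), assuming compute-capability strings such as "8.0" or "10.0":

```python
def arch_code(compute_capability: str) -> str:
    """Turn a compute-capability string such as '8.0' or '10.0' into an SM code.

    Splitting on '.' (instead of taking fixed character positions) keeps
    Blackwell's '10.0' from being truncated to '1.'.
    """
    major, minor = compute_capability.split(".")
    return major + minor

assert arch_code("8.0") == "80"
assert arch_code("9.0") == "90"
assert arch_code("10.0") == "100"
```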

Signed-off-by: Fabien Dupont <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Fabien Dupont <[email protected]>
Co-authored-by: Fabien Dupont <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
Signed-off-by: gyou2021 <[email protected]>

1. Update Intel oneAPI basekit to 2025.0
2. Update torch/ipex/oneccl to 2.5

Signed-off-by: gyou2021 <[email protected]>
Same as [this PR](deepspeedai#6922) ([affeb88](deepspeedai@affeb88)).
I noticed the CI updated the DCO check recently. Using the suggested rebase method for sign-off would reintroduce many conflicts, so I opted for a squash merge with sign-off instead. Thanks :)

Signed-off-by: inkcherry <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
…ai#6989)

Those files contain code that runs at import time, so on systems that do not support triton but have triton installed, this causes issues.

In general, I think it is better to import triton only when it is installed and supported.
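A minimal sketch of the guarded-import pattern described here (illustrative, not the PR's exact code):

```python
# Import triton only when it is both installed and usable on this system;
# fall back gracefully otherwise, so merely having triton installed cannot break imports.
try:
    import triton  # noqa: F401
    HAS_TRITON = True
except (ImportError, RuntimeError):
    HAS_TRITON = False

def triton_kernels_available() -> bool:
    # Callers check this flag instead of importing triton at module import time.
    return HAS_TRITON
```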

Signed-off-by: Omar Elayan <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Co-authored-by: Stas Bekman <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
Fix deepspeedai#7014
Avoid naming collision on `partition()`

---------

Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
BUGFIX for Apple Silicon hostname
deepspeedai#6497

---------

Signed-off-by: Fabien Dupont <[email protected]>
Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: inkcherry <[email protected]>
Signed-off-by: Roman Fitzjalen <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Co-authored-by: Fabien Dupont <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Liangliang Ma <[email protected]>
Co-authored-by: inkcherry <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
- Update existing workflows that use cu121 to cu124. Note that where we download the latest torch, we will now get torch 2.6 rather than the latest torch 2.5 provided with CUDA 12.1.
- Note that nv-nightly is currently failing in master due to unrelated errors, so it can be ignored in this PR (nv-nightly was tested locally, where it passes with both 12.1 and 12.4).

---------

Signed-off-by: Fabien Dupont <[email protected]>
Signed-off-by: Logan Adams <[email protected]>
Signed-off-by: Olatunji Ruwase <[email protected]>
Signed-off-by: inkcherry <[email protected]>
Signed-off-by: Omar Elayan <[email protected]>
Co-authored-by: Fabien Dupont <[email protected]>
Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Liangliang Ma <[email protected]>
Co-authored-by: inkcherry <[email protected]>
Co-authored-by: Omar Elayan <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
This change is required to successfully build the fp_quantizer extension on ROCm.

---------

Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
cc @tjruwase @jomayeri

---------

Co-authored-by: root <root@ftqtmec25000000.taxzvufipdhelhupulxcbvr15f.ux.internal.cloudapp.net>
Signed-off-by: gyou2021 <[email protected]>

Fix deepspeedai#7029
- Add a Chinese blog for DeepSpeed on Windows
- Fix formatting in README.md

Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
Add compile support for the AIO library on AMD GPUs.

---------

Co-authored-by: Olatunji Ruwase <[email protected]>
Co-authored-by: Logan Adams <[email protected]>
Signed-off-by: gyou2021 <[email protected]>
gyou2021 force-pushed the configurable_autoTP branch from e7f6e5e to dbb3b09 on February 18, 2025 08:47
@delock (Collaborator) commented Feb 20, 2025

Hi @loadams, is this PR under review? Thanks!

@loadams (Collaborator) commented Feb 20, 2025

> Hi @loadams, is this PR under review? Thanks!

Hi @delock - sorry for the delay on this, we will work on getting it reviewed.

loadams removed review requests for arashb and awan-10 on February 20, 2025 15:28