
🚀 DarwinLM: Evolutionary Structured Pruning for Language Models

🌟 ArXiv Preprint

This repository contains the implementation of evolutionary structured pruning for language models, as introduced in our paper. DarwinLM builds upon an evolutionary search process, generating multiple offspring models in each generation through mutation, and selecting the fittest for survival.
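To make the search loop concrete, here is a minimal conceptual sketch in Python of mutation and selection over per-unit compression levels. It is an illustration only, under assumed names (fitness, num_offspring, etc.), not the repository's actual implementation; fitness would typically be something like the calibration loss of the stitched candidate model.

import random

# Conceptual sketch of an evolutionary structured-pruning search (illustrative only).
# A candidate assigns one compression level to each prunable unit; each generation
# mutates the parent into several offspring and keeps the fittest candidate.
def evolutionary_search(num_units, num_levels, fitness, generations=200, num_offspring=8):
    parent = [num_levels // 2] * num_units  # start from a uniform assignment
    for _ in range(generations):
        offspring = []
        for _ in range(num_offspring):
            child = list(parent)
            # Mutation: shift compression from one unit to another so the
            # overall sparsity budget stays (roughly) constant.
            i, j = random.sample(range(num_units), 2)
            if child[i] < num_levels - 1 and child[j] > 0:
                child[i] += 1
                child[j] -= 1
            offspring.append(child)
        # Selection: keep the candidate with the lowest fitness value
        # (e.g., calibration loss of the stitched model).
        parent = min(offspring + [parent], key=fitness)
    return parent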

We provide six model variants on Hugging Face:


⚙️ Installation

To set up the environment, ensure you have the necessary dependencies installed:

conda env create -f environment.yml
conda activate darwinlm

✂️ Database Preparation

Before running the search step, you need to generate the structured pruning database by running:

# For llama-2-7B
bash scripts/ziplm_llama2-7B.sh

# For llama-3.1-8B
bash scripts/ziplm_llama3.1-8B.sh

# For Qwen2.5-14B-Ins
bash scripts/ziplm_qwen2.5-14B-instruct.sh

Note: Currently, you need to manually specify the number of columns to remove at each compression step.
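For intuition on what these column counts look like, the sketch below derives evenly spaced per-step counts from Llama-2-7B's layer shapes (MLP intermediate size 11008; 32 attention heads of dimension 128). The number of levels and the resulting values are illustrative assumptions, not the settings used in the ziplm_* scripts.

# Illustrative only: evenly spaced column counts per compression step for Llama-2-7B.
num_levels = 8                 # hypothetical number of compression steps
mlp_width = 11008              # Llama-2-7B MLP intermediate size
num_heads, head_dim = 32, 128  # Llama-2-7B attention geometry

mlp_cols_per_step = mlp_width // num_levels                 # 1376 MLP columns per step
attn_cols_per_step = (num_heads // num_levels) * head_dim   # 512 columns = 4 whole heads per step
print(mlp_cols_per_step, attn_cols_per_step)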

🚀 Evolutionary Search

To perform the evolutionary structured pruning search, run scripts/struct_prune_search.sh:

# This example is for Llama-2-7B; to use other models, change the model name and set COMPR_PATH to the path of the generated database
bash scripts/struct_prune_search.sh

This conducts a structured pruning search based on the defined configuration. After the search finishes, a .txt configuration file is generated. You can then stitch the final pruned model together from the database and this .txt file.
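Conceptually, stitching copies the pre-pruned weights selected by the search from the database into a single model. The Python sketch below only illustrates that idea; the file layout and helper names are hypothetical and do not match the repository's actual formats.

import torch

# Hypothetical stitching sketch: the search output maps each prunable unit to a
# compression level, and the database stores pre-pruned weights per (unit, level).
def stitch(model, search_output_txt, database_path):
    with open(search_output_txt) as f:
        # assumed format: one "unit_name level" pair per line
        levels = dict(line.split() for line in f if line.strip())
    modules = dict(model.named_modules())
    for unit_name, level in levels.items():
        # load the pre-pruned weights for this unit at its chosen level
        weights = torch.load(f"{database_path}/{unit_name}/level_{level}.pt")
        modules[unit_name].load_state_dict(weights)
    return model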

🛠 Post-Training

After pruning, you can further fine-tune the model on the Fineweb-Edu dataset using the llm-foundry repository. See the parameter settings in our paper for replication.
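For reference, Fineweb-Edu can be streamed through the Hugging Face datasets library as shown below; the subset name is an assumption (pick whichever subset matches your token budget), and the actual post-training recipe lives in llm-foundry configs as described in the paper.

from datasets import load_dataset

# Stream a Fineweb-Edu subset when preparing post-training data
# ("sample-10BT" is one published subset; adjust to your token budget).
ds = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-10BT",
                  split="train", streaming=True)
for example in ds.take(2):
    print(example["text"][:200])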

📊 Evaluation

First, install lm-evaluation-harness by following its installation guidelines.

  • Option 1: To replicate the results in our paper, use the weights provided above, which can be loaded directly with the transformers package, and run:
# Set MODEL_ID to the model you want to evaluate
bash scripts/run_lmeval_hf.sh
  • Option 2: To evaluate your own searched structure, pass the configuration file produced by the evolutionary search (evo_struct_prune_search.py) as sparse_config_path, set database_path to the database path, and run:
bash scripts/run_lmeval_config.sh

Note: Currently, the transformers package does not support head pruning for Llama and Qwen models. If you install the package from the official repository, set model_shrink=false, which keeps the pruned weights as zeros instead of actually removing them. If you want the actual speedup, install transformers from source using our implementation.
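As a rough Python equivalent of Option 1, the lm-evaluation-harness API can evaluate a released Hugging Face checkpoint directly; the model ID placeholder and task list below are assumptions rather than the exact settings in scripts/run_lmeval_hf.sh.

import lm_eval

# Evaluate a released checkpoint with the lm-evaluation-harness Python API.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=MODEL_ID,dtype=bfloat16",  # replace MODEL_ID with a released checkpoint
    tasks=["arc_easy", "hellaswag", "winogrande"],    # placeholder task list
    batch_size=8,
)
print(results["results"])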

📖 Citation

If you find this work useful, please cite our paper:

@article{tang2025darwinlm,
  title={DarwinLM: Evolutionary Structured Pruning of Large Language Models},
  author={Tang, Shengkun and Sieberling, Oliver and Kurtic, Eldar and Shen, Zhiqiang and Alistarh, Dan},
  journal={arXiv preprint arXiv:2502.07780},
  year={2025}
}

For any issues or questions, please open an issue or contact us directly. 🚀
