Practical course about Large Language Models.
[SIGIR'24] The official implementation code of MOELoRA.
Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"
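A quick way to experiment with weight-decomposed adaptation is the Hugging Face peft library rather than the official DoRA repo; the sketch below assumes a peft version that exposes the `use_dora` flag, and the model and hyperparameters are illustrative placeholders.

```python
# Minimal sketch: DoRA-style adaptation via Hugging Face peft (assumes a
# peft release that supports `use_dora`); model and ranks are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

config = LoraConfig(
    r=16,                                   # low-rank dimension
    lora_alpha=32,                          # scaling factor
    target_modules=["q_proj", "v_proj"],    # attention projections in OPT
    use_dora=True,                          # decompose weights into magnitude + direction
)
model = get_peft_model(base, config)
model.print_trainable_parameters()          # only the adapter parameters are trainable
```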
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
CRE-LLM: A Domain-Specific Chinese Relation Extraction Framework with Fine-tuned Large Language Model
Memory-efficient fine-tuning; supports fine-tuning a 7B model within 24 GB of GPU memory
Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
Official repository of my book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face"
High-Quality Image Generation Model - Powered by NVIDIA A100
A Python library for efficient and flexible cycle-consistency training of transformer models via iterative back-translation. Memory- and compute-efficient techniques such as PEFT adapter switching allow for 7.5x larger models to be trained on the same hardware.
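One way to picture adapter switching is with the Hugging Face peft API: a single frozen backbone carries two LoRA adapters, one per translation direction, and training alternates between them. This is a hedged sketch, not the library's actual design; the backbone, adapter names, and target modules are assumptions.

```python
# Hedged sketch of PEFT adapter switching: one frozen backbone, two LoRA
# adapters swapped per back-translation direction (names are illustrative).
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

backbone = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["q", "v"],
                 task_type="SEQ_2_SEQ_LM")

model = get_peft_model(backbone, cfg, adapter_name="forward")  # e.g. en -> fr
model.add_adapter("backward", cfg)                             # e.g. fr -> en

model.set_adapter("forward")    # only the "forward" adapter is active/updated
# ... training step for the forward direction ...
model.set_adapter("backward")   # switch adapters without reloading the backbone
# ... training step for the backward direction ...
```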
AI Community Tutorial, including: LoRA/QLoRA LLM fine-tuning, training GPT-2 from scratch, generative model architecture, content safety and control implementation, model distillation techniques, DreamBooth techniques, transfer learning, and more, for practice with real projects!
Mistral and Mixtral (MoE) from scratch
Fine-tune StarCoder2-3B for SQL tasks on limited resources with LoRA. LoRA reduces the number of trainable parameters, enabling faster training on smaller datasets. StarCoder2 is a family of code generation models (3B, 7B, and 15B), trained on 600+ programming languages from The Stack v2 and some natural language text such as Wikipedia, arXiv, and GitHub issues.
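A minimal LoRA setup for a code model along these lines might look like the sketch below. This is not the repo's exact recipe: the targeted attention projections, ranks, and dtype are assumptions.

```python
# Minimal LoRA fine-tuning sketch for a code LLM (module names and
# hyperparameters are illustrative assumptions, not the repo's recipe).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "bigcode/starcoder2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```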
EDoRA: Efficient Weight-Decomposed Low-Rank Adaptation via Singular Value Decomposition
PEFT is a wonderful tool that enables training very large models in low-resource environments. Quantization and PEFT will enable widespread adoption of LLMs.
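The quantization-plus-PEFT combination typically means loading a frozen base model in 4-bit and training only small LoRA adapters on top (QLoRA-style). Below is a hedged sketch using transformers, bitsandbytes, and peft; the model name and hyperparameters are placeholders.

```python
# Hedged sketch of quantization + PEFT (QLoRA-style): the base model is
# loaded in 4-bit and frozen, only LoRA adapters are trained.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # placeholder 7B base model
    quantization_config=bnb_cfg,
)
model = prepare_model_for_kbit_training(model)  # enables gradient checkpointing, input grads

lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
```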
NTU Deep Learning for Computer Vision 2023 course
Finetuning Large Language Models
In this repo I will share notes on various topics in NLP and LLMs.