Practical course about Large Language Models.

Updated Feb 21, 2025 - Jupyter Notebook
This repository provides a Jupyter notebook demonstrating parameter-efficient fine-tuning (PEFT) with LoRA on Hugging Face models.
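The core idea behind LoRA, as demonstrated in such notebooks, is to freeze the pretrained weights and learn a low-rank update instead. A minimal numpy sketch of that idea follows; the dimensions `d_out`, `d_in`, the rank `r`, and the scaling factor `alpha` are illustrative choices, not values from the repository.

```python
import numpy as np

# LoRA sketch: instead of updating the full weight W (d_out x d_in),
# train two small matrices B (d_out x r) and A (r x d_in) with
# rank r << min(d_out, d_in). The effective weight is
# W_eff = W + (alpha / r) * B @ A.
# All sizes here are illustrative assumptions.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 128, 4, 8

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init

# At initialization B = 0, so the adapted model equals the base model.
delta = (alpha / r) * (B @ A)
assert np.allclose(W + delta, W)

# After training, B is nonzero, but the update still has rank at most r,
# so only r * (d_in + d_out) parameters are trained instead of d_in * d_out.
B = rng.standard_normal((d_out, r))
delta = (alpha / r) * (B @ A)
print("update rank:", np.linalg.matrix_rank(delta))
print("trainable params:", r * (d_in + d_out), "vs full:", d_in * d_out)
```

In the Hugging Face `peft` library the same structure is configured through `LoraConfig` (e.g. the `r` and `lora_alpha` parameters) rather than built by hand.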
This repository contains a notebook for fine-tuning the meta-llama/Llama-3.2-3B-Instruct model (or any other generative language model) using Quantized LoRA (QLoRA) for sentiment classification on the Arabic HARD dataset.
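QLoRA adds a second ingredient on top of LoRA: the frozen base weights are stored in 4-bit precision and dequantized on the fly, while the adapter matrices stay in higher precision. Real QLoRA uses the NF4 data type with per-block scaling; the sketch below is a deliberately simplified symmetric absmax int4 round-trip, just to illustrate why 4-bit storage is cheap and how much reconstruction error it introduces.

```python
import numpy as np

# Simplified absmax int4 quantization (NOT the NF4 format QLoRA actually
# uses): map floats into the symmetric integer range [-7, 7] with a single
# per-tensor scale, then dequantize by multiplying the scale back.

def quantize_int4(w):
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(256).astype(np.float32)

q, scale = quantize_int4(w)
w_hat = dequantize_int4(q, scale)

# Rounding error is bounded by half a quantization step.
print("max reconstruction error:", np.abs(w - w_hat).max())
print("bound (scale / 2):", scale / 2)
```

In practice this step is handled by `bitsandbytes` via the `load_in_4bit` options when loading the model, and LoRA adapters are then attached with `peft` exactly as in the non-quantized case.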
This repository contains experiments on fine-tuning LLMs (Llama, Llama 3.1, Gemma). It includes notebooks for model tuning, data preprocessing, and hyperparameter optimization to enhance model performance.