Fine-tune Llama-2-13b on a single GPU on custom data.

In this tutorial, we will walk through each step of fine-tuning the Llama-2-13b model on a single GPU. I’ll be using a Colab notebook, but you can use your local machine; it just needs around 12 GB of VRAM.

The required libraries can be installed by running this in your notebook.

!pip install -q transformers trl peft huggingface_hub datasets bitsandbytes accelerate

First, log in to your Hugging Face account.

from huggingface_hub import login
login("<your token here>")

Loading the tokenizer.

model_id = "meta-llama/Llama-2-13b-chat-hf"
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Llama-2 has no pad token by default, so reuse the EOS token for padding
tokenizer.pad_token = tokenizer.eos_token

Now we will load the model in its quantized (4-bit) form. This reduces the memory required to fit the model, so it can run on a single GPU.

bnb_config = BitsAndBytesConfig(load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False)

If you have a bit more GPU memory to play with, you can load the model in 8-bit instead. Tune this configuration based on your hardware specifications, as in the sketch below.
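As a minimal sketch (the variable name bnb_config_8bit is just illustrative), an 8-bit configuration looks like this:

# Alternative: 8-bit quantization, which needs more VRAM than 4-bit but less than fp16
bnb_config_8bit = BitsAndBytesConfig(load_in_8bit=True)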

# use_cache conflicts with gradient checkpointing, so we disable it
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, use_cache=False)

The lines below prepare the model for 4-bit or 8-bit (k-bit) training; without this step, you will get an error when training a quantized model.

from peft import prepare_model_for_kbit_training, get_peft_model, LoraConfig, TaskType

model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

Then you define your LoRA config. There are mainly two parameters to play around with: the rank (r) and lora_alpha. For more details, you can read about the parameters here.

peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM, 
    inference_mode=False, 
    r=64, 
    lora_alpha=32, 
    lora_dropout=0.1,
)
model = get_peft_model(model, peft_config)

The cell below is a helper function that shows how many of the model's parameters are trainable.

def print_trainable_parameters(model):
    """
    Prints the number of trainable parameters in the model.
    """
    trainable_params = 0
    all_param = 0
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
    )

print_trainable_parameters(model)

>>> trainable params: 52428800 || all params: 6724408320 || trainable%: 0.7796790067620403

We can see that with LoRA, only a small fraction of the parameters (under 1%) need to be trained.

To prepare your data, you can have it in almost any form you want, as long as it is loaded with the datasets library. You can then pass a formatting function to the trainer that combines the text parts of each example into a single string.
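For illustration (your own custom dataset goes here; "imdb" is just a stand-in public dataset that happens to have a text column), loading the data with the datasets library looks like this:

from datasets import load_dataset

# Replace "imdb" with your own custom dataset; it is only an example with a "text" column
dataset = load_dataset("imdb", split="train")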

Here you can change the training configuration. For LoRA you can start with a higher learning rate, since the original weights are frozen and you don’t have to worry about catastrophic forgetting. The arguments you will want to play around with are per_device_train_batch_size and gradient_accumulation_steps: if you run out of memory, lower per_device_train_batch_size and increase gradient_accumulation_steps (with the values below, the effective batch size is 8 × 4 = 32).

max_seq_length = 512

from transformers import TrainingArguments
from trl import SFTTrainer

output_dir = "./results"
optim = "paged_adamw_32bit"

training_args = TrainingArguments(
    output_dir=output_dir,
    overwrite_output_dir=True,
    optim=optim,
    learning_rate=1e-4,
    logging_steps=10,
    max_steps=300,
    warmup_ratio=0.3,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    save_total_limit=5,
    fp16=True,
)

Here is an example of a formatting function. My data already had a text field containing all of the text.

def format_function(example):
    # The dataset already has a 'text' column, so just return it
    return example['text']

But if you don’t have a single text field, you can write the function so that it joins the relevant columns into one string, as in the sketch below.
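As a sketch, assuming hypothetical instruction and response columns (adjust the names to your schema), and assuming your trl version passes batched examples to the formatting function, it could look like this:

def format_combined(examples):
    # Join the (hypothetical) instruction and response columns into one string per example
    return [i + "\n" + r for i, r in zip(examples['instruction'], examples['response'])]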

Now we define the trainer.

from trl import SFTTrainer
peft_trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_args,
    formatting_func=format_function)

peft_trainer.train()

Once the model has been trained, you can store it locally or push it to the Hugging Face Hub.
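For example (the Hub repo name below is just a placeholder):

# Save the LoRA adapter locally
peft_trainer.save_model(output_dir)

# Or push the adapter to the Hugging Face Hub
model.push_to_hub("<your-username>/llama-2-13b-custom-lora")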

Hope this tutorial cleared up any doubts you had around fine-tuning LLMs on a single GPU.
