
🎬 Fine-Tuned BERT with PEFT and LoRA

This project fine-tunes a BERT-based model for binary sentiment classification on the IMDB movie reviews dataset using Parameter-Efficient Fine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) — enabling high performance with minimal computational overhead.


🧠 Introduction

The goal of this project is to adapt DistilBERT for sentiment analysis — classifying IMDB reviews as positive or negative.
Instead of fine-tuning the entire network, we use PEFT with LoRA to update only a small subset of parameters, which yields faster training and lower memory consumption.


🚀 Key Techniques

🔹 Parameter-Efficient Fine-Tuning (PEFT)

PEFT optimizes large language models by training only select parameters while keeping most of the model frozen.
This is particularly useful when:

  • You have limited computational resources.
  • You need to fine-tune large models quickly for new tasks.

🔹 Low-Rank Adaptation (LoRA)

LoRA is a specific PEFT method that injects trainable low-rank matrices into pre-trained model layers.
These low-rank updates are lightweight but powerful, allowing efficient adaptation without retraining the entire model.

LoRA Configuration:

| Parameter      | Value     |
|----------------|-----------|
| r (rank)       | 4         |
| lora_alpha     | 32        |
| lora_dropout   | 0.01      |
| target_modules | ['q_lin'] |

Together, PEFT + LoRA enable faster convergence, lower memory footprint, and high accuracy even with small batch sizes.
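
A minimal sketch of this configuration using the 🤗 `peft` library (it assumes a DistilBERT sequence-classification model has already been loaded as `model`):

```python
from peft import LoraConfig, get_peft_model, TaskType

# LoRA configuration matching the table above
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,   # sequence classification task
    r=4,                          # rank of the low-rank update matrices
    lora_alpha=32,                # scaling factor applied to the LoRA updates
    lora_dropout=0.01,            # dropout applied inside the LoRA layers
    target_modules=["q_lin"],     # DistilBERT's query projection layers
)

# Wrap the frozen base model with trainable LoRA adapters
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small fraction of trainable parameters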


⚙️ Model Training Pipeline

1️⃣ Data Preparation

  • Load the IMDB dataset.
  • Randomly sample training and validation subsets.
  • Tokenize text using the BERT tokenizer.
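
A minimal sketch of these steps with 🤗 `datasets` and the DistilBERT tokenizer (the subset sizes and random seed are illustrative assumptions, not values from the script):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Load the IMDB dataset and sample small train/validation subsets
dataset = load_dataset("imdb")
train_ds = dataset["train"].shuffle(seed=42).select(range(1000))
val_ds = dataset["test"].shuffle(seed=42).select(range(1000))

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate reviews to the model's maximum sequence length
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = train_ds.map(tokenize, batched=True)
val_ds = val_ds.map(tokenize, batched=True)
```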

2️⃣ Model Initialization

  • Base model: distilbert-base-uncased
  • Task: Sequence Classification with labels Positive and Negative.
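
A short sketch of this initialization, with the label names mapped explicitly:

```python
from transformers import AutoModelForSequenceClassification

id2label = {0: "Negative", 1: "Positive"}
label2id = {"Negative": 0, "Positive": 1}

# DistilBERT base model with a 2-class classification head
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,
    id2label=id2label,
    label2id=label2id,
)
```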

3️⃣ Training Configuration

| Hyperparameter      | Value             |
|---------------------|-------------------|
| Learning Rate       | 1e-3              |
| Batch Size          | 4                 |
| Epochs              | 1                 |
| Evaluation Strategy | End of each epoch |
| Save Strategy       | End of each epoch |
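
A sketch of these hyperparameters expressed as 🤗 `TrainingArguments` (the `output_dir` name is an assumption):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-lora-imdb",   # hypothetical checkpoint directory
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=1,
    evaluation_strategy="epoch",  # renamed to eval_strategy in recent Transformers releases
    save_strategy="epoch",        # save a checkpoint at the end of each epoch
)
```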

4️⃣ Training

The model is fine-tuned using the 🤗 Transformers Trainer API, with PEFT providing the parameter-efficient updates.
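
A minimal sketch of the training call, reusing the `model`, `training_args`, tokenized datasets, and `tokenizer` from the earlier steps:

```python
from transformers import Trainer, DataCollatorWithPadding

# Dynamic padding per batch keeps memory usage low with a batch size of 4
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

trainer = Trainer(
    model=model,               # PEFT-wrapped DistilBERT with LoRA adapters
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    tokenizer=tokenizer,
    data_collator=data_collator,
)

trainer.train()
```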


🧩 Setup Instructions

1️⃣ Install Dependencies

```bash
pip install datasets evaluate transformers[sentencepiece]
pip install accelerate -U
pip install peft
```

2️⃣ Run the Script

```bash
python "FineTunedBERTwithPEFT&LoRA.py"
```

The filename is quoted so the shell does not interpret the & character.

3️⃣ Evaluate the Model

Use the validation set to compute metrics such as accuracy, precision, recall, and F1-score.
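
One way to compute these metrics is a `compute_metrics` function built on the 🤗 `evaluate` library and passed to the `Trainer` (a sketch; the exact metrics reported by the script may differ):

```python
import numpy as np
import evaluate

# Load the individual metric modules
accuracy = evaluate.load("accuracy")
precision = evaluate.load("precision")
recall = evaluate.load("recall")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=predictions, references=labels)["accuracy"],
        "precision": precision.compute(predictions=predictions, references=labels)["precision"],
        "recall": recall.compute(predictions=predictions, references=labels)["recall"],
        "f1": f1.compute(predictions=predictions, references=labels)["f1"],
    }

# Pass compute_metrics=compute_metrics when constructing the Trainer,
# then call trainer.evaluate() to score the validation set.
```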

📊 Expected Outcome

  • Strong accuracy on sentiment classification while training only a small fraction of the model's parameters
  • Lower GPU memory consumption and faster training time compared to full fine-tuning
  • Demonstrates practical integration of LoRA adapters into transformer-based architectures
