# FineTuning-Llama2-with-QLora

## Setup

- For our experiment we will need `accelerate`, `peft`, `transformers`, `datasets`, and `trl` to leverage the recent `SFTTrainer`. We will use `bitsandbytes` to quantize the base model to 4-bit precision. We will also install `einops`, as it is required to load Falcon models. The install command and a short usage sketch are shown below.
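
A typical way to install these dependencies (exact versions are not pinned here; pin them if you need a reproducible environment):

```bash
pip install -q -U accelerate peft bitsandbytes transformers datasets trl einops
```

The following is a minimal, illustrative sketch of how these pieces fit together for QLoRA fine-tuning: the base model is loaded in 4-bit via `bitsandbytes`, a LoRA adapter is configured with `peft`, and training is driven by TRL's `SFTTrainer`. The model ID, dataset, and hyperparameters below are placeholders rather than this repository's actual settings, and the exact `SFTTrainer` arguments vary across `trl` versions (newer releases move some of them into `SFTConfig`).

```python
# Illustrative QLoRA sketch: 4-bit base model + LoRA adapter + SFTTrainer.
# Model ID, dataset, and hyperparameters are assumptions, not this repo's config.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint

# 4-bit NF4 quantization handled by bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter configuration (rank/alpha are common starting values)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Example instruction-tuning dataset; substitute your own
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # column holding the training text
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        max_steps=100,
        logging_steps=10,
    ),
)
trainer.train()
```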