---
library_name: peft
license: cc-by-4.0
datasets:
- iamshnoo/alpaca-cleaned-persian
language:
- fa
- en
metrics:
- accuracy
---

This repository contains the PEFT adapter weights only; the base model is LLaMA 2. Instruction finetuning was done with 4-bit QLoRA on a single A100 GPU, using the PEFT config given below. The dataset used for instruction finetuning is a translated version of the cleaned Alpaca dataset (translated using NLLB-1.3B).

Note that this model may perform worse on some language-specific tasks than full finetuning, or than a different base model trained on more language-specific data.
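
QLoRA keeps the 4-bit base weights frozen and trains only small low-rank adapter matrices; the adapted weight is conceptually `W + (alpha / r) * B @ A`. A minimal numeric sketch of that update (toy values for illustration, not the actual training code):

```python
# Toy sketch of a LoRA weight update: W' = W + (alpha / r) * B @ A.
# A (r x d_in) and B (d_out x r) are the small trained matrices; W is frozen.

def matmul(X, Y):
    # naive matrix multiply, sufficient for this sketch
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_adapted(W, A, B, alpha, r):
    delta = matmul(B, A)          # low-rank update, rank <= r
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

# toy 2x2 example with rank r = 1
W = [[1.0, 0.0], [0.0, 1.0]]      # frozen base weight
A = [[1.0, 2.0]]                  # 1 x 2
B = [[0.5], [0.25]]               # 2 x 1
W_prime = lora_adapted(W, A, B, alpha=1.0, r=1)
print(W_prime)  # -> [[1.5, 1.0], [0.25, 1.5]]
```

Because only `A` and `B` are trained, the files in this repo are just those adapter matrices, which is why the base model must be loaded separately.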

## Training procedure

The following `bitsandbytes` quantization config was used during training:
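
The exact config values are not shown in this diff; as an illustration only, a typical 4-bit QLoRA `BitsAndBytesConfig` (assumed values, not necessarily those used for this run) looks like:

```python
from transformers import BitsAndBytesConfig

# Illustrative 4-bit QLoRA quantization settings -- NOT the values from this
# training run, which are elided in the diff above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,               # load base weights in 4-bit
    bnb_4bit_quant_type="nf4",       # NormalFloat4, common for QLoRA
    bnb_4bit_use_double_quant=True,  # quantize the quantization constants too
    bnb_4bit_compute_dtype="bfloat16",
)
```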
### Framework versions

- PEFT 0.4.0