RishuD7 committed on
Commit
bdc78b2
1 Parent(s): 7ca68c7

RishuD7/new_test_v3

Files changed (1)
1. README.md +80 -0
README.md ADDED
---
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: new_test_gen
  results: []
library_name: peft
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# new_test_gen

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1867
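
As a minimal usage sketch (assuming the adapter is published at `RishuD7/new_test_v3`, the repository named in the commit header, and that the gated base-model weights are accessible), the adapter can be loaded on top of the base model with `peft`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# Load the base model in 8-bit, mirroring the training-time setup; drop the
# quantization_config argument to load in full precision instead.
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# Attach the trained PEFT adapter. The repo id is an assumption taken from
# the commit header above; adjust it if the adapter lives elsewhere.
model = PeftModel.from_pretrained(base_model, "RishuD7/new_test_v3")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```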

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- _load_in_8bit: True
- _load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8
- load_in_4bit: False
- load_in_8bit: True
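
For reference, a sketch of the equivalent `transformers.BitsAndBytesConfig` is below. The underscore-prefixed `_load_in_8bit`/`_load_in_4bit` entries above are internal mirrors of the public `load_in_8bit`/`load_in_4bit` flags and are not passed explicitly:

```python
from transformers import BitsAndBytesConfig

# Reconstruction of the quantization config listed above. The bnb_4bit_*
# fields are inert here, since load_in_8bit=True selects the 8-bit path.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
    bnb_4bit_quant_storage="uint8",
)
```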

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
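
A minimal sketch of how these map onto `transformers.TrainingArguments` (for use with `trl`'s `SFTTrainer`, per the tags above); the `output_dir` name is a placeholder:

```python
from transformers import TrainingArguments

# A per-device train batch of 16 with 2 gradient-accumulation steps yields
# the reported total train batch size of 32 on a single device.
args = TrainingArguments(
    output_dir="new_test_gen",        # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
```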

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1061        | 1.0   | 1192 | 0.1481          |
| 0.0781        | 2.0   | 2384 | 0.1709          |
| 0.0703        | 3.0   | 3576 | 0.1867          |
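
Note that validation loss rises after the first epoch (0.1481 → 0.1709 → 0.1867) while training loss keeps falling, which may indicate overfitting; the epoch-1 checkpoint may generalize better than the final one.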

### Framework versions

- PEFT 0.4.0
- Transformers 4.43.3
- Pytorch 2.3.1+cu121
- Datasets 2.13.0
- Tokenizers 0.19.1