abhijeet06793 committed on
Commit
ff2130a
1 Parent(s): c008885

Upload model

Files changed (2)
  1. README.md +1 -47
  2. adapter_model.bin +1 -1
README.md CHANGED
@@ -1,33 +1,6 @@
  ---
- license: apache-2.0
- base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
- tags:
- - generated_from_trainer
- model-index:
- - name: mistral-finetuned-samsum_5epochs
- - results: []
  library_name: peft
  ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # mistral-finetuned-samsum_5epochs
-
- This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
  ## Training procedure


@@ -49,26 +22,7 @@ The following `bitsandbytes` quantization config was used during training:
  - pad_token_id: None
  - disable_exllama: True
  - max_input_length: None
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 0.0002
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - training_steps: 250
- - mixed_precision_training: Native AMP
-
- ### Training results
-
-
-
  ### Framework versions

+
  - PEFT 0.5.0
- - Transformers 4.35.0.dev0
- - Pytorch 2.0.1+cu118
- - Datasets 2.14.5
- - Tokenizers 0.14.1
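For readers landing on this commit: the surviving front matter still declares `library_name: peft` with base model TheBloke/Mistral-7B-Instruct-v0.1-GPTQ, so the adapter_model.bin changed below is a PEFT adapter rather than full model weights. A minimal loading sketch follows; the repo id `abhijeet06793/mistral-finetuned-samsum_5epochs` is an assumption pieced together from the committer and model names (it is not stated in this commit), and loading the GPTQ base assumes `auto-gptq`/`optimum` are installed alongside `transformers`.

```python
# Minimal sketch (not from this repo's docs): apply the uploaded PEFT
# adapter to its GPTQ base model. The adapter repo id below is an
# assumption inferred from the committer and model names.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
adapter_id = "abhijeet06793/mistral-finetuned-samsum_5epochs"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # loads adapter_model.bin

# SAMSum-style dialogue summarization prompt in Mistral's [INST] format.
prompt = "[INST] Summarize this dialogue:\nAmanda: I baked cookies.\nJerry: Save me some! [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note that the card's quantization block records `disable_exllama: True`; if the exllama kernel errors at load time, passing a matching `GPTQConfig` (e.g. `bits=4, disable_exllama=True`) via `quantization_config` is worth trying, though the exact flag name varies across transformers versions.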
 
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a9ab88287529a13b416b84ce56703435d3123faf804639bbd5691db918ce97b7
+ oid sha256:0871dc8559ae1ee91618fa7889cfff8537ba128bf24b74cf875933226f4abc29
  size 27308941
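The adapter_model.bin change is easy to misread: only the Git LFS pointer text is diffed, and it swaps the `oid` (the sha256 of the actual binary payload) while the size stays 27308941 bytes, i.e. a same-sized file with different weights. A small sketch for checking a downloaded copy against the new oid; the local file path is illustrative.

```python
# Sketch: verify a downloaded adapter_model.bin against the LFS pointer's
# new oid (per the LFS spec, the oid is the sha256 of the real payload).
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks to avoid loading the whole file into memory.
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

NEW_OID = "0871dc8559ae1ee91618fa7889cfff8537ba128bf24b74cf875933226f4abc29"
assert sha256_of("adapter_model.bin") == NEW_OID  # path is illustrative
```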