ShirinYamani committed
Commit 54b2905
1 Parent(s): 897119f

Update README.md

Files changed (1): README.md (+5 -16)
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license: apache-2.0
+license: mit
 library_name: peft
 tags:
 - generated_from_trainer
@@ -7,6 +7,8 @@ base_model: mistralai/Mistral-7B-v0.1
 model-index:
 - name: mistral7b-fine-tuned-qlora
   results: []
+datasets:
+- timdettmers/openassistant-guanaco
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -14,21 +16,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # mistral7b-fine-tuned-qlora
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
+This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset.
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+Mistral-7B fine-tuned using Qlora
 
 ### Training hyperparameters
@@ -45,9 +37,6 @@ The following hyperparameters were used during training:
 - training_steps: 10
 - mixed_precision_training: Native AMP
 
-### Training results
-
-
 
 ### Framework versions
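Since the card's metadata names `peft` as the library, an adapter trained this way is typically loaded on top of the base model rather than as a standalone checkpoint. A minimal sketch, assuming the adapter lives at the repo id `ShirinYamani/mistral7b-fine-tuned-qlora` (inferred from the model name above, not stated in the diff):

```python
# Sketch: load the base Mistral-7B model, then attach the QLoRA adapter with PEFT.
# The adapter repo id below is an assumption inferred from the card's model name.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "ShirinYamani/mistral7b-fine-tuned-qlora"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# PeftModel.from_pretrained wraps the base model with the trained LoRA weights.
model = PeftModel.from_pretrained(base_model, adapter_id)
```

For inference-only use, `model.merge_and_unload()` can fold the adapter weights back into the base model so no PEFT wrapper is needed at serving time.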