Suparious committed on
Commit
2866e90
1 Parent(s): 5acf9a9

Update README.md

Files changed (1): README.md (+20 −0)
README.md CHANGED

@@ -1,4 +1,5 @@
 ---
+base_model: meta-llama/Meta-Llama-3-8B-Instruct
 library_name: transformers
 tags:
 - 4-bit
@@ -6,8 +7,23 @@ tags:
 - text-generation
 - autotrain_compatible
 - endpoints_compatible
+- axolotl
+- finetune
+- dpo
+- facebook
+- meta
+- pytorch
+- llama
+- llama-3
 pipeline_tag: text-generation
+license: llama3
+license_name: llama3
+license_link: LICENSE
 inference: false
+model_creator: MaziyarPanahi
+model_name: Llama-3-8B-Instruct-DPO-v0.3
+datasets:
+- Intel/orca_dpo_pairs
 quantized_by: Suparious
 ---
 # MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 AWQ
@@ -15,7 +31,11 @@ quantized_by: Suparious
 - Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
 - Original model: [Llama-3-8B-Instruct-DPO-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3)

+<img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

+## Model Summary
+
+This model is a fine-tune (DPO) of `meta-llama/Meta-Llama-3-8B-Instruct` model. I have used `rope_theta` to extend the context length up to 32K safely.

 ## How to use
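The block between the `---` markers that this commit extends is Hugging Face model-card front matter: YAML metadata that the Hub reads from the top of README.md (tags, license, base model, datasets, quantizer). As a rough illustration of that layout — a minimal sketch, not the Hub's actual parser, handling only the flat `key: value` fields and using a subset of the fields from this commit — the front matter can be separated from the card body like this:

```python
def split_front_matter(text: str):
    """Split a model card into (metadata dict, markdown body).

    Naive sketch: only flat scalar `key: value` fields are parsed;
    list-valued fields (e.g. `tags:`) are skipped.
    """
    lines = text.splitlines()
    assert lines[0] == "---", "card must start with a front-matter fence"
    end = lines[1:].index("---") + 1          # index of the closing ---
    meta = {}
    for ln in lines[1:end]:
        if ":" in ln and not ln.startswith("- "):
            key, _, value = ln.partition(":")
            if value.strip():
                meta[key.strip()] = value.strip()
    body = "\n".join(lines[end + 1:])
    return meta, body


# Fields taken from the front matter added/kept by this commit:
card = """---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
quantized_by: Suparious
---
# MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3 AWQ
"""
meta, body = split_front_matter(card)
print(meta["quantized_by"])  # → Suparious
```

A real card would be parsed with a full YAML library, since fields such as `tags:` and `datasets:` are YAML lists; this sketch only shows where the Hub-visible metadata lives relative to the markdown body.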