Suparious committed on
Commit
7bde561
1 Parent(s): 2498168

Update README.md

Files changed (1): README.md +16 -0
README.md CHANGED
@@ -1,4 +1,7 @@
 ---
+language:
+- en
+license: apache-2.0
 library_name: transformers
 tags:
 - 4-bit
@@ -6,8 +9,15 @@ tags:
 - text-generation
 - autotrain_compatible
 - endpoints_compatible
+- text-generation-inference
+- transformers
+- unsloth
+- llama
+- trl
+- sft
 pipeline_tag: text-generation
 inference: false
+base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
 quantized_by: Suparious
 ---
 # rombodawg/Llama-3-8B-Instruct-Coder AWQ
@@ -15,7 +25,13 @@ quantized_by: Suparious
 - Model creator: [rombodawg](https://huggingface.co/rombodawg)
 - Original model: [Llama-3-8B-Instruct-Coder](https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder)
 
+![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/0O4cIuv3wNbY68-FP7tak.jpeg)
 
+## Model Summary
+
+This model is llama-3-8b-instruct from Meta (uploaded by unsloth) trained on the full 65k Codefeedback dataset + the additional 150k Code Feedback Filtered Instruction dataset combined. You can find that dataset linked below. This AI model was trained with the new Qalore method developed by my good friend on Discord and fellow Replete-AI worker walmartbag.
+
+The Qalore method uses Qlora training along with the methods from Galore for additional reductions in VRAM, allowing llama-3-8b to be loaded on 14.5 GB of VRAM. This allowed the training to be completed on an RTX A4000 16GB in 130 hours for less than $20.
 
 ## How to use
 
37