nihaomur committed
Commit e20bf02
1 Parent(s): d668c9f

Update README.md

Files changed (1)
  1. README.md +8 -13
README.md CHANGED
@@ -1,3 +1,11 @@
+ ---
+ license: gemma
+ library_name: transformers
+ pipeline_tag: text-generation
+ extra_gated_button_content: Acknowledge license
+ tags:
+ - conversational
+ ---
  This is not a model I made myself; it is Google's [Gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it), quantized with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ).
  
  I quantized it to 4-bit; your GPU should have at least 8 GB of VRAM to ensure it runs smoothly.
@@ -8,19 +16,6 @@ Below is the original model card, hope you guys have fun with it.
  
  
  
- ---
- license: gemma
- library_name: transformers
- pipeline_tag: text-generation
- extra_gated_heading: Access Gemma on Hugging Face
- extra_gated_prompt: >-
-   To access Gemma on Hugging Face, you’re required to review and agree to
-   Google’s usage license. To do this, please ensure you’re logged in to Hugging
-   Face and click below. Requests are processed immediately.
- extra_gated_button_content: Acknowledge license
- tags:
- - conversational
- ---
  
  
  # Gemma 2 model card
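
The updated card only states that this is google/gemma-2-9b-it quantized to 4-bit with AutoAWQ and that roughly 8 GB of VRAM is needed. As a rough illustration (not part of this commit), here is a minimal loading sketch; the repo id `nihaomur/gemma-2-9b-it-AWQ` is a placeholder for the actual repository name, and it assumes `transformers`, `torch`, `accelerate`, and `autoawq` are installed so the AWQ checkpoint can be loaded directly.

```python
# Minimal sketch: load a 4-bit AWQ checkpoint of Gemma-2-9b-it via transformers.
# Assumptions: placeholder repo id (replace with the real repository),
# `autoawq` and `accelerate` installed, and a GPU with ~8 GB of free VRAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nihaomur/gemma-2-9b-it-AWQ"  # placeholder, not the confirmed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # places the 4-bit weights on the GPU (needs accelerate)
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "Give me a one-line summary of AWQ quantization."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```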