Nicolas Iglesias committed
Commit 994ff20
Parent(s): a1445cd
Update README.md

README.md CHANGED
@@ -73,6 +73,10 @@ print(answer)
 
 # Inference
 
+## Online
+
+Currently, HuggingFace's Inference Tool UI doesn't load the model properly. However, you can use it with regular Python code as shown above once you meet the [requirements](#requirements).
+
 ## CPU
 
 CPU inference is available via [GGML model](https://huggingface.co/danderian/zenos-gpt-j-6B-alpaca-evol-4bit/resolve/main/ggml-f16-q4_0.bin)
@@ -82,10 +86,6 @@ CPU inference is available via [GGML model](https://huggingface.co/danderian/zen
 - 4 cores
 - 4GB RAM
 
-## Online
-
-Currently, HuggingFace's Inference Tool UI doesn't load the model properly. However, you can use it with regular Python code as shown above once you meet the [requirements](#requirements).
-
 # Acknowledgments
 
 This model was developed by [Nicolás Iglesias](mailto:[email protected]) using the Hugging Face Transformers library.
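The "## CPU" section above only links the quantized weights without showing a runner. A minimal sketch of what CPU inference with the GGML file could look like, assuming the third-party `ctransformers` package and an Alpaca-style prompt template (both are assumptions; the README itself only links the file):

```python
# Hedged sketch: CPU inference with the 4-bit GGML weights.
# Assumptions: `ctransformers` is installed (pip install ctransformers) and
# ggml-f16-q4_0.bin has been downloaded from the link in the diff above.

def build_prompt(instruction: str) -> str:
    """Alpaca-style prompt template (assumed; adapt to the model's actual format)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

if __name__ == "__main__":
    from ctransformers import AutoModelForCausalLM  # third-party, not stdlib

    llm = AutoModelForCausalLM.from_pretrained(
        "ggml-f16-q4_0.bin",  # local path to the downloaded GGML file
        model_type="gptj",    # GPT-J architecture
        threads=4,            # matches the 4-core requirement above
    )
    print(llm(build_prompt("Summarize what GGML quantization does."),
              max_new_tokens=128))
```

Loading stays inside the `__main__` guard so importing the module doesn't pull in the 4-bit weights; only `build_prompt` is importable on its own.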