maddes8cht committed commit ce7c9a9 (1 parent: a44b8ee): "Update README.md"
README.md CHANGED
@@ -26,7 +26,7 @@ Here's what you need to know:

**Original Falcon Models:** I am diligently working to provide updated quantized versions of the four original Falcon models to ensure their compatibility with the new llama.cpp versions. Please keep an eye on my Hugging Face model pages for updates on the availability of these models. Downloading them promptly is essential for maintaining compatibility with the latest llama.cpp releases.

- **Derived Falcon Models:**
+ **Derived Falcon Models:** Right now, the derived Falcon models cannot be re-converted without adjustments from the original model creators, so these models cannot be used in recent llama.cpp versions at all. **Good news!** The capability to quantize even the older derived Falcon models is in the pipeline and should be incorporated soon, though the exact timeline is beyond my control.

**Stay Informed:** Application software using llama.cpp libraries will follow soon. Keep an eye on the release schedules of your favorite software applications that rely on llama.cpp. They will likely provide instructions on how to integrate the new models.
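As a rough sketch of what fetching one of the re-quantized files could look like once they are published, using the huggingface_hub library (the repo id and filename below are placeholders, not actual files from this page):

```python
# Sketch only: repo_id and filename are placeholders, not actual files from this page.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="maddes8cht/some-falcon-gguf-repo",  # placeholder repo id
    filename="some-falcon-model-Q4_K_M.gguf",    # placeholder quantized file name
)
print(path)  # local path to the downloaded gguf file
```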
@@ -39,8 +39,6 @@ Please understand that this change specifically affects Falcon and Starcoder models

As a solo operator of this page, I'm doing my best to expedite the process, but please bear with me as this may take some time.

-
-
These are gguf quantized models of the original Falcon 40B model by tiiuae.
Falcon is a foundational large language model coming in different sizes: 7b, 40b and 180b.
Sadly, as the Falcon 180b models are not really free models, I do not provide quantized versions here.
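For illustration, a minimal sketch of loading one of these gguf files with the llama-cpp-python bindings; the filename is a placeholder, and a llama.cpp build recent enough for the updated Falcon gguf format is assumed:

```python
# Minimal sketch, assuming llama-cpp-python is built against a llama.cpp
# version recent enough for the updated Falcon gguf files.
from llama_cpp import Llama

llm = Llama(
    model_path="falcon-40b.Q4_K_M.gguf",  # placeholder filename for a quantized Falcon 40B file
    n_ctx=2048,                           # context window size
)
result = llm("The Falcon 40B model is", max_tokens=64)
print(result["choices"][0]["text"])
```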