Update README.md
Each model's memory footprint can be estimated from the quantization docs in either [Hugging Face](https://huggingface.co/docs/transformers/main/en/quantization/overview) or [llama.cpp](https://github.com/ggerganov/llama.cpp/tree/master/examples/quantize).
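As a back-of-the-envelope check, weight memory is roughly `parameter_count × bits_per_weight / 8` bytes. The sketch below is illustrative only: the `estimate_memory_gb` helper and the 1.2× overhead factor are assumptions, not figures from either doc, and real usage varies with context length and backend.

```python
def estimate_memory_gb(n_params: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough estimate of RAM/VRAM needed for the weights, in GiB.

    `overhead` is a hypothetical fudge factor for KV cache and
    activations; consult the linked docs for accurate numbers.
    """
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * overhead / (1024 ** 3)

# e.g. a 7B-parameter model quantized to 4 bits per weight:
print(f"{estimate_memory_gb(7e9, 4):.1f} GiB")
```

Doubling the bit width roughly doubles the footprint, which matches the pattern you will see across the quantization tables in both projects.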
## Acknowledgments

- Research supported with Cloud TPUs from [Google's TensorFlow Research Cloud](https://sites.research.google/trc/about/) (TFRC). Thanks for providing access to the TFRC ❤️
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗

## Contact

*Feel free to contact us whenever you confront any problems :)*