Deploying with Text Generation Inference
#21 opened by mariiaponom
Hi! I want to deploy one of the models using Text Generation Inference (https://github.com/huggingface/text-generation-inference#using-a-private-or-gated-model), but I can't use this repo directly because it contains many .bin files, so I thought about downloading one of them and putting it into a separate model repo. That doesn't seem to work, though. Can somebody suggest something here? Roughly what I tried is sketched below.
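
For reference, this is approximately the workaround I attempted, written with `huggingface_hub`. The repo ids, the shard filename, and the token are placeholders I've filled in for illustration, not the real values:

```python
# Sketch of my attempt: download a single .bin shard from the original repo
# and push it into a separate model repo that I'd then point TGI at.
from huggingface_hub import hf_hub_download, HfApi

SOURCE_REPO = "org/original-model"        # placeholder: the repo with the many .bin files
TARGET_REPO = "mariiaponom/tgi-model"     # placeholder: my separate model repo
TOKEN = "hf_..."                          # token with access to the private/gated repo

# Download one of the weight shards locally.
shard_path = hf_hub_download(
    repo_id=SOURCE_REPO,
    filename="pytorch_model-00001-of-00002.bin",  # placeholder shard name
    token=TOKEN,
)

# Upload that single file into the separate repo.
api = HfApi(token=TOKEN)
api.upload_file(
    path_or_fileobj=shard_path,
    path_in_repo="pytorch_model.bin",
    repo_id=TARGET_REPO,
    repo_type="model",
)
```

My guess is that a single file isn't enough on its own, since the .bin files look like shards of one checkpoint and the serving side presumably also needs the config, tokenizer files, and the shard index, but I'd appreciate confirmation or a better approach.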