Instruction-tuned LLaMA (Alpaca-GPT4)
Fine-tunes LLaMA-7B on the Alpaca dataset.
The main training scripts come from the stanford-alpaca repo, while the data comes from the GPT-4-LLM repo; training uses the default hyper-parameters.
Please refer to this page for more details.
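Because the model is instruction-tuned with the standard Alpaca prompt template (from the stanford-alpaca repo), inference-time inputs should follow the same format. A minimal sketch of that template; the helper name `build_alpaca_prompt` is an illustrative choice, not part of either repo:

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request using the Alpaca prompt template
    (assumed to match the stanford-alpaca training format)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Example: an instruction-only prompt, ready to pass to the tokenizer.
prompt = build_alpaca_prompt("Give three tips for staying healthy.")
```

The model's completion is whatever it generates after the trailing `### Response:` marker.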