This is a fine-tune of the LLaMA-7B model in the style of the Alpaca dataset and training setup, but using LoRA.
For details of the data and hyperparameters, see https://crfm.stanford.edu/2023/03/13/alpaca.html
This repo contains only the LoRA weights, not the original LLaMA weights, which are licensed for research use only.
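
Because only the adapter is provided here, the LoRA weights need to be applied on top of the base LLaMA-7B weights at inference time. Below is a minimal sketch using the Hugging Face `transformers` and `peft` libraries; the paths and repo identifiers are placeholders, not the actual names used by this repo.

```python
# Minimal sketch: load base LLaMA-7B, then apply this repo's LoRA adapter.
# "path/to/llama-7b" and "path/to/lora-adapter" are placeholders -- substitute
# your local copy of the base weights and this repo's adapter ID.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Base LLaMA-7B weights (obtained separately; research-only license).
base_model = LlamaForCausalLM.from_pretrained(
    "path/to/llama-7b",
    torch_dtype=torch.float16,
)
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-7b")

# Apply the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, "path/to/lora-adapter")

# Generate a response using the Alpaca-style instruction prompt.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what LoRA is in one sentence.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```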