---
license: other
library_name: transformers
datasets:
- HuggingFaceH4/ultrachat_200k
base_model: google/gemma-7b
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/llm_surgery/gemma-zephyr)
# Gemma 7B Zephyr SFT
The [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) SFT recipe applied on top of Gemma 7B.
## Model description
- **Model type:** An 8.5B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **Finetuned from model:** [google/gemma-7b](https://huggingface.co/google/gemma-7b)
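
A minimal inference sketch with Transformers is shown below. The Hub repo id and generation settings are assumptions for illustration (replace them with this model's actual path), and it assumes the tokenizer carries the chat template applied during SFT.

```python
# Minimal inference sketch (assumptions: the repo id below is a placeholder for
# this model's actual Hub path, and the tokenizer ships the SFT chat template).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tcapelle/gemma-7b-zephyr-sft"  # placeholder; use the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain supervised fine-tuning in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a completion and decode only the newly generated tokens.
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```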
## Recipe
We trained using the [alignment handbook recipe](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_sft.py), logging to W&B.
Visit the [W&B workspace here](https://wandb.ai/llm_surgery/gemma-zephyr?nw=nwusercapecape).
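
For orientation, the sketch below approximates what that script does with `trl`'s `SFTTrainer`. It is not the exact training code: every hyperparameter shown is a placeholder rather than the value used for this run, and the real configuration (including the chat template setup) lives in the handbook recipe and the linked W&B run.

```python
# Rough, illustrative approximation of the alignment-handbook SFT step using trl.
# All hyperparameters are placeholders, not the values used to train this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# UltraChat 200k SFT split (conversational "messages" format).
train_dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

training_args = SFTConfig(
    output_dir="gemma-7b-zephyr-sft",
    per_device_train_batch_size=2,   # placeholder
    gradient_accumulation_steps=8,   # placeholder
    learning_rate=2e-5,              # placeholder
    num_train_epochs=1,              # placeholder
    bf16=True,
    packing=True,
)

trainer = SFTTrainer(
    model="google/gemma-7b",         # base model; tokenizer/chat template handling
    args=training_args,              # is configured explicitly in the handbook recipe
    train_dataset=train_dataset,
)
trainer.train()
```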
## License
This model has the same license as the [original Gemma model collection](https://ai.google.dev/gemma/terms).
## Compute
Compute was provided by Lambda Labs: an 8xA100 80GB node.