---
license: openrail
---

Model from: https://huggingface.co/TheBloke/wizardLM-7B-HF/tree/main

Trained on: https://huggingface.co/datasets/squad

Trained for about 4,500 steps (1 epoch) with a batch size of 8, 2 gradient-accumulation steps, and LoRA adapters on all layers.
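
Below is a minimal sketch of what such a fine-tuning run could look like with the `transformers`, `peft`, and `datasets` libraries. It is not the author's script: the target module list, prompt format, learning rate, and LoRA rank are assumptions; only the base model, dataset, batch size, accumulation steps, and epoch count come from this card.

```python
# Hypothetical reconstruction of the training setup described above.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "TheBloke/wizardLM-7B-HF"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA adapters on the attention and MLP projections ("all layers" per the card;
# the exact module names are an assumption for LLaMA-style models).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Flatten each SQuAD example into a single question-answering prompt (format assumed).
def format_example(example):
    answer = example["answers"]["text"][0] if example["answers"]["text"] else ""
    text = (f"Context: {example['context']}\n"
            f"Question: {example['question']}\n"
            f"Answer: {answer}")
    return tokenizer(text, truncation=True, max_length=512)

dataset = load_dataset("squad", split="train").map(
    format_example,
    remove_columns=["id", "title", "context", "question", "answers"],
)

training_args = TrainingArguments(
    output_dir="wizard_7b_squad",
    per_device_train_batch_size=8,   # batch size of 8 (from the card)
    gradient_accumulation_steps=2,   # 2 accumulation steps (from the card)
    num_train_epochs=1,              # ~4,500 steps = 1 epoch (from the card)
    learning_rate=2e-4,              # assumed
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```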