---
datasets:
  - gozfarb/ShareGPT_Vicuna_unfiltered
---

LoRA Info:

Please note that this is a highly experimental LoRA model. It may produce good output, and it may just as easily produce undesirable output. Training is essentially complete now. Feel free to try it!~
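
If you want to try the adapter programmatically, a minimal sketch using Hugging Face `transformers` and `peft` might look like the following. The base-model and adapter identifiers are placeholders, since the base checkpoint is not specified here; substitute the ones you actually intend to use.

```python
# Minimal sketch: apply this LoRA adapter on top of a base model with peft.
# NOTE: both identifiers below are placeholders, not values from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "path-or-hub-id-of-the-base-model"  # placeholder
lora_repo_id = "path-or-hub-id-of-this-lora"        # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id)

# Load the LoRA adapter weights and attach them to the base model.
model = PeftModel.from_pretrained(base_model, lora_repo_id)
model.eval()
```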

Important Note: While this was trained on a cleaned ShareGPT dataset like the one Vicuna used, it was trained in the Alpaca format, so prompting should look something like:

### Instruction:

`<prompt>` (without the `<>`)

### Response:
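
For example, a prompt can be assembled like this (a small sketch; the instruction text is only an illustration):

```python
# Build an Alpaca-style prompt string for this model.
instruction = "Explain what a LoRA adapter is in one sentence."  # illustrative placeholder

prompt = (
    "### Instruction:\n"
    f"{instruction}\n\n"
    "### Response:\n"
)

print(prompt)
```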

Current upload: checkpoint from training step 1200.

Benchmarks

- wikitext2: Coming soon...
- ptb-new: Coming soon...
- c4-new: Coming soon...