---
datasets:
  - togethercomputer/RedPajama-Data-1T-Sample
tags:
  - llama2
  - llama
---

A second model merge by chargoddard. A GGML conversion of the previous merge can be found here.
I have no idea what I'm doing, so if something doesn't work as it should, or doesn't work at all, that's likely on me rather than the models themselves.

The description below is copied from the original repo.

Similar to llama2-22b, but with `BLOCK_DIAGONAL=false` in the merge and twice the fine-tuning tokens.

Again, not intended for direct use - meant as a base for further tuning and merging.
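
For anyone curious what the `BLOCK_DIAGONAL` flag refers to, here is a minimal PyTorch sketch of one way a merge script might widen a weight matrix from two donor models. This is not chargoddard's actual merge code; the `combine` function and its exact behavior are assumptions made purely for illustration.

```python
# Hypothetical sketch of a block-diagonal vs. dense weight merge.
# None of this is the actual merge script used for this model.
import torch

def combine(w_a: torch.Tensor, w_b: torch.Tensor, block_diagonal: bool) -> torch.Tensor:
    """Combine two square weight matrices into one larger matrix."""
    if block_diagonal:
        # Each donor occupies its own diagonal block; the off-diagonal
        # blocks are zero, so the two halves start out independent.
        return torch.block_diag(w_a, w_b)
    # block_diagonal=False: also fill the off-diagonal blocks (here with
    # the averaged donors), so every input dimension mixes into every
    # output dimension from the start.
    mix = (w_a + w_b) / 2
    top = torch.cat([w_a, mix], dim=1)
    bottom = torch.cat([mix, w_b], dim=1)
    return torch.cat([top, bottom], dim=0)

w = combine(torch.randn(4, 4), torch.randn(4, 4), block_diagonal=False)
print(w.shape)  # torch.Size([8, 8])
```

Under this reading, turning the flag off makes the two donors interact from the first forward pass rather than only after fine-tuning, which would be one reason to pair it with more fine-tuning tokens.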