Updated model with improved training and evaluation. Test and validation data are included as pickle files. Older legacy files were removed to avoid confusion.
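Since the evaluation splits ship as pickle files, they can be loaded with Python's standard `pickle` module. A minimal sketch of the round-trip; the sample dict is hypothetical, and the real structure of `test_data.pickle` / `val_data.pickle` may differ:

```python
import pickle
import tempfile
from pathlib import Path

# Hypothetical sample standing in for a real split; the actual
# contents of test_data.pickle / val_data.pickle are not documented here.
sample = {"inputs": ["example a", "example b"], "labels": [0, 1]}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "val_data.pickle"
    # Write the sample the same way a split would be saved
    with open(path, "wb") as f:
        pickle.dump(sample, f)
    # Load it back, as a consumer of the repo would do
    with open(path, "rb") as f:
        val_data = pickle.load(f)

print(val_data == sample)  # True
```

Note that unpickling runs arbitrary code, so pickle files should only be loaded from sources you trust.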
b369926
tokenizer.json filter=lfs diff=lfs merge=lfs -text
model.safetensors filter=lfs diff=lfs merge=lfs -text
.git/lfs/objects/c8/35/c835b069d7b8cd02b400e6247b83bc1840ab12bb1628d5b2e03c8d728de75558 filter=lfs diff=lfs merge=lfs -text
.git/lfs/objects/1f/d7/1fd7a7515e64bfc2d81b06aaf253760ddd7d56e313b7e6902fb35d0df337cdb6 filter=lfs diff=lfs merge=lfs -text
.git/lfs/objects/52/2c/522cf8744686580c593ffefffe126844797290290aaffba412ab9eb574ca3ba9 filter=lfs diff=lfs merge=lfs -text
test_data.pickle filter=lfs diff=lfs merge=lfs -text
val_data.pickle filter=lfs diff=lfs merge=lfs -text
sentencepiece.bpe.model filter=lfs diff=lfs merge=lfs -text
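Each entry above follows the standard `.gitattributes` format: a path or glob pattern followed by attributes. For the Git LFS entries, `filter=lfs`, `diff=lfs`, and `merge=lfs` route the file through the LFS clean/smudge filter for storage, diffing, and merging, while `-text` disables Git's line-ending normalization so the binary content is stored byte-for-byte. Such entries are typically generated with `git lfs track` rather than written by hand; a sketch of the pattern form (the `*.pickle` glob here is an illustration, not taken from this repo):

```gitattributes
# Track all pickle files with Git LFS, treating them as binary
*.pickle filter=lfs diff=lfs merge=lfs -text
```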