Model Summary
bloom-7b1 finetuned on xP3 augmented with Russian multitask data. It is therefore the same as bloomz-7b1, except that the finetuning mixture additionally contains Russian data. The "4b" stands for 4 billion finetuning tokens, the same budget used for bloomz-7b1.
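
Below is a minimal usage sketch with the Transformers library. The original card does not include inference code, so the loading path (standard `AutoModelForCausalLM`) and the example prompt are assumptions; xP3-style models are typically prompted with natural-language instructions like the one shown.

```python
# Sketch only: assumes the standard Transformers causal-LM loading path,
# which is not shown on the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bs-la/bloomz-7b1-4b-xp3ru"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Instruction-style prompt (example chosen for illustration).
prompt = "Translate to English: Я люблю тебя."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```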
Citation
```bibtex
@article{yong2022bloom+,
  title={BLOOM+1: Adding Language Support to BLOOM for Zero-Shot Prompting},
  author={Yong, Zheng-Xin and Schoelkopf, Hailey and Muennighoff, Niklas and Aji, Alham Fikri and Adelani, David Ifeoluwa and Almubarak, Khalid and Bari, M Saiful and Sutawika, Lintang and Kasai, Jungo and Baruwa, Ahmed and others},
  journal={arXiv preprint arXiv:2212.09535},
  year={2022}
}

@misc{muennighoff2022crosslingual,
  title={Crosslingual Generalization through Multitask Finetuning},
  author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
  year={2022},
  eprint={2211.01786},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
Evaluation results
- Accuracy on XWinograd (ru), test set (self-reported): 53.97
- Accuracy on XNLI (ru), validation set (self-reported): 50.00
- Accuracy on XStoryCloze (ru), validation set (self-reported): 79.09