
New version of the model, v0.5, here: /xaviviro/FLAMA-0.5-3B

FLAMA: 3B ChatML Model in Catalan. Version 0.1

FLAMA

FLAMA is the first small 3B model in Catalan. It is the result of fine-tuning the open_llama_3b_v2 model on the OpenAssistant v1 instructions, machine-translated into Catalan with Helsinki-NLP resources and formatted as ChatML.
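
As an illustration of the data-preparation step described above, the sketch below machine-translates an English instruction into Catalan with a Helsinki-NLP OPUS-MT checkpoint and wraps it in ChatML. The checkpoint name Helsinki-NLP/opus-mt-en-ca and the helper function are assumptions for illustration, not the exact pipeline used to build FLAMA.

# Illustrative sketch only; assumes the Helsinki-NLP/opus-mt-en-ca checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ca")

def to_chatml(instruction_en: str) -> str:
    """Translate an English instruction to Catalan and format it as a ChatML user turn."""
    instruction_ca = translator(instruction_en, max_length=512)[0]["translation_text"]
    return f"<|im_start|>user\n{instruction_ca}<|im_end|>\n<|im_start|>assistant\n"

print(to_chatml("Who was Isaac Newton?"))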

Prompt Template

FLAMA uses ChatML as its prompt template:

<|im_start|>user
Qui va ser Isaac Newton?<|im_end|>
<|im_start|>assistant\n
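
A minimal inference sketch with llama-cpp-python, which applies this ChatML template through its built-in "chatml" chat format. The GGUF filename is a placeholder assumption; substitute one of the quantized files actually published in this repository.

from llama_cpp import Llama

# Placeholder filename: replace with one of the GGUF quantizations listed below.
llm = Llama(model_path="flama-0.1-3b.Q4_K_M.gguf", chat_format="chatml", n_ctx=2048)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Qui va ser Isaac Newton?"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])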

Built with Axolotl

References

@software{xaviviro2023flama,
  author = {xaviviro},
  title = {FLAMA: Model 3B ChatML en Català. Versió 0.1},
  month = December,
  year = 2023,
  url = {https://huggingface.co/xaviviro/FLAMA-0.1-3B}
}
@software{openlm2023openllama,
  author = {Geng, Xinyang and Liu, Hao},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = May,
  year = 2023,
  url = {https://github.com/openlm-research/open_llama}
}
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
@article{touvron2023llama,
  title={Llama: Open and efficient foundation language models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
GGUF

This repository contains GGUF quantizations of FLAMA-0.1-3B (3.43B parameters, llama architecture) in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit and 16-bit variants.
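
To fetch a specific quantization, a single GGUF file can be downloaded with huggingface_hub; the filename below is an assumption and should be checked against the repository's file listing.

from huggingface_hub import hf_hub_download

# Assumed filename: verify the exact name in the repository's file list.
model_path = hf_hub_download(
    repo_id="xaviviro/FLAMA-0.1-3B-GGUF",
    filename="flama-0.1-3b.Q4_K_M.gguf",
)
print(model_path)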

