---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# **Synatra-7B-v0.3-RP🐧**

![Synatra-7B-v0.3-RP](./Synatra.png)

## Support Me

Synatra is a personal project, developed with the resources of a single person. If you like the model, how about supporting a little of the research cost?

[<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell)

Want to be a sponsor? Contact me on Telegram: **AlzarTakkarsen**

# **License**

This model is for strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only.

The "Model" (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the **cc-by-nc-4.0** license included in any parent repository is kept and the non-commercial-use clause remains in place, regardless of the licenses of other models involved.

The license may be changed once a new model is released. If you want to use this model for commercial purposes, contact me.

# **Model Details**

**Base Model**

[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

**Trained On**

8 × A6000 48GB

**Instruction format**

It follows the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) format.
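
For reference, here is a minimal sketch (illustrative only, not code from this repository) of how a single user turn looks once rendered in ChatML; in practice the rendering is handled by the tokenizer's `chat_template`, as shown in the Implementation Code section below:

```python
# Illustrative ChatML rendering of one user turn; each turn is delimited
# by <|im_start|> / <|im_end|> markers. Build prompts via
# tokenizer.apply_chat_template rather than by hand.
prompt = (
    "<|im_start|>user\n"
    "바나나는 원래 하얀색이야?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```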

**TODO**

- ~~``Build an RP-based fine-tuned model``~~ ✅
- ~~``Refine the dataset``~~ ✅
- Improve language comprehension
- ~~``Supplement common knowledge``~~ ✅
- Change the tokenizer

# **Model Benchmark**

## Ko-LLM-Leaderboard

Benchmarking in progress...

# **Implementation Code**

Since the `chat_template` already contains the instruction format described above, you can use the code below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-7B-v0.3-RP")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-7B-v0.3-RP")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},  # "Are bananas originally white?"
]

# apply_chat_template renders the messages in ChatML and returns the input ids.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

# Why is the benchmark score lower than the preview version's?

**Apparently**, the preview model used an Alpaca-style prompt, which has no prefix, while ChatML does.
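
For contrast with the ChatML sketch above, here is what an Alpaca-style prompt typically looks like; the exact template used by the preview model is not documented here, so this is an assumption for illustration:

```python
# A typical Alpaca-style prompt: plain text with no special prefix tokens,
# unlike ChatML's <|im_start|>/<|im_end|> markers shown earlier.
# The preview model's actual template is assumed, not confirmed by this card.
alpaca_prompt = (
    "### Instruction:\n"
    "바나나는 원래 하얀색이야?\n\n"
    "### Response:\n"
)
```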

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-7B-v0.3-RP)

| Metric              | Value |
|---------------------|-------|
| Avg.                | 57.38 |
| ARC (25-shot)       | 62.2  |
| HellaSwag (10-shot) | 82.29 |
| MMLU (5-shot)       | 60.8  |
| TruthfulQA (0-shot) | 52.64 |
| Winogrande (5-shot) | 76.48 |
| GSM8K (5-shot)      | 21.15 |
| DROP (3-shot)       | 46.06 |