---
library_name: transformers
license: apache-2.0
datasets:
- We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs
language:
- ko
pipeline_tag: text-generation
---
# Model Card for POLAR-7B-DPO-v1.02
## Model Details
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f3ee48b1a907c6aa6d8f06/nGbRfMQEfAW_aDwisKn9T.png)
### Model Description
POLAR is a Korean LLM developed by Plateer's AI-Lab, inspired by Upstage's SOLAR. We will continue to evolve this model and hope to contribute to the Korean LLM ecosystem.
- **Developed by:** AI-Lab of Plateer (Woomun Jung, Eunsoo Ha, MinYoung Joo, Seongjun Son)
- **Model type:** Language model
- **Language(s) (NLP):** ko
- **License:** apache-2.0
- **Parent Model:** x2bee/POLAR-14B-v0.2
## Direct Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("x2bee/POLAR-7B-DPO-v1.0")
model = AutoModelForCausalLM.from_pretrained("x2bee/POLAR-7B-DPO-v1.0")
```
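As a usage illustration, here is a minimal generation sketch that builds on the tokenizer and model loaded above. The prompt, sampling parameters, and device handling are illustrative assumptions, not settings documented for this model.

```python
import torch

# Hypothetical example prompt (Korean): "Please briefly introduce the Republic of Korea."
prompt = "대한민국에 대해 간단히 소개해 주세요."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,   # illustrative generation budget
        do_sample=True,       # sampling settings are assumptions, not tuned values
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```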