---
library_name: transformers
license: apache-2.0
datasets:
- We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs
language:
- ko
pipeline_tag: text-generation
---

# Model Card for POLAR

POLAR is a Korean text-generation model developed by Plateer's AI Lab, aligned with DPO using the We-Want-GPU/Yi-Ko-DPO-Orca-DPO-Pairs dataset.



## Model Details

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f3ee48b1a907c6aa6d8f06/nGbRfMQEfAW_aDwisKn9T.png)


## Model Description

<!-- Provide a longer summary of what this model is/does. -->
POLAR is a Korean LLM developed by Plateer's AI Lab, inspired by Upstage's SOLAR. We will continue to evolve the model and hope to contribute to the Korean LLM ecosystem.

- **Developed by:** AI Lab of Plateer (Woomun Jung, Eunsoo Ha, MinYoung Joo, Seongjun Son)
- **Model type:** Language model
- **Language(s) (NLP):** ko
- **License:** apache-2.0
- **Parent Model:** x2bee/POLAR-14B-v0.2


## Direct Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "x2bee/POLAR-7B-DPO-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
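Once the tokenizer and model are loaded, text can be generated with the standard `transformers` causal-LM API. The sketch below is illustrative: the `build_prompt` template and the generation settings (`max_new_tokens`, `temperature`) are assumptions, not an official prompt format from this model card.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in a simple instruction-style template.

    This template is a hypothetical example; adapt it to the prompt
    format the model was actually trained with.
    """
    return f"### 질문:\n{instruction}\n\n### 답변:\n"


def generate_answer(model, tokenizer, instruction: str, max_new_tokens: int = 128) -> str:
    """Run one generation pass and return the decoded text."""
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    # Imported here so the helpers above stay importable without transformers.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "x2bee/POLAR-7B-DPO-v1.0"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    print(generate_answer(model, tokenizer, "한국의 수도는 어디인가요?"))
```

On limited hardware, passing `torch_dtype=torch.float16` (or a quantization config) to `from_pretrained` reduces the memory needed to load a 7B-parameter model.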