---
language: ko
license: apache-2.0
tags:
- korean
---
# Chat Model QLoRA Adapter
Fine-tuned QLoRA adapter for the model [OrionStarAI/Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base),
trained on the Korean Sympathy Conversation dataset from AIHub.
See more information at [our GitHub repository](https://github.com/boostcampaitech6/level2-3-nlp-finalproject-nlp-09).
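For reference, a QLoRA fine-tuning setup along these lines could produce such an adapter. This is a minimal sketch: the 4-bit quantization settings and LoRA hyperparameters (rank, alpha, target modules) below are illustrative assumptions, not the actual training configuration.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative 4-bit (NF4) quantization config; values are assumptions
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
base = AutoModelForCausalLM.from_pretrained(
    "OrionStarAI/Orion-14B-Base",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Illustrative LoRA hyperparameters; not the values used for this adapter
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```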
## Datasets
- [Empathetic Conversation (공감형 대화)](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71305)
## Quick Tour
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "CurtisJeon/OrionStarAI-Orion-14B-Base-4bit"

# Load the 4-bit quantized base model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto",
)
model.config.use_cache = True

# Attach and activate the fine-tuned QLoRA adapter
model.load_adapter("m2af/OrionStarAI-Orion-14B-Base-adapter", "loaded")
model.set_adapter("loaded")

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Generate a sample response ("Hello, nice to meet you.")
inputs = tokenizer("안녕하세요, 반갑습니다.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
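Equivalently, the adapter should also load directly through the PEFT API. This is a sketch assuming the adapter repository is in standard PEFT format:
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the quantized base model, then wrap it with the adapter weights
base = AutoModelForCausalLM.from_pretrained(
    "CurtisJeon/OrionStarAI-Orion-14B-Base-4bit",
    trust_remote_code=True,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "m2af/OrionStarAI-Orion-14B-Base-adapter")
```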