---
language: ko
license: apache-2.0
tags:
- korean
---
# Chat Model QLoRA Adapter
Fine-tuned QLoRA adapter for [OrionStarAI/Orion-14B-Base](https://huggingface.co/OrionStarAI/Orion-14B-Base), trained on the Korean empathetic conversation dataset from AIHub.
See more information at [our GitHub](https://github.com/boostcampaitech6/level2-3-nlp-finalproject-nlp-09).
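For reference, the sketch below shows how a QLoRA adapter of this kind can be set up with `transformers`, `bitsandbytes`, and `peft`. The quantization settings, LoRA rank, and target modules are illustrative assumptions, not the exact configuration used to train this adapter.
```python
# Hypothetical QLoRA setup sketch; hyperparameters are assumptions, not the project's config.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # QLoRA: keep base weights in 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "OrionStarAI/Orion-14B-Base",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16,                                 # assumed LoRA rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()        # only the LoRA weights are trainable
```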
## Datasets
- [κ³΅κ°ν˜• λŒ€ν™”](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71305)
## Quick Tour
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "CurtisJeon/OrionStarAI-Orion-14B-Base-4bit"

# Load the 4-bit quantized base model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map="auto",
)
model.config.use_cache = True

# Attach the fine-tuned QLoRA adapter
model.load_adapter("m2af/OrionStarAI-Orion-14B-Base-adapter", "loaded")
model.set_adapter("loaded")

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Generate a sample reply ("안녕하세요, 반갑습니다." = "Hello, nice to meet you.")
inputs = tokenizer("안녕하세요, 반갑습니다.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
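For longer, more natural replies, standard generation parameters can be passed to `generate`; the values and prompt below are only illustrative.
```python
# Illustrative sampling settings; tune max_new_tokens, temperature, and top_p to taste.
# Prompt means "I'm so worn out from work these days."
inputs = tokenizer("요즘 회사 일 때문에 너무 지쳐요.", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```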