---
license: cc-by-nc-sa-4.0
datasets:
- nlpai-lab/kullm-v2
language:
- ko
pipeline_tag: text-generation
---
A simple instruction fine-tuned model trained for educational purposes (updated 2023/08/06)
- Pretrained model: skt/kogpt2-base-v2 (https://github.com/SKT-AI/KoGPT2)
- Training data: kullm-v2 (https://huggingface.co/datasets/nlpai-lab/kullm-v2)
```python
from transformers import AutoModelForCausalLM, PreTrainedTokenizerFast

# Load the tokenizer with the special tokens used by skt/kogpt2-base-v2
tokenizer = PreTrainedTokenizerFast.from_pretrained(
    "hyunjae/skt-kogpt2-kullm-v2",
    bos_token='</s>', eos_token='</s>', unk_token='<unk>',
    pad_token='<pad>', mask_token='<mask>',
    padding_side="right", model_max_length=512,
)
model = AutoModelForCausalLM.from_pretrained("hyunjae/skt-kogpt2-kullm-v2").to("cuda")

# Prompt template; the system line reads "Generate an appropriate response to the user's question."
PROMPT = "### system:사용자의 질문에 맞는 적절한 응답을 생성하세요.\n### 사용자:{instruction}\n### 응답:"
text = PROMPT.format_map({"instruction": "안녕? 너가 할 수 있는게 뭐야?"})  # "Hi? What can you do?"

input_ids = tokenizer.encode(text, return_tensors="pt").to(model.device)
gen_ids = model.generate(
    input_ids,
    repetition_penalty=2.0,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    num_beams=4,
    no_repeat_ngram_size=4,
    max_new_tokens=128,
    do_sample=True,
    top_k=50,
)
generated = tokenizer.decode(gen_ids[0])
print(generated)
```
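
Note that `generate` on a decoder-only model returns the prompt tokens followed by the completion, so the decoded string above still contains the full template. A minimal post-processing sketch, assuming the decoded output keeps the `### 응답:` (response) marker and the `</s>` end-of-sequence token:

```python
# Keep only the text after the response marker from the prompt template
# (assumption: the marker survives decoding; it is part of the input prompt).
response = generated.split("### 응답:")[-1]
response = response.replace("</s>", "").strip()
print(response)
```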