
solar-kor-resume

Update @ 2024.05.27: First release of Ocelot-Ko-self-instruction-10.8B-v1.0

This model card corresponds to the 10.8B Instruct version of the Solar-Ko model.

Training was done on an A100-80GB GPU.

Resources and Technical Documentation:

Citation

@misc{cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0,
    author       = { {frcp, nebchi, pepperonipizza97} },
    title        = { solar-kor-resume },
    year         = 2024,
    url          = { https://huggingface.co/cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0 },
    publisher    = { Hugging Face }
}

Model Developers: frcp, nebchi, pepperonipizza97

Model Information

Resume proofreading and evaluation; summary description of inputs and outputs.

Description

It has been trained on a larger amount of Korean tokens than most other LLMs, enabling it to generate high-quality Korean text.

Model Architecture

Solar is an auto-regressive language model that is scaled up using the DUS (depth up-scaling) method.
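Depth up-scaling, as introduced with SOLAR 10.7B, builds a deeper model by duplicating a base model's transformer layers and stacking overlapping front and back slices before continued pretraining. Below is a minimal sketch of the idea for a Llama-style checkpoint; the layer split and the copy-based approach are illustrative assumptions, not the exact upstream training code.

import copy
import torch
from transformers import AutoModelForCausalLM

def depth_up_scale(model_id, keep_front=24, keep_back=24):
    # Load the base model and make a duplicate of it
    base = AutoModelForCausalLM.from_pretrained(model_id)
    dup = copy.deepcopy(base)
    # Stack the first `keep_front` layers on top of the last `keep_back` layers,
    # e.g. 32 base layers -> 48 up-scaled layers as in SOLAR 10.7B
    front = list(base.model.layers[:keep_front])
    back = list(dup.model.layers[-keep_back:])
    base.model.layers = torch.nn.ModuleList(front + back)
    base.config.num_hidden_layers = keep_front + keep_back
    return base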

You can find the dataset list here: https://huggingface.co/datasets/cpm-ai/gpt-self-introduction-all
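The training data can be inspected with the datasets library; a minimal sketch follows (the split name and field layout are assumptions, so check the dataset card before use).

from datasets import load_dataset

# Load the self-introduction instruction dataset referenced above
ds = load_dataset("cpm-ai/gpt-self-introduction-all")
print(ds)              # available splits
print(ds["train"][0])  # field names depend on the dataset; inspect before use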

Inputs and outputs

  • Input: Text string, such as a question, a prompt, or a document to be Proofreaded.
  • Output: Generated Korea text in response to the input, such as an answer to a question, or a evaluation of a resume.

Running the model on a single / multi GPU

# pip install transformers accelerate flash_attn sentencepiece
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, pipeline

tokenizer = AutoTokenizer.from_pretrained("cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0")
model = AutoModelForCausalLM.from_pretrained("cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0", device_map="auto")

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=4096, streamer=streamer)

# Prompt (in Korean): "You are a self-introduction (resume) proofreading expert. You must proofread
# and rewrite the given self-introduction. The output must follow the [첨삭] ('proofread') format.
# The self-introduction is: [...]"
text = """λ„ˆλŠ” μžκΈ°μ†Œκ°œμ„œ 첨삭 μ „λ¬Έκ°€μ•Ό.
주어진 μžκΈ°μ†Œκ°œμ„œλ₯Ό μ²¨μ‚­ν•΄μ„œ λ‹€μ‹œ μž‘μ„±ν•΄μ•Όν•΄.
좜λ ₯ν˜•μ‹μ€ λ‹€μŒμ„ μ§€μΌœμ•Όν•΄.

[첨삭]

λ‹€μŒμ΄ μžκΈ°μ†Œκ°œμ„œμ•Ό :
[μ €λŠ” μ–΄λ¦° μ‹œμ ˆλΆ€ν„° μ™„λ²½μ£Όμ˜μ μΈ 성격을 가지고 μžˆμ—ˆμŠ΅λ‹ˆλ‹€. 이둜 인해 항상 μžμ‹ μ˜ λŠ₯λ ₯에 λŒ€ν•œ λΆˆμ•ˆκ°μ„ 느끼며 κ³Όλ„ν•œ 슀트레슀λ₯Ό λ°›μ•„μ™”μŠ΅λ‹ˆλ‹€. ν•™μ°½ μ‹œμ ˆμ—λŠ” κ³Όμ œλ‚˜ ν”„λ‘œμ νŠΈλ₯Ό μ™„λ²½ν•˜κ²Œ λ§ˆλ¬΄λ¦¬ν•˜μ§€ λͺ»ν•˜λ©΄ 자쑴감이 크게 ν”λ“€λ ΈμŠ΅λ‹ˆλ‹€. 쀑학ꡐ μ‹œμ ˆμ—λŠ” ν•œ 가지 λ¬Έμ œμ— λ„ˆλ¬΄ 였랜 μ‹œκ°„μ„ νˆ¬μžν•˜μ—¬ λ‹€λ₯Έ ν•™μŠ΅ 기회λ₯Ό λ†“μΉ˜κΈ°λ„ ν–ˆμŠ΅λ‹ˆλ‹€. μ΄λŸ¬ν•œ κ²½ν—˜λ“€μ€ μ €μ—κ²Œ 완벽함을 μΆ”κ΅¬ν•˜λŠ” 것이 μ’…μ’… ν˜„μ‹€μ— λΆ€μ ν•©ν•˜λ‹€λŠ” 것을 κΉ¨λ‹¬κ²Œ ν–ˆμŠ΅λ‹ˆλ‹€.

고등학ꡐ와 λŒ€ν•™κ΅μ— μ§„ν•™ν•˜λ©΄μ„œλ„ μ΄λŸ¬ν•œ μ™„λ²½μ£Όμ˜μ μΈ 성격을 κ·Ήλ³΅ν•˜κΈ° μœ„ν•΄ λ…Έλ ₯ν–ˆμŠ΅λ‹ˆλ‹€. ν•˜μ§€λ§Œ μ—¬μ „νžˆ μ‹€νŒ¨λ₯Ό λ°›μ•„λ“€μ΄λŠ” 것이 μ–΄λ ΅κ³ , μžμ‹ μ˜ ν•œκ³„λ₯Ό μΈμ •ν•˜λŠ” 것이 μ–΄λ €μ› μŠ΅λ‹ˆλ‹€. μ΄λŸ¬ν•œ 과정을 톡해 완벽함에 λŒ€ν•œ 강박이 μ €μ˜ μ„±μž₯과정을 μ œμ•½ν•˜λŠ” μš”μΈμ΄ λ˜μ—ˆμŒμ„ κΉ¨λ‹¬μ•˜μŠ΅λ‹ˆλ‹€.]"""

messages = [
    {
        "role": "user",
        "content": "{}".format(text)
    }
]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(
    prompt,
    do_sample=True,
    temperature=0.2,
    add_special_tokens=True
)
print(outputs[0]["generated_text"][len(prompt):])
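If a single GPU does not have enough memory for the BF16 weights, the model can also be loaded in 4-bit with bitsandbytes. This is a hedged sketch; the quantization settings below are assumptions and not part of the original card.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical 4-bit (NF4) loading configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)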

Results (model output, in Korean)

[첨삭]

μ–΄λ¦° μ‹œμ ˆλΆ€ν„° μ €λŠ” μ™„λ²½ν•œ κ²°κ³Όλ₯Ό μΆ”κ΅¬ν•˜λ©° 슀슀둜λ₯Ό μ••λ°•ν•΄μ˜¨ μ„±κ²©μ΄μ—ˆμŠ΅λ‹ˆλ‹€. μ΄λŠ” ν•™μ—…κ³Ό κ΄€λ ¨λœ 슀트레슀둜 이어져, 과제λ₯Ό μ™„μˆ˜ν•˜λ”λΌλ„ λ§Œμ‘±λ³΄λ‹€λŠ” 뢈만쑱의 감정이 더 컸던 μ‹œκΈ°μ˜€μŠ΅λ‹ˆλ‹€. 특히 쀑학ꡐ λ•Œ ν•œ 문제λ₯Ό ν•΄κ²°ν•˜κΈ° μœ„ν•΄ μ§€λ‚˜μΉ˜κ²Œ μ˜€λž«λ™μ•ˆ 맀달렀 κ²°κ΅­ μ€‘μš”ν•œ μ‹œκΈ°λ₯Ό λ†“μΉœ κ²½ν—˜μ€ 제 μ„±μž₯에 큰 영ν–₯을 λ―Έμ³€μŠ΅λ‹ˆλ‹€. 이 κ³Όμ •μ—μ„œ μ™„λ²½μ£Όμ˜λ₯Ό μΆ”κ΅¬ν•˜λŠ” 것이 ν˜„μ‹€μ μ΄μ§€ μ•Šμ„ 수 μžˆλ‹€λŠ” 사싀을 κΉ¨λ‹«κΈ° μ‹œμž‘ν–ˆμŠ΅λ‹ˆλ‹€.

고등학ꡐ와 λŒ€ν•™μ—μ„œλŠ” μ΄λŸ¬ν•œ μ„±ν–₯을 κ°œμ„ ν•˜κ³ μž λ‹€μ–‘ν•œ λ…Έλ ₯을 κΈ°μšΈμ˜€μŠ΅λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, λͺ©ν‘œλ₯Ό μ„ΈλΆ„ν™”ν•˜κ³  λ‹¨κ³„λ³„λ‘œ μ ‘κ·Όν•˜λ©΄μ„œ 성취감과 μžμ‹ κ°μ„ ν‚€μš°κΈ° μœ„ν•΄ λ…Έλ ₯ν–ˆμŠ΅λ‹ˆλ‹€. λ˜ν•œ, νŒ€ ν”„λ‘œμ νŠΈμ—μ„œ 역할을 λΆ„λ‹΄ν•˜κ³  ν˜‘λ ₯ν•¨μœΌλ‘œμ¨ 개인의 ν•œκ³„λ³΄λ‹€ 전체 μ„±κ³Όλ₯Ό μš°μ„ μ‹œν•˜λŠ” 법을 λ°°μ› μŠ΅λ‹ˆλ‹€. 비둝 아직 μ™„λ²½ν•¨μ΄λΌλŠ” κ΅΄λ ˆλ‘œλΆ€ν„° μ™„μ „νžˆ μžμœ λ‘œμ›Œμ§€μ§€λŠ” λͺ»ν–ˆμ§€λ§Œ, 이λ₯Ό κ·Ήλ³΅ν•˜κ³  μ„±μž₯ν•  수 μžˆλŠ” 방법을 μ°Ύμ•˜λ‹€λŠ” μ μ—μ„œ μžλΆ€μ‹¬μ„ λŠλ‚λ‹ˆλ‹€.

Evaluation Results - LogicKor

Model                                     Writing  Comprehension  Grammar
HyperClovaX                               8.50     9.50           8.50
solar-1-mini-chat                         8.50     7.00           5.21
allganize/Llama-3-Alpha-Ko-8B-Instruct    8.50     8.35           4.92
Synatra-kiqu-7B                           4.42     5.71           4.50
Ocelot-ko-10.8B                           8.57     7.00           6.57

Evaluation Results - KoBEST

λͺ¨λΈ λͺ…μΉ­ Average
n=0 n=5
HellaSwag
n=0  n=5
COPA
n=0  n=5
BooIQ
n=0  n=5
KoGPT 58.2    63.7 55.9    58.3 73.5    72.9 45.1    59.8
Polyglot-ko-13B 62.4    68.2 59.5    63.1 79.4    81.1 48.2    60.4
LLaMA 2-13B 45.2    60.5 41.3    44.0 59.3    63.8 34.9    73.8
Baichuan 2-13B 52.7    53.9 39.2    39.6 60.6    60.6 58.4    61.5
QWEN-14B 47.8    66.4 45.3    46.8 64.9    68.9 33.4    83.5
Orion-14B-Chat 68.8    73.2 47.0    49.6 77.7    79.4 81.6    90.7
Ocelot-ko-10.8B 72.5    75.9 50.0    51.4 75.8    82.5 91.7    93.8
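KoBEST scores of this kind are commonly produced with EleutherAI's lm-evaluation-harness. A hedged example of how such numbers could be reproduced follows; the harness version and exact settings used for the table above are not stated on this card.

import lm_eval

# 5-shot evaluation on the KoBEST tasks reported above (use num_fewshot=0 for the n=0 columns)
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=cpm-ai/Ocelot-Ko-self-instruction-10.8B-v1.0,dtype=bfloat16",
    tasks=["kobest_hellaswag", "kobest_copa", "kobest_boolq"],
    num_fewshot=5,
)
print(results["results"])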

Software

Training was done using QLoRA
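A minimal sketch of a QLoRA setup with peft and bitsandbytes is shown below; the base checkpoint, LoRA rank, and target modules are illustrative assumptions, since the actual training configuration is not published on this card.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA freezes the base weights in 4-bit and trains small LoRA adapters on top
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "BASE_MODEL_ID",  # placeholder: the exact Solar-Ko base checkpoint is not stated here
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Illustrative LoRA adapter configuration
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()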
