---
language:
- ko
- en
license: apache-2.0
tags:
- text-generation
- qwen2.5
- korean
- instruct
pipeline_tag: text-generation
---

## 📌 Notice
- ✅ Original model is [beomi/Qwen2.5-7B-Instruct-kowiki-qa](https://huggingface.co/beomi/Qwen2.5-7B-Instruct-kowiki-qa)
- ✅ Quantized by [teddylee777](https://huggingface.co/teddylee777) using [llama.cpp](https://github.com/ggerganov/llama.cpp) (a download sketch follows below)
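
To fetch the quantized weights locally, the snippet below is a minimal sketch using `huggingface_hub`; the `repo_id` is a placeholder assumption (substitute the id of this GGUF repository), while the filename is the one referenced in the Modelfile further down.

```python
# Minimal download sketch. Assumption: the repo_id below is a placeholder;
# replace it with the actual id of this GGUF repository.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="teddylee777/Qwen2.5-7B-Instruct-kowiki-qa-gguf",  # assumed placeholder id
    filename="Qwen2.5-7B-Instruct-kowiki-qa-Q8_0.gguf",        # file used in the Modelfile below
)
print(gguf_path)
```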
## 💬 Template
```
FROM Qwen2.5-7B-Instruct-kowiki-qa-Q8_0.gguf

TEMPLATE """{{- if .System }}
<|im_start|>system
{{ .System }}
<|im_end|>
{{- end }}
<|im_start|>user
{{ .Prompt }}
<|im_end|>
<|im_start|>assistant
"""

SYSTEM """You are Qwen, created by Alibaba Cloud. You are a helpful assistant. 모든 대답은 한국어로 해주세요."""

PARAMETER temperature 0
PARAMETER num_ctx 128000
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
```
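The Modelfile above targets Ollama: it reuses the ChatML-style template of Qwen2.5, and the `SYSTEM` prompt additionally asks the model to answer in Korean ("모든 대답은 한국어로 해주세요", i.e. "please answer everything in Korean"). After registering it with `ollama create <name> -f Modelfile`, you can chat from the terminal with `ollama run <name>` or from Python with the `ollama` client, as in the minimal sketch below; the model name `qwen2.5-7b-kowiki` is an assumption, so use whatever name you passed to `ollama create`.

```python
# Minimal chat sketch. Assumption: the model was registered first, e.g.
#   ollama create qwen2.5-7b-kowiki -f Modelfile
import ollama

response = ollama.chat(
    model="qwen2.5-7b-kowiki",  # assumed name; match the one used with `ollama create`
    messages=[
        # "What is the capital of South Korea?"
        {"role": "user", "content": "대한민국의 수도는 어디인가요?"}
    ],
)
print(response["message"]["content"])
```

Note that `temperature 0` keeps the answers deterministic, and the two `stop` parameters cut generation off at the ChatML turn markers.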
## 🧑‍💻 Helpful Contents
- ✅ [How to load HuggingFace GGUF into LM Studio](https://youtu.be/bANQk--Maxs)
- ✅ [How to test Llama 3 using Ollama](https://youtu.be/12CuUQIPdM4)
- 🇰🇷 [LangChain Tutorial in Korean](https://wikidocs.net/book/14314) (a minimal LangChain sketch follows this list)
- 🎥 Please subscribe and support the channel on [YouTube](https://www.youtube.com/@teddynote)
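
Tying the links above together, here is a minimal, hedged sketch of calling this model from LangChain via the `langchain-ollama` integration; the model name is again an assumption and must match the name used with `ollama create`.

```python
# Minimal LangChain sketch. Assumption: the model is registered in Ollama
# under the name "qwen2.5-7b-kowiki" (match your `ollama create` name).
from langchain_ollama import ChatOllama

llm = ChatOllama(model="qwen2.5-7b-kowiki", temperature=0)
# "Please tell me three traditional Korean foods."
print(llm.invoke("한국의 전통 음식 세 가지를 알려주세요.").content)
```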