---
base_model: microsoft/Phi-3-mini-4k-instruct
license: mit
model_creator: Toshihiko Aoki
model_name: phi3-mini-4k-qlora-jmultiwoz-dolly-amenokaku-alpaca_jp_python-GGUF
prompt_template: <|user|>\n{}<|end|>\n<|assistant|>\n
datasets:
- sakusakumura/databricks-dolly-15k-ja-scored
- nu-dialogue/jmultiwoz
- kunishou/amenokaku-code-instruct
- HachiML/alpaca_jp_python
language:
- ja
---
|
|
|
This repository contains a model fine-tuned with QLoRA (SFT) from the following base model and training data:
|
- Base model: [Phi-3 mini 4k instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
- Training data:
  - sakusakumura/databricks-dolly-15k-ja-scored
  - nu-dialogue/jmultiwoz
  - kunishou/amenokaku-code-instruct
  - HachiML/alpaca_jp_python
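
The prompt format follows the Phi-3 instruct template listed in the metadata above (`<|user|>\n{}<|end|>\n<|assistant|>\n`). Below is a minimal usage sketch with `llama-cpp-python`, assuming a GGUF file from this repository has already been downloaded locally; the file name and the example Japanese instruction are placeholders, not part of this repository's documented interface.

```python
from llama_cpp import Llama

# Placeholder path: substitute the actual GGUF file downloaded from this repository.
llm = Llama(
    model_path="./phi3-mini-4k-qlora-jmultiwoz-dolly-amenokaku-alpaca_jp_python.Q4_K_M.gguf",
    n_ctx=4096,  # matches the 4k context of the base model
)

# Format the prompt with the template from the metadata above.
prompt_template = "<|user|>\n{}<|end|>\n<|assistant|>\n"
prompt = prompt_template.format("日本語で自己紹介してください。")  # example Japanese instruction

output = llm(
    prompt,
    max_tokens=256,
    stop=["<|end|>"],  # stop generation at the end-of-turn token
)
print(output["choices"][0]["text"])
```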