---
license: apache-2.0
datasets:
- turing-motors/LLaVA-Pretrain-JA
language:
- ja
---
# LLaVA-JP Model Card
This is a pretrained checkpoint that you can use to instruction-tune your own multimodal models.
Check out the instructions [here](https://github.com/tosiyuki/LLaVA-JP).
## Model details
**Model type:**
LLaVA-JP is a vision-language model that can converse about input images.<br>
This model is an LVLM trained using [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) as the image encoder and [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) as the text decoder. It supports 768 x 768 high-resolution image input via the scaling_on_scales method.
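
To make the architecture concrete, here is a minimal sketch of how the two published components compose, assuming standard `transformers` APIs. The two-layer MLP projector and its dimensions are illustrative assumptions in the LLaVA style; the actual model class, including the scaling_on_scales multi-scale wrapper that enables 768 x 768 input, is defined in the GitHub repository linked above.

```python
import torch
from transformers import AutoModelForCausalLM, SiglipVisionModel

# Image encoder: SigLIP ViT (patch size 14, base resolution 384 x 384).
vision = SiglipVisionModel.from_pretrained("google/siglip-so400m-patch14-384")

# Text decoder: a 1.3B-parameter Japanese language model.
decoder = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-1.3b-v1.0")

# Hypothetical LLaVA-style projector mapping vision features into the
# decoder's embedding space (the real projector is defined in the repo).
projector = torch.nn.Sequential(
    torch.nn.Linear(vision.config.hidden_size, decoder.config.hidden_size),
    torch.nn.GELU(),
    torch.nn.Linear(decoder.config.hidden_size, decoder.config.hidden_size),
)

# One preprocessed image at the encoder's base resolution. For 768 x 768
# input, scaling_on_scales also encodes 2x2 crops of the large image with
# the same 384-px encoder and merges the features (not shown here).
pixel_values = torch.randn(1, 3, 384, 384)
image_tokens = projector(vision(pixel_values=pixel_values).last_hidden_state)
print(image_tokens.shape)  # (1, num_patches, decoder hidden size)
```

The projected image tokens are concatenated with the text embeddings before being fed to the decoder, which is what lets the language model "see" the image.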
## Training dataset
- [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA)
## Acknowledgement
- [LLaVA](https://llava-vl.github.io/)
- [LLM-jp](https://llm-jp.nii.ac.jp/)
- [scaling_on_scales](https://github.com/bfshi/scaling_on_scales/tree/master)
## License
Apache-2.0