|
--- |
|
library_name: peft |
|
datasets: |
|
- timdettmers/openassistant-guanaco |
|
pipeline_tag: conversational |
|
base_model: internlm/internlm-chat-7b |
|
--- |
|
|
|
<div align="center"> |
|
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/> |
|
|
|
|
|
[![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner) |
|
|
|
|
|
</div> |
|
|
|
## Model |
|
|
|
internlm-chat-7b-qlora-oasst1 is a QLoRA adapter fine-tuned from [InternLM-Chat-7B](https://huggingface.co/internlm/internlm-chat-7b) on the [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset using [XTuner](https://github.com/InternLM/xtuner).
|
|
|
|
|
## Quickstart |
|
|
|
### Usage with XTuner CLI |
|
|
|
#### Installation |
|
|
|
```shell |
|
pip install xtuner |
|
``` |
|
|
|
#### Chat |
|
|
|
```shell |
|
xtuner chat internlm/internlm-chat-7b --adapter xtuner/internlm-chat-7b-qlora-oasst1 --prompt-template internlm_chat |
|
``` |
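The `--prompt-template internlm_chat` flag tells XTuner to wrap each turn in InternLM's chat markup before generation. As a rough, unofficial sketch (the authoritative template lives in XTuner's template registry, and the exact tokens may differ), the formatting looks roughly like this:

```python
def build_internlm_chat_prompt(user_message: str) -> str:
    """Approximate the internlm_chat template: the user turn is wrapped in
    <|User|>: ... <eoh>, and the model completes the text after <|Bot|>:.
    Illustrative only -- consult XTuner's source for the exact definition."""
    return f"<|User|>:{user_message}<eoh>\n<|Bot|>:"

prompt = build_internlm_chat_prompt("Hello!")
```

Using a mismatched prompt template at inference time is a common cause of degraded chat quality, so it should match the template used during fine-tuning.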
|
|
|
#### Fine-tune |
|
|
|
Use the following command to quickly reproduce the fine-tuning results. |
|
|
|
```shell |
|
xtuner train internlm_chat_7b_qlora_oasst1_e3 |
|
``` |
|
|