---
license: apache-2.0
language:
- zh
library_name: transformers
pipeline_tag: text-generation
widget:
- text: "弥漫性血管内凝血、充血性心力衰竭等并发症,应该怎样进行辅助检查和诊断?"
---
# Jimmy_Med
An instruction-fine-tuned model that improves the base model's question-answering performance in the medical domain.
# On modelscope.cn
`dafei1288/Jimmy_Med`
# Base model
Fine-tuned from the `Langboat/bloom-800m-zh` base model.
# Dataset
Trained on the dataset from BenTsao (本草, formerly HuaTuo/华驼).
# A quick start
`pip install -r requirements.txt`
## Code example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the tokenizer from the base model and the merged fine-tuned weights
tokenizer = AutoTokenizer.from_pretrained("model/langboat/bloom-800m-zh")
model = AutoModelForCausalLM.from_pretrained(
    "bloomoutput/bloom-800m-zh-merged-med",
    low_cpu_mem_usage=True,
    torch_dtype=torch.half,
    device_map="cuda",
)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, truncation=True)

# Build the prompt in the "Human: ...\n\nAssistant: " format used during fine-tuning
ipt = "Human: {}\n{}".format("关节部位红肿疼痛,排尿困难,怎么办?", "").strip() + "\n\nAssistant: "
print(pipe(ipt, max_length=400, do_sample=True))
```
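The pipeline returns the full generated text, prompt included. A small helper pair (hypothetical, not part of this project) can build the prompt template shown above and strip everything up to the final `Assistant: ` marker from the output:

```python
def build_prompt(question: str, context: str = "") -> str:
    # Mirror the "Human: ...\n\nAssistant: " template from the example above
    return "Human: {}\n{}".format(question, context).strip() + "\n\nAssistant: "

def extract_answer(generated: str) -> str:
    # Keep only the text after the last "Assistant: " marker
    return generated.rsplit("Assistant: ", 1)[-1].strip()
```

With these helpers, `extract_answer(pipe(build_prompt(q), max_length=400, do_sample=True)[0]["generated_text"])` yields just the model's reply.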
## Contributors
This project is maintained by [Jack 'dafei1288' Lee](https://github.com/dafei1288).
## Acknowledgements
This project builds on the following open-source projects; we thank them and their developers.
- BenTsao (本草): https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese
- Bloom: https://huggingface.co/Langboat/bloom-800m-zh
## Disclaimer
The resources in this project are for academic research only; commercial use is strictly prohibited. When using portions that involve third-party code, strictly follow the corresponding open-source licenses. Content generated by the model is affected by factors such as model computation, randomness, and quantization precision loss, and this project cannot guarantee its accuracy. The vast majority of this project's dataset was generated by a model; even where it agrees with medical facts, it must not be used as a basis for actual medical diagnosis. This project accepts no legal responsibility for any content the model outputs, nor any liability for losses that may arise from using the associated resources and outputs.
## Citation
If you use this project's data or code, or if our work has been helpful to you, please cite:
First technical report: [HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge](https://arxiv.org/pdf/2304.06975)
```
@misc{wang2023huatuo,
title={HuaTuo: Tuning LLaMA Model with Chinese Medical Knowledge},
author={Haochun Wang and Chi Liu and Nuwa Xi and Zewen Qiang and Sendong Zhao and Bing Qin and Ting Liu},
year={2023},
eprint={2304.06975},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Knowledge tuning: [Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese](https://arxiv.org/pdf/2309.04175.pdf)
```
@misc{wang2023knowledgetuning,
title={Knowledge-tuning Large Language Models with Structured Medical Knowledge Bases for Reliable Response Generation in Chinese},
author={Haochun Wang and Sendong Zhao and Zewen Qiang and Zijian Li and Nuwa Xi and Yanrui Du and MuZhen Cai and Haoqiang Guo and Yuhan Chen and Haoming Xu and Bing Qin and Ting Liu},
year={2023},
eprint={2309.04175},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Medical literature knowledge acquisition: [The CALLA Dataset: Probing LLMs' Interactive Knowledge Acquisition from Chinese Medical Literature](https://arxiv.org/pdf/2309.04198.pdf)
```
@misc{du2023calla,
title={The CALLA Dataset: Probing LLMs' Interactive Knowledge Acquisition from Chinese Medical Literature},
author={Yanrui Du and Sendong Zhao and Muzhen Cai and Jianyu Chen and Haochun Wang and Yuhan Chen and Haoqiang Guo and Bing Qin},
year={2023},
eprint={2309.04198},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |