---
license: apache-2.0
datasets:
- BAAI/IndustryCorpus2
- BAAI/IndustryCorpus2_medicine_health_psychology_traditional_chinese_medicine
base_model:
- meta-llama/Meta-Llama-3-8B
language:
- zh
- en
---

This model uses Llama3-8B as the base model and uses the [BAAI/IndustryCorpus2](https://huggingface.co/datasets/BAAI/IndustryCorpus2) dataset for data mixing and domain pre-training, yielding a medical-domain pre-trained model with both Chinese and English capabilities.
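
Below is a minimal inference sketch with Hugging Face Transformers. The repository id used here is a placeholder for this model card's actual id, and since this is a pre-trained (not instruction-tuned) checkpoint, plain text continuation is the natural interface.

```python
# Minimal inference sketch with transformers.
# "BAAI/CareBot-Medical-Llama3-8B" is a hypothetical placeholder for this repo's id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BAAI/CareBot-Medical-Llama3-8B"  # replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Pre-trained checkpoint: use plain text continuation rather than a chat template.
prompt = "高血压的常见症状包括"  # "Common symptoms of hypertension include"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```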


## Training details

To gradually align the data distribution between pre-training and fine-tuning and minimize the loss of knowledge acquired during pre-training, we design a novel two-stage continued pre-training (CPT) strategy. This approach ensures a stable integration of medical knowledge into the LLM.

### Stable CPT

To balance medical domain knowledge with general knowledge, we first implement a Stable CPT stage, which ensures that the model maintains and enhances its general language understanding while concentrating on medical information. In this stage, we combine a high-quality medical pre-training corpus with general data at a 19:1 ratio, with a token-level Chinese:English distribution of 1:9.
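
As a rough illustration, the two stated ratios can be turned into per-bucket sampling weights, assuming the language split applies uniformly within both the medical and the general portion (an assumption on our part; the card only gives the two aggregate ratios):

```python
# Sketch: derive token-level sampling weights for the Stable CPT mixture from
# the two ratios stated above. Treating the domain ratio (19:1 medical:general)
# and the language ratio (1:9 zh:en) as independent is an assumption.
domain_weights = {"medical": 19 / 20, "general": 1 / 20}   # 19:1
language_weights = {"zh": 1 / 10, "en": 9 / 10}            # 1:9

stable_cpt_weights = {
    (domain, lang): d_w * l_w
    for domain, d_w in domain_weights.items()
    for lang, l_w in language_weights.items()
}

for bucket, weight in stable_cpt_weights.items():
    print(bucket, round(weight, 3))
# ('medical', 'zh') 0.095
# ('medical', 'en') 0.855
# ('general', 'zh') 0.005
# ('general', 'en') 0.045
```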

### Boost CPT

To integrate medical knowledge during the model pre-training phase and facilitate a smooth transition to domain-specific tasks, we then design a Boost CPT phase. In this phase, we combine a very high-quality medical pre-training corpus with open-source medical SFT data at a 1:1 ratio, with a token-level Chinese:English distribution of 4:6. Notably, throughout these two phases, we progressively increase the proportion of Chinese data.
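
A minimal sketch of the 1:1 Boost CPT interleaving with the `datasets` library is shown below. The toy records and the way SFT instruction/response pairs are flattened into plain pre-training text are our assumptions, not the authors' exact pipeline.

```python
from datasets import Dataset, interleave_datasets

# Toy stand-ins for the two Boost CPT sources (the real corpora are not detailed in this card).
medical_pt = Dataset.from_dict(
    {"text": ["Aspirin inhibits cyclooxygenase ...", "阿司匹林抑制环氧化酶……"]}
)
medical_sft = Dataset.from_dict({
    "instruction": ["What is hypertension?", "什么是高血压？"],
    "output": ["Hypertension is persistently elevated blood pressure ...", "高血压是指动脉血压持续升高……"],
})

def sft_to_text(example):
    # Flatten an instruction/response pair into one plain pre-training document
    # (the exact serialization used by the authors is not specified).
    return {"text": example["instruction"] + "\n" + example["output"]}

medical_sft = medical_sft.map(sft_to_text, remove_columns=["instruction", "output"])

# 1:1 ratio between the medical pre-training corpus and the SFT-derived text.
boost_cpt_mix = interleave_datasets(
    [medical_pt, medical_sft],
    probabilities=[0.5, 0.5],
    seed=42,
    stopping_strategy="all_exhausted",
)
print(boost_cpt_mix[0])
```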

## Model evaluation results

We evaluate our CPT model, CareBot, on seven common medical benchmarks. Since our goal is to train a medical model that performs well in both Chinese and English, we strive to improve its Chinese medical ability while ensuring that its English medical ability is only slightly reduced. We observe that on English benchmarks (MMLU-Med, PubMedQA, MedQA, MedMCQA), the performance of CareBot (Stable CPT) and CareBot (Stable CPT & Boost CPT) shows a slight decrease. This is expected, given that the Llama3-8B base model already has strong English capabilities. However, on Chinese benchmarks (C-Eval-Med, CMMLU-Med, CMB), our models demonstrate significant improvements, with particularly notable gains for the model trained with the two-stage approach. This confirms that our two-stage CPT strategy effectively integrates medical domain knowledge into the model, resulting in robust enhancements to its Chinese medical capabilities.
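
For reference, a sketch of running this kind of benchmark with the open-source lm-evaluation-harness is shown below. The task names, few-shot settings, batch size, and repository id are assumptions; the card does not specify the authors' evaluation setup.

```python
# Sketch: evaluating a checkpoint on medical benchmarks with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). Task names and settings are
# assumptions, not the configuration used for the reported numbers.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BAAI/CareBot-Medical-Llama3-8B,dtype=bfloat16",  # hypothetical repo id
    tasks=["pubmedqa", "medqa_4options", "medmcqa", "cmmlu", "ceval-valid"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```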

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/gOhf0VMsUnb99OpNrPN9_.png)

Below are the detailed metrics.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642f6c64f945a8a5c9ee5b5d/5JxKlPb3JL48gA8l3bzsT.png)

## Citation

@misc{zhao2024carebot,
      title={CareBot: A Pioneering Full-Process Open-Source Medical Language Model}, 
      author={Lulu Zhao and Weihao Zeng and Xiaofeng Shi and Hua Zhou and Yonghua Lin},
      year={2024},
      eprint={},
      archivePrefix={arXiv},
      primaryClass={}
}