---
license: other
base_model: 01-ai/Yi-1.5-6B
tags:
- llama-factory
- freeze
- generated_from_trainer
model-index:
- name: yi-1.5-6b-yub-vocab-expanded
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# yi-1.5-6b-yub-vocab-expanded
This model is a fine-tuned version of [01-ai/Yi-1.5-6B](https://huggingface.co/01-ai/Yi-1.5-6B), trained with layer-freezing on a 300M-token Cantonese dataset in order to learn embeddings for the new tokens in the expanded vocabulary. The model has not undergone continued pre-training, and is therefore not recommended as a base for further pre-training.
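
A minimal loading sketch using 🤗 Transformers is shown below; the repository id is assumed to match this card's name and may need to be adjusted to the actual Hub path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; replace with the actual Hub path hosting this model.
model_id = "yi-1.5-6b-yub-vocab-expanded"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The expanded tokenizer should segment Cantonese text using the newly added tokens.
inputs = tokenizer("今日天氣點樣？", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```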
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
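
For reference, a hedged sketch of how these settings map onto `transformers.TrainingArguments`; the actual run was driven by LLaMA-Factory, so this is an approximate equivalent rather than the exact training configuration.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above; not the original
# LLaMA-Factory config, only the standard Transformers equivalents.
training_args = TrainingArguments(
    output_dir="yi-1.5-6b-yub-vocab-expanded",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,  # 8 * 16 = total train batch size of 128
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```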
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.19.1