---
language:
- nl
license: cc-by-nc-4.0
datasets:
- BramVanroy/alpaca-dolly-dutch
inference: false
base_model: ybelkada/falcon-7b-sharded-bf16
model-index:
- name: falcon-7b-ft-alpaca-dolly-dutch
results: []
---
# falcon-7b-ft-alpaca-dolly-dutch
## Model description
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the [BramVanroy/alpaca-dolly-dutch](https://huggingface.co/datasets/BramVanroy/alpaca-dolly-dutch) dataset.
See the original [Falcon 7B model](https://huggingface.co/tiiuae/falcon-7b/) for more information, intended use, and biases.
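The model can be loaded and prompted like any other causal LM in `transformers`. The snippet below is a minimal sketch: the repository name is assumed from the title of this card, and the Dutch Alpaca-style prompt template is an assumption that should be verified against the training dataset.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository name assumed from the title of this card
model_name = "BramVanroy/falcon-7b-ft-alpaca-dolly-dutch"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",       # requires `accelerate`
    torch_dtype="auto",
    trust_remote_code=True,  # Falcon relied on custom modelling code in transformers 4.30
)

# Alpaca-style Dutch instruction prompt (assumed; check the dataset for the exact template)
prompt = (
    "Hieronder staat een instructie. Schrijf een antwoord dat de instructie correct voltooit.\n\n"
    "### Instructie:\nNoem drie steden in Nederland.\n\n"
    "### Antwoord:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```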
## Intended uses & limitations
This model is intended as a (poor) baseline for Dutch generative LLMs. It does not aim to provide SOTA performance and is meant strictly for research purposes, as well as an opportunity for me to test hyperparameters and training stability.
Importantly, the original Falcon 7B model was only trained on English and French. Therefore, Dutch generations should be taken with a massive grain of salt.
## Training and evaluation data
Trained on the synthetic [BramVanroy/alpaca-dolly-dutch](https://huggingface.co/datasets/BramVanroy/alpaca-dolly-dutch) instruction dataset. Because this data is synthetically generated and the model is released under a non-commercial license (cc-by-nc-4.0), commercial use of this model is forbidden. The model is intended for research purposes only.
## Training procedure
The model was trained with LoRA; the adapters were merged into the base model before uploading. The stand-alone adapters are available in the `adapters` branch.
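A hedged sketch of loading the un-merged LoRA adapters from that branch on top of the sharded base model with `peft`; the repository name is assumed from the title of this card.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "ybelkada/falcon-7b-sharded-bf16"
adapter_repo = "BramVanroy/falcon-7b-ft-alpaca-dolly-dutch"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(base_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_name,
    device_map="auto",
    trust_remote_code=True,
)

# Attach the LoRA adapters stored in the `adapters` branch of this repository.
model = PeftModel.from_pretrained(base_model, adapter_repo, revision="adapters")

# Optionally fold the adapters back into the base weights, as was done for the main branch.
model = model.merge_and_unload()
```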
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20
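As a rough, non-authoritative mapping, these settings correspond to `transformers` `TrainingArguments` along the following lines; the original training script is not part of this repository, so treat this as a sketch.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="falcon-7b-ft-alpaca-dolly-dutch",
    learning_rate=1e-3,
    per_device_train_batch_size=32,   # 32 per device x 16 GPUs = total batch size 512
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    optim="adamw_torch",              # Adam with betas=(0.9, 0.999) and eps=1e-8
    bf16=True,                        # assumption: matches the bf16 base model shards
    evaluation_strategy="steps",
    eval_steps=20,                    # matches the 20-step evaluation interval below
    logging_steps=20,
)
```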
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8677 | 0.16 | 20 | 1.6766 |
| 1.5635 | 0.32 | 40 | 1.5643 |
| 1.6353 | 0.48 | 60 | 1.4980 |
| 1.5166 | 0.65 | 80 | 1.4516 |
| 1.4287 | 0.81 | 100 | 1.4096 |
| 1.5791 | 0.97 | 120 | 1.3802 |
| 1.3911 | 1.13 | 140 | 1.3633 |
| 1.356 | 1.29 | 160 | 1.3419 |
| 1.2524 | 1.45 | 180 | 1.3263 |
| 1.4224 | 1.61 | 200 | 1.3056 |
| 1.2266 | 1.77 | 220 | 1.2897 |
| 1.3242 | 1.94 | 240 | 1.2785 |
| 1.03 | 2.1 | 260 | 1.2957 |
| 1.1643 | 2.26 | 280 | 1.2970 |
| 1.1492 | 2.42 | 300 | 1.2779 |
| 1.0679 | 2.58 | 320 | 1.2770 |
| 1.2695 | 2.74 | 340 | 1.2658 |
| 1.0439 | 2.9 | 360 | 1.2612 |
| 0.9453 | 3.06 | 380 | 1.3157 |
| 0.8494 | 3.23 | 400 | 1.3189 |
| 1.0745 | 3.39 | 420 | 1.3073 |
| 0.8679 | 3.55 | 440 | 1.3019 |
| 1.0569 | 3.71 | 460 | 1.2955 |
| 1.0186 | 3.87 | 480 | 1.2890 |
| 0.8413 | 4.03 | 500 | 1.3445 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3