---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: EleutherAI/polyglot-ko-1.3b
model-index:
- name: pretrain_w-cot_w-asd
  results: []
---

# pretrain_w-cot_w-asd

This model is a fine-tuned version of [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2650

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.6959        | 0.1725 | 1000  | 0.2915          |
| 0.2871        | 0.3450 | 2000  | 0.2810          |
| 0.2737        | 0.5174 | 3000  | 0.2769          |
| 0.2759        | 0.6899 | 4000  | 0.2737          |
| 0.2708        | 0.8624 | 5000  | 0.2717          |
| 0.2695        | 1.0349 | 6000  | 0.2716          |
| 0.2673        | 1.2074 | 7000  | 0.2713          |
| 0.2697        | 1.3798 | 8000  | 0.2694          |
| 0.2658        | 1.5523 | 9000  | 0.2682          |
| 0.2674        | 1.7248 | 10000 | 0.2673          |
| 0.2641        | 1.8973 | 11000 | 0.2664          |
| 0.2626        | 2.0698 | 12000 | 0.2666          |
| 0.2612        | 2.2422 | 13000 | 0.2662          |
| 0.266         | 2.4147 | 14000 | 0.2655          |
| 0.2626        | 2.5872 | 15000 | 0.2654          |
| 0.2637        | 2.7597 | 16000 | 0.2652          |
| 0.2623        | 2.9322 | 17000 | 0.2650          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
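
### Reproducing the training arguments

For readers who want to replicate the setup, the sketch below maps the hyperparameters listed above onto `transformers.TrainingArguments`. This is an assumption-laden reconstruction, not the original training script: the output directory, evaluation cadence, and PEFT/LoRA configuration are not recorded in this card.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the configuration listed under
# "Training hyperparameters"; values not listed there are assumptions.
training_args = TrainingArguments(
    output_dir="pretrain_w-cot_w-asd",  # assumed output path
    learning_rate=3e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,      # total train batch size: 2 * 4 = 8
    lr_scheduler_type="linear",
    num_train_epochs=3,
    fp16=True,                          # "Native AMP" mixed precision
    evaluation_strategy="steps",        # assumed from the eval log every 1000 steps
    eval_steps=1000,
)
```

The Adam betas=(0.9,0.999) and epsilon=1e-08 listed above are the Trainer defaults, so they are not set explicitly here.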
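
## How to use

The following is a minimal inference sketch, assuming this repository hosts a PEFT (LoRA-style) adapter for the base model. The repo id `pretrain_w-cot_w-asd` below is a placeholder; substitute the actual Hub path of this adapter.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model and its tokenizer.
base_model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-1.3b",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-1.3b")

# Attach the fine-tuned adapter on top of the base model.
# "pretrain_w-cot_w-asd" is a placeholder adapter repo id.
model = PeftModel.from_pretrained(base_model, "pretrain_w-cot_w-asd")
model.eval()

# Polyglot-ko is a Korean model; the prompt means "Hello,".
inputs = tokenizer("안녕하세요,", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```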