---
license: mit
base_model: microsoft/Phi-3-mini-4k-instruct
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Phi-3-mini-4k-instruct-mbti
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/zmhzmh/huggingface/runs/h3uz5tey)

# Phi-3-mini-4k-instruct-mbti

This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6615
- Accuracy: 0.6220

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.7155        | 0.1977 | 500  | 0.7196          | 0.5898   |
| 0.6873        | 0.3955 | 1000 | 0.6776          | 0.5931   |
| 0.6841        | 0.5932 | 1500 | 0.6620          | 0.6058   |
| 0.6746        | 0.7909 | 2000 | 0.6615          | 0.6220   |
| 0.6655        | 0.9886 | 2500 | 0.6647          | 0.6133   |
| 0.6092        | 1.1864 | 3000 | 0.6873          | 0.5716   |
| 0.5661        | 1.3841 | 3500 | 0.7262          | 0.6092   |
| 0.5565        | 1.5818 | 4000 | 0.6938          | 0.6185   |
| 0.5308        | 1.7795 | 4500 | 0.7100          | 0.6060   |
| 0.5236        | 1.9773 | 5000 | 0.7046          | 0.6127   |

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1
- Datasets 2.20.0
- Tokenizers 0.19.1
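
### Training configuration sketch

The hyperparameters listed above map onto a standard `TrainingArguments` setup. The sketch below is an approximation for reference, not the exact training script: the output directory, evaluation strategy, and metric selection are assumptions; only the numeric values come from this card.

```python
from transformers import TrainingArguments

# Minimal sketch of a TrainingArguments configuration matching the
# hyperparameters reported above. Output path, evaluation schedule, and
# metric_for_best_model are assumptions, not taken from the original run.
training_args = TrainingArguments(
    output_dir="Phi-3-mini-4k-instruct-mbti",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,   # gives the reported total train batch size of 16
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",           # assumed; the results table reports eval every 500 steps
    eval_steps=500,
    metric_for_best_model="accuracy",  # assumed
)
```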
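
### Example usage (sketch)

Because the card reports an accuracy metric, the checkpoint was presumably trained with a classification head. The snippet below is a minimal inference sketch under that assumption; the repository id and any label names are placeholders and should be replaced with the actual values.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumes a sequence-classification fine-tune; replace the id with the
# actual repository path for this checkpoint.
model_id = "Phi-3-mini-4k-instruct-mbti"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "I prefer quiet evenings with a book to large parties."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
# id2label comes from the saved config; falls back to the raw index if unset.
print(model.config.id2label.get(predicted_id, predicted_id))
```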