# Finetuned_TinyLlama
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unknown dataset.
## Model description
This model was made following a tutorial by Noa; a more complete model and a demo are available at nroggendorff/mayo.
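
To try the model, a minimal loading-and-generation sketch with `transformers` is shown below. The repository id `not-lain/Finetuned_TinyLlama` comes from this card; the prompt and the generation settings are illustrative assumptions, and the sketch assumes the tokenizer keeps the base model's chat template.

```python
# Minimal sketch: load the fine-tuned checkpoint and chat with it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "not-lain/Finetuned_TinyLlama"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# Illustrative prompt; sampling settings are assumptions, not values from the card.
messages = [{"role": "user", "content": "Hello! Who are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(device)

output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```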
## Limitations
- The model is easily gaslit.
- It is uncensored, and there are no safety features.
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
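
For readers who want to reproduce this setup, the values above map onto a `transformers` `TrainingArguments` configuration roughly as sketched below. The output directory is a placeholder, and the dataset/`Trainer` wiring is omitted because the card does not identify the training data.

```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-tinyllama",  # placeholder path, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,      # Adam betas and epsilon as listed on the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```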
## Training results
- training_loss: 2.0859998975481306
## Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
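
To confirm that a local environment matches these versions, one option is a small check like the sketch below; the expected version strings are exactly the ones listed above.

```python
# Sketch: compare installed package versions against those listed on the card.
import importlib.metadata as metadata

expected = {
    "transformers": "4.41.1",
    "torch": "2.3.0+cu121",
    "datasets": "2.19.1",
    "tokenizers": "0.19.1",
}
for package, wanted in expected.items():
    try:
        installed = metadata.version(package)
    except metadata.PackageNotFoundError:
        installed = "not installed"
    status = "OK" if installed == wanted else "MISMATCH"
    print(f"{package}: installed {installed}, card lists {wanted} -> {status}")
```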