---
license: apache-2.0
library_name: transformers
tags:
- experimental
- peft
- rslora
---
# Model Card

This model was produced by altering the parameters of a [mergekit](https://github.com/arcee-ai/mergekit) layer slice of [SciPhi/SciPhi-Self-RAG-Mistral-7B-32k](https://huggingface.co/SciPhi/SciPhi-Self-RAG-Mistral-7B-32k).
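The exact slice configuration is not published in this card; a hypothetical mergekit config for this style of passthrough layer slice might look like:

```yaml
# Hypothetical mergekit config -- the actual layer ranges used are not published.
slices:
  - sources:
      - model: SciPhi/SciPhi-Self-RAG-Mistral-7B-32k
        layer_range: [0, 4]   # keep only a thin slice of the original layers
merge_method: passthrough
dtype: float16
```

A `passthrough` merge copies the selected layers unchanged, which is what preserves the original weight associations discussed below.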
## Model Details

### Model Description
This is an experimental model built from a minimal set of layer slices, intended to preserve core properties of the original model that can be trained further.
The parameter count has been reduced to just under 600 million. The experiment tests how far slicing can be taken while the retained layers still keep their original weight associations.
The model will be used for layer analysis and trained on a close approximation of the SciPhi datasets, using the trainable parameters to probe which of the original weights remain usable.
This process is ongoing, to see whether rank-stabilized tuning can preserve and enhance the original model's information by exploiting the weight associations that survive in the retained layers, even after resizing.
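The "rank-stabilized" part refers to rsLoRA's change to the LoRA scaling factor: instead of scaling the low-rank update by `alpha / r`, it scales by `alpha / sqrt(r)`, which keeps the update from vanishing at higher ranks. A minimal sketch of the difference (the rank and alpha values here are illustrative, not the ones used for this model):

```python
import math

def lora_scale(alpha: float, r: int, rslora: bool = False) -> float:
    """Scaling applied to the low-rank update B @ A.

    Standard LoRA uses alpha / r; rank-stabilized LoRA (rsLoRA)
    uses alpha / sqrt(r), which decays far more slowly as r grows.
    """
    return alpha / math.sqrt(r) if rslora else alpha / r

# At rank 64 with alpha 16, standard LoRA shrinks the update 8x more:
print(lora_scale(16, 64))               # 0.25
print(lora_scale(16, 64, rslora=True))  # 2.0
```

In peft this behavior is enabled with `use_rslora=True` on `LoraConfig`.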
### Process
After each training run, the model is merged with its trained LoRA adapter to consolidate the weights, and the merged model serves as the base model for the next run.
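Consolidating a LoRA adapter just folds the low-rank product back into each target weight matrix (in peft this is what `merge_and_unload()` does). A minimal NumPy sketch of the merge step, with illustrative shapes:

```python
import numpy as np

def merge_lora(W: np.ndarray, A: np.ndarray, B: np.ndarray, alpha: float) -> np.ndarray:
    """Fold a LoRA update into the base weight: W' = W + (alpha / r) * (B @ A)."""
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 16, 4
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))  # B is zero-initialized, so a fresh adapter is a no-op

# Merging an untrained adapter leaves the base weights unchanged.
assert np.allclose(merge_lora(W, A, B, alpha=8.0), W)
```

Once merged, the adapter matrices are discarded and the consolidated weights become the starting point for the next round of tuning.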
The LoRA adapter can be found here: [jtatman/sciphi-mini-600m-unsloth-lora-v2](https://huggingface.co/jtatman/sciphi-mini-600m-unsloth-lora-v2)
The model is trained with [unsloth](https://github.com/unslothai/unsloth), which integrates with Hugging Face's TRL library for both supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
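A hedged sketch of how one such training round might be set up with unsloth and TRL. The checkpoint path, dataset name, and hyperparameters below are illustrative placeholders, not the values used for this model:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the current merged checkpoint (path illustrative).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="./merged-base",
    max_seq_length=2048,
)

# Attach rank-stabilized LoRA adapters (rank/alpha illustrative).
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=64,
    use_rslora=True,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    # Placeholder for a close approximation of the SciPhi datasets.
    train_dataset=load_dataset("path/to/sciphi-approximation", split="train"),
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

After training, the adapter is merged back into the base (e.g. via `merge_and_unload()`) before starting the next round, as described above.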