---
library_name: peft
base_model: unsloth/llama-3-8b-bnb-4bit
license: apache-2.0
language:
- en
pipeline_tag: zero-shot-classification
tags:
- subjectivity
---
|
|
|
# Model Card: Subjectivity Classification Adapter for Llama 3 8B (PEFT)
|
|
|
<!-- Provide a quick summary of what the model is/does. --> |
|
|
|
|
|
|
|
## Model Details |
|
|
|
### Model Hyperparameters |
|
|
|
The adapter was trained with the following Hugging Face `TrainingArguments`:

```python
import torch
from transformers import TrainingArguments

args = TrainingArguments(
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,   # effective batch size of 8
    warmup_steps = 5,
    num_train_epochs = 12,
    learning_rate = 5e-5,
    fp16 = not torch.cuda.is_bf16_supported(),  # use fp16 only when bf16 is unavailable
    bf16 = torch.cuda.is_bf16_supported(),
    logging_steps = 10,
    optim = "adamw_8bit",
    weight_decay = 0.001,
    lr_scheduler_type = "linear",
    seed = 3407,
    output_dir = "outputs",
    report_to = "none",
)
```
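A note on the configuration above: `per_device_train_batch_size = 2` combined with `gradient_accumulation_steps = 4` gives an effective batch size of 8, because gradients from four micro-batches are summed before each optimizer step. The sketch below illustrates this with a toy scalar model (illustrative only; the function names are hypothetical and this is not unsloth's actual training loop):

```python
def grad(w, x, y):
    """Gradient of the squared error (w*x - y)**2 with respect to w."""
    return 2 * x * (w * x - y)

def accumulated_step(w, batch, micro_size, lr):
    """One optimizer step, accumulating gradients micro-batch by micro-batch."""
    total = 0.0
    for i in range(0, len(batch), micro_size):
        for x, y in batch[i:i + micro_size]:  # process one micro-batch
            total += grad(w, x, y)            # accumulate, no weight update yet
    return w - lr * total / len(batch)        # single update with the mean gradient

def full_batch_step(w, batch, lr):
    """One optimizer step using the whole batch at once."""
    total = sum(grad(w, x, y) for x, y in batch)
    return w - lr * total / len(batch)

# 8 examples = 4 micro-batches of 2, mirroring the ratio in the config above.
batch = [(1.0, 2.0), (2.0, 3.0), (0.5, 1.0), (1.5, 2.5),
         (0.8, 1.6), (2.2, 4.0), (1.1, 2.1), (0.3, 0.7)]
w_acc = accumulated_step(1.0, batch, micro_size=2, lr=5e-5)
w_full = full_batch_step(1.0, batch, lr=5e-5)
assert abs(w_acc - w_full) < 1e-9  # both paths produce the same update
```

Accumulation trades wall-clock time for memory: only one micro-batch of activations is held at a time, which is why it pairs well with a 4-bit quantized 8B base model on a single GPU.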
|
|
|
<!-- Provide a longer summary of what this model is. --> |
|
|
|
|
|
|
|
## Citation [optional] |
|
|
|
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> |
|
|
|
**BibTeX:** |
|
|
|
[More Information Needed] |
|
|
|
|
|
### Framework versions |
|
|
|
- PEFT 0.11.1 |