mimicheng/zephyr-7b-sft-qlora-1ep-25jan
Tags: PEFT · Safetensors · HuggingFaceH4/ultrachat_200k · mixtral · dpo-experiment · Generated from Trainer · trl · sft · 4-bit precision · bitsandbytes
License: apache-2.0
zephyr-7b-sft-qlora-1ep-25jan / eval_results.json (at 8b92461)
mimicheng: Model save (commit 2af5144, verified, 8 months ago)
171 Bytes
{
    "epoch": 1.0,
    "eval_loss": NaN,
    "eval_runtime": 1929.665,
    "eval_samples": 23110,
    "eval_samples_per_second": 7.997,
    "eval_steps_per_second": 1.0
}
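A caveat when consuming this file programmatically: the bare `NaN` in `eval_loss` is not valid strict JSON, so strict parsers will reject it. Python's standard `json` module accepts it by default and maps it to `float('nan')`. A minimal sketch (the inlined string mirrors the file contents above; the NaN check is an assumption about how one might want to flag the result, not part of the repo):

```python
import json
import math

# Contents of eval_results.json as shown above; note the bare NaN,
# which is non-standard JSON but accepted by Python's json module.
raw = """{
    "epoch": 1.0,
    "eval_loss": NaN,
    "eval_runtime": 1929.665,
    "eval_samples": 23110,
    "eval_samples_per_second": 7.997,
    "eval_steps_per_second": 1.0
}"""

results = json.loads(raw)  # NaN is parsed into float('nan')

# NaN cannot be tested with ==; use math.isnan to detect it.
if math.isnan(results["eval_loss"]):
    print("warning: eval_loss is NaN (evaluation loss did not produce a finite value)")
```

Strict parsers (e.g. `json.loads(raw, parse_constant=...)` with a raising handler, or most non-Python JSON libraries) would need the NaN handled explicitly before this file can be loaded.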