# LLENN-v0.75-Qwen2.5-72b
I liked the previous model, but didn't exactly like the Claude vibes it was giving me, so I removed magnum. Other than that, there aren't any new models to merge in, so the rest is kept as-is.
Please do not ask me for quants; request them from others instead.
All models are ready for testing on featherless.ai as soon as it goes live.
## Models Merged
The following models were included in the merge:
- rombodawg/Rombos-LLM-V2.5-Qwen-72b
- abacusai/Dracarys2-72B-Instruct
- EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0
- ZeusLabs/Chronos-Platinum-72B
- m8than/banana-2-b-72b
## Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0
  - model: ZeusLabs/Chronos-Platinum-72B
  - model: abacusai/Dracarys2-72B-Instruct
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-72b
  - model: m8than/banana-2-b-72b
merge_method: model_stock
base_model: Qwen/Qwen2.5-72B
parameters:
  normalize: true
dtype: bfloat16
```
## Prompt Format
ChatML works for the most part.
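For reference, a minimal sketch of what a ChatML-formatted prompt looks like, built by hand (the template below is the standard ChatML layout; verify against the tokenizer's own chat template before relying on it):

```python
# Assemble a ChatML prompt string (standard <|im_start|>/<|im_end|> layout).
# The helper name and the example messages are illustrative, not part of this model card.
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"  # generation continues from here
    )

print(chatml_prompt("You are a helpful assistant.", "Hello!"))
```

If you load the model with `transformers`, `tokenizer.apply_chat_template` should produce an equivalent string.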
## Sampler Settings
Personally I use the following:
- Temp: 1.2
- Min P: 0.07
- Rep Pen: 1.1

Others have suggested the following:
- Temp: 1.1
- Top P: 0.98
- Min P: 0.05
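The two presets above can be expressed as generation-parameter dicts; the key names here follow common OpenAI-compatible / llama.cpp conventions and may need renaming for your backend (an assumption, not something this card specifies):

```python
# Suggested sampler presets from this card, as backend-agnostic dicts.
# Key names (temperature, min_p, top_p, repetition_penalty) follow common
# inference-server conventions; adapt them to your frontend or API.
personal_preset = {
    "temperature": 1.2,
    "min_p": 0.07,
    "repetition_penalty": 1.1,
}

community_preset = {
    "temperature": 1.1,
    "top_p": 0.98,
    "min_p": 0.05,
}
```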