---
license: apache-2.0
language:
- en
---
## Model Description

This model uses the SLERP merge method, combining the best models on the Open LLM Leaderboard as of December 14th:
- base model: GreenNode/GreenNodeLM-7B-v1olet
The YAML config file for this model:

```yaml
slices:
  - sources:
      - model: viethq188/LeoScorpius-7B-Chat-DPO
        layer_range: [0, 32]
      - model: GreenNode/GreenNodeLM-7B-v1olet
        layer_range: [0, 32]
merge_method: slerp
base_model: GreenNode/GreenNodeLM-7B-v1olet
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
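For intuition, spherical linear interpolation (SLERP) blends two weight tensors along the arc between them rather than along a straight line, and the `t` lists above are anchor values interpolated across the 32 layers per filter. The following is a minimal NumPy sketch of the idea, not mergekit's actual implementation (which handles per-tensor flattening and more edge cases):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Illustrative sketch only: mergekit's real SLERP handles degenerate
    cases and tensor bookkeeping that are omitted here.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_n = a_flat / (np.linalg.norm(a_flat) + eps)
    b_n = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two tensors
    if theta < eps:                  # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b

# The `t` anchors (e.g. [0, 0.5, 0.3, 0.7, 1] for self_attn) are spread
# over the layer range, so early layers lean toward the base model (t=0)
# and late layers toward the other model (t=1):
layer_ts = np.interp(np.linspace(0, 1, 32),
                     np.linspace(0, 1, 5),
                     [0, 0.5, 0.3, 0.7, 1])
```

With `t = 0` the result is the first model's tensor, with `t = 1` the second's; intermediate values trace the arc between them.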
## About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots that are practical, useful assistants for humans and businesses in everyday life.
## Run this model
You can run this model using Jan on Mac, Windows, or Linux.
**Jan is an open-source ChatGPT alternative that is:**
- 💻 **100% offline on your machine:** Your conversations remain confidential and visible only to you.
- 🗂️ **An Open File Format:** Conversations and model settings stay on your computer and can be exported or deleted at any time.
- 🌐 **OpenAI Compatible:** Local server on port 1337 with OpenAI-compatible endpoints.
- 🌍 **Open Source & Free:** We build in public; check out our GitHub.
- Please use trinity-v1-GGUF when running this model in Jan.
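Since Jan's local server exposes OpenAI-compatible endpoints on port 1337, you can query the model with any OpenAI-style client. A minimal stdlib sketch (the model name `"trinity-v1"` is an assumption; use whatever name appears in your Jan model list, with the server running):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request payload.
payload = {
    "model": "trinity-v1",  # assumed name; check your Jan model list
    "messages": [{"role": "user", "content": "Hello!"}],
}

def chat(payload, url="http://localhost:1337/v1/chat/completions"):
    """POST the payload to Jan's local OpenAI-compatible endpoint
    and return the assistant's reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Any client library that targets the OpenAI chat-completions API should work the same way by pointing its base URL at `http://localhost:1337/v1`.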
## Jan Model Merger
This is a test project for merging models.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | ? |
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ? |
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |