---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/M7-7b
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/MyModelsMerge-7b
- AurelPx/Percival_01-7b-slerp
- MatthieuJ/Jason1903_SLERP
- MTSAIR/multi_verse_model
- Gille/StrangeMerges_30-7B-slerp
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- yam-peleg/Experiment28-7B
- mlabonne/UltraMerge-7B
base_model:
- liminerity/M7-7b
- Kukedlc/NeuralSirKrishna-7b
- Kukedlc/MyModelsMerge-7b
- AurelPx/Percival_01-7b-slerp
- MatthieuJ/Jason1903_SLERP
- MTSAIR/multi_verse_model
- Gille/StrangeMerges_30-7B-slerp
- chihoonlee10/T3Q-Mistral-Orca-Math-DPO
- yam-peleg/Experiment28-7B
- mlabonne/UltraMerge-7B
---
# SomeModelsMerge-7b

SomeModelsMerge-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
* [Kukedlc/MyModelsMerge-7b](https://huggingface.co/Kukedlc/MyModelsMerge-7b)
* [AurelPx/Percival_01-7b-slerp](https://huggingface.co/AurelPx/Percival_01-7b-slerp)
* [MatthieuJ/Jason1903_SLERP](https://huggingface.co/MatthieuJ/Jason1903_SLERP)
* [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model)
* [Gille/StrangeMerges_30-7B-slerp](https://huggingface.co/Gille/StrangeMerges_30-7B-slerp)
* [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO)
* [yam-peleg/Experiment28-7B](https://huggingface.co/yam-peleg/Experiment28-7B)
* [mlabonne/UltraMerge-7B](https://huggingface.co/mlabonne/UltraMerge-7B)
## 🧩 Configuration

```yaml
models:
  - model: liminerity/M7-7b
    # no parameters necessary for base model
  - model: liminerity/M7-7b
    parameters:
      weight: 0.2
      density: 0.88
  - model: Kukedlc/NeuralSirKrishna-7b
    parameters:
      weight: 0.1
      density: 0.66
  - model: Kukedlc/MyModelsMerge-7b
    parameters:
      weight: 0.1
      density: 0.66
  - model: AurelPx/Percival_01-7b-slerp
    parameters:
      weight: 0.1
      density: 0.33
  - model: MatthieuJ/Jason1903_SLERP
    parameters:
      weight: 0.1
      density: 0.33
  - model: MTSAIR/multi_verse_model
    parameters:
      weight: 0.1
      density: 0.66
  - model: Gille/StrangeMerges_30-7B-slerp
    parameters:
      weight: 0.1
      density: 0.55
  - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
    parameters:
      weight: 0.1
      density: 0.22
  - model: yam-peleg/Experiment28-7B
    parameters:
      weight: 0.1
      density: 0.44
  - model: mlabonne/UltraMerge-7B
    parameters:
      weight: 0.1
      density: 0.77
merge_method: dare_ties
base_model: liminerity/M7-7b
parameters:
  int8_mask: true
  normalize: true
dtype: bfloat16
```
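For intuition about what `dare_ties` does with the `weight` and `density` values above: each donor model contributes a task vector (its delta from the base model), DARE randomly drops roughly a `1 - density` fraction of that vector and rescales the survivors, and a TIES-style sign election resolves conflicts before the weighted deltas are summed onto the base (`normalize: true` rescales the weights, which sum to 1.1 here, so they sum to 1). Below is a minimal, illustrative PyTorch sketch with hypothetical helper names; it is not mergekit's actual implementation:

```python
import torch

def dare_prune(finetuned: torch.Tensor, base: torch.Tensor,
               density: float, weight: float) -> torch.Tensor:
    """DARE step: randomly drop a (1 - density) fraction of the task vector
    (finetuned - base), rescale survivors by 1/density so the expected delta
    is unchanged, then apply the merge weight."""
    delta = finetuned - base
    keep = torch.rand_like(delta) < density
    return weight * torch.where(keep, delta / density, torch.zeros_like(delta))

def ties_combine(base: torch.Tensor, deltas: list[torch.Tensor]) -> torch.Tensor:
    """TIES-style sign election: keep only entries whose sign agrees with the
    per-parameter majority sign, then sum the survivors onto the base."""
    stacked = torch.stack(deltas)
    elected = torch.sign(stacked.sum(dim=0))   # majority sign per parameter
    agree = torch.sign(stacked) == elected
    return base + (stacked * agree).sum(dim=0)

# Toy demo on a single 4x4 "weight matrix":
base = torch.zeros(4, 4)
donors = [base + 0.1 * torch.randn(4, 4) for _ in range(3)]
deltas = [dare_prune(m, base, density=0.66, weight=0.1) for m in donors]
merged = ties_combine(base, deltas)
```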
## 💻 Usage

```python
# pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/SomeModelsMerge-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
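If you prefer working with the model object directly rather than through a pipeline, the standard Transformers loading path should work as well. A sketch under the same assumptions (model id and sampling settings taken from the block above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Kukedlc/SomeModelsMerge-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is a large language model?"}]
# apply_chat_template with return_tensors="pt" yields input ids directly.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True,
                        temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```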