---
tags:
- merge
- mergekit
- lazymergekit
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- malteos/hermeo-7b
base_model:
- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- malteos/hermeo-7b
---

# Wiedervereinigung-7b

![image/png](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b/resolve/main/Wiedervereinigung-7b.png)

Wiedervereinigung-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
* [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral)
* [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b)

## 🧩 Configuration

```yaml
models:
  - model: LeoLM/leo-mistral-hessianai-7b
    # No parameters necessary for base model
  - model: DiscoResearch/DiscoLM_German_7b_v1
    parameters:
      density: 0.6
      weight: 0.25
  - model: DRXD1000/Phoenix
    parameters:
      density: 0.6
      weight: 0.25
  - model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
    parameters:
      density: 0.6
      weight: 0.25
  - model: malteos/hermeo-7b
    parameters:
      density: 0.6
      weight: 0.25
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b
parameters:
  int8_mask: true
dtype: bfloat16
```
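
The `density: 0.6` setting comes from the DARE step of `dare_ties`: for each merged model, only about 60% of its parameter delta from the base model is kept at random, and the surviving entries are rescaled by `1/density` so the delta is preserved in expectation, before the TIES-style sign-consensus combination. A minimal sketch of that drop-and-rescale step (illustrative only, not mergekit's actual implementation; the helper name is made up):

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float, seed: int = 0) -> torch.Tensor:
    # Keep each entry of the delta with probability `density`, zero the rest,
    # and rescale the survivors by 1/density so the expected value is unchanged.
    gen = torch.Generator().manual_seed(seed)
    mask = (torch.rand(delta.shape, generator=gen) < density).to(delta.dtype)
    return delta * mask / density

# Toy delta tensor standing in for (model weights - base weights).
delta = torch.randn(1000, 1000)
sparse = dare_drop_and_rescale(delta, density=0.6)
```

With `density: 0.6`, roughly 40% of each delta's entries are zeroed, which reduces interference between the four models before they are summed with their `weight: 0.25` coefficients.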

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "mayflowergmbh/Wiedervereinigung-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```