Update README.md

README.md
seldonium-2x7b-MoE-v0.1-coder-logic is a Mixture of Experts (MoE) model that combines the capabilities of two specialized language models:

- Locutusque/Hercules-4.0-Mistral-v0.2-7B: a 7B-parameter model focused on programming tasks, such as writing functions, implementing algorithms, and working with data structures.
- Open-Orca/Mistral-7B-OpenOrca: a 7B-parameter model focused on logical reasoning and analysis, including solving logic problems, evaluating arguments, and assessing the validity of statements.

This MoE model was created with the LazyMergekit Colab notebook, which makes it straightforward to combine specialized models into a single, more capable model.

seldonium-2x7b-MoE-v0.1-coder-logic can be used for a variety of natural language processing tasks that benefit from the complementary strengths of its expert components.
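
The merged model can be loaded like any other Hugging Face causal language model. Below is a minimal usage sketch with the `transformers` library; the repository namespace in `model_id` and the prompt are illustrative placeholders, not details taken from this model card:

```python
# Minimal usage sketch -- the namespace in `model_id` is a placeholder;
# substitute the account this model is actually published under.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/seldonium-2x7b-MoE-v0.1-coder-logic"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # let transformers pick a dtype for your hardware
    device_map="auto",    # requires the `accelerate` package
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```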
## 🧩 Configuration