CultriX committed
Commit aabd58c
1 Parent(s): 3ade048

Update README.md

Files changed (1)
  1. README.md +8 -2
README.md CHANGED
@@ -23,12 +23,18 @@ However I do not want to mislead anybody or produce any unfair scores, hence thi
 The full training configuration is also fully transparent and can be found below.
 
 Hope this model will prove useful.
-There are GGUF versions available here: https://huggingface.co/CultriX/MergeTrix-7B-GGUF
+There are GGUF versions available here for inference: https://huggingface.co/CultriX/MergeTrix-7B-GGUF
 
 Kind regards,
 CultriX
 
-# MergeTrix-7B
+# Shoutout
+Once again, a major thank you and shoutout to @mlabonne for his amazing article, which I used to produce this result and which can be found here: https://towardsdatascience.com/merge-large-language-models-with-mergekit-2118fb392b54
+My other model, CultriX/MistralTrix-v1, was based on another great article by the same author, which can be found here: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
+(I hope he doesn't mind me using his own articles to beat him on the leaderboards for the second time this week... Like last time, all credit should really be directed at him!)
+
+# MODEL INFORMATION:
+# NAME: MergeTrix-7B
 
 MergeTrix-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [abideen/NexoNimbus-7B](https://huggingface.co/abideen/NexoNimbus-7B)
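
The added line in the diff points readers to GGUF builds for local inference. A minimal sketch of what running one of those builds might look like with llama-cpp-python follows; the quantization filename is an assumption, so check https://huggingface.co/CultriX/MergeTrix-7B-GGUF for the actual files in the repo.

```python
# Minimal sketch: local inference with a GGUF build via llama-cpp-python.
# The filename below is an assumption; see
# https://huggingface.co/CultriX/MergeTrix-7B-GGUF for the actual files.
from llama_cpp import Llama

llm = Llama(
    model_path="mergetrix-7b.Q4_K_M.gguf",  # assumed quantization file
    n_ctx=4096,  # context window size
)

# Run a simple completion and print the generated text.
result = llm("Explain model merging in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```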
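The diff also introduces the model card for MergeTrix-7B itself, a LazyMergekit merge of the models listed above. As a minimal sketch of how the merged model could be loaded for generation with Hugging Face transformers (the prompt and generation settings here are illustrative assumptions, not part of the original README):

```python
# Minimal sketch: loading the merged MergeTrix-7B with transformers.
# Prompt and generation parameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CultriX/MergeTrix-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    device_map="auto",          # place layers on available devices
)

prompt = "What is a merged language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```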