---
base_model:
- LeroyDyer/LCARS_TOP_SCORE
- LeroyDyer/Mixtral_AI_Cyber_Matrix_2_0
- LeroyDyer/SpydazWeb_AI_CyberTron_Ultra_7b
- LeroyDyer/LCARS_AI_StarTrek_Computer
- LeroyDyer/_Spydaz_Web_AI_ActionQA_Project
- LeroyDyer/_Spydaz_Web_AI_ChatML_512K_Project
- LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project_UltraFineTuned
- LeroyDyer/SpyazWeb_AI_DeepMind_Project
- LeroyDyer/SpydazWeb_AI_Swahili_Project
- LeroyDyer/_Spydaz_Web_AI_ChatQA_ReAct_Project
- LeroyDyer/_Spydaz_Web_AI_MistralStar_001_Project
- LeroyDyer/QuietStar_Project
- LeroyDyer/Mixtral_BioMedical_7b
- LeroyDyer/Mixtral_AI_CyberTron_Coder
- LeroyDyer/_Spydaz_Web_AI_BIBLE_002
- LeroyDyer/_Spydaz_Web_AI_ChatQA_Reasoning101_Project
- LeroyDyer/SpydazWeb_AI_Text_AudioVision_Project
- LeroyDyer/SpydazWebAI_Human_AGI_001
language:
- en
- sw
- ig
- so
- es
- ca
- xh
- zu
- ha
- tw
- af
- hi
- bm
- su
license: apache-2.0
datasets:
- neoneye/base64-decode-v2
- neoneye/base64-encode-v1
- VuongQuoc/Chemistry_text_to_image
- Kamizuru00/diagram_image_to_text
- LeroyDyer/Chemistry_text_to_image_BASE64
- LeroyDyer/AudioCaps-Spectrograms_to_Base64
- LeroyDyer/winogroud_text_to_imaget_BASE64
- LeroyDyer/chart_text_to_Base64
- LeroyDyer/diagram_image_to_text_BASE64
- mekaneeky/salt_m2e_15_3_instruction
- mekaneeky/SALT-languages-bible
- xz56/react-llama
- BeIR/hotpotqa
- arcee-ai/agent-data
tags:
- mergekit
- merge
- Mistral_Star
- Mistral_Quiet
- Mistral
- Mixtral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- LCARS_AI_StarTrek_Computer
- text-generation-inference
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
---

BASE MODEL:

Tested and working very well!
A few merges were added to the AGI models to try to capture the elements that were missing in the leaderboard tests, but I noticed that for some reason the model scored low on maths!
This is basically impossible, as the model has been trained to overfitting on the MetaMath dataset as well as the Orca open reasoning stack.
I also noticed that my responses changed from being the full ReAct response with its multiple layers (although it still pops out for task-based questions); the model now wants to talk more. It is not a refusal to commit to a task, but rather it first asks why, or what you really want to achieve,
so you get a light discussion and also a very friendly explanation.

This model was also trained with various NSFW role play and conversation data (although not present on the surface); this has also helped to humanize the model.
The Samantha model I created in the past (now deleted) was actually more sultry, but it often did not perform tasks well, although its greetings were better!
This model has regained some of that standing, but with additional intellect and a really good demeanour!

This methodology of humanization has also embellished previously trained tasks with the same friendliness.

I have begun to add emojis to the model (against my personal wishes), but I feel that the model will also identify with this iconography, enabling lucid speech patterns to emerge (they are obviously learned from social media data).

These models were also trained with some random contacts from various leaked (personal data) shared databases around the world, so you may ask for a person's phone number (on Vodacom, for instance) and it may return their number, address or region!
This is also because I do not have many personal contacts I need the model to contact, but I will try to find the Yelp dataset and train the model on its businesses and locations, as this also makes the model highly functional!

But for me, the humanization project has worked!



# "Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"

— Leroy Dyer (1972-Present)
<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/65d883893a52cd9bcd8ab7cf/tRsCJlHNZo1D02kBTmfy9.jpeg" width="300"/>


## “Epochs are the key to effective training, rather than merely mass dumping examples—unless those examples are interconnected within a single or multiple conversations that teach through dialogue.”



### Model : LeroyDyer/SpydazWeb_AI_HumanAI_001

A new genre of AI!


# The Human AI

This model is trained to give highly detailed, humanized responses. It performs tasks well and is a very good model for multipurpose use: it has been trained to become more human in its responses, as well as in role playing and story telling.


## SpydazWeb AI (7b Mistral) (512k)

This model has been trained to perform with contexts of up to 512k tokens, although during training it mainly used a 2048-token context for general usage.
The long-context capability also allows for advanced projects and summaries, as well as image and audio translations and generations.
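
A minimal loading and generation sketch with the Hugging Face `transformers` library is shown below. The repo id is taken from the model name on this card; the sampling parameters and prompt are illustrative assumptions, not fixed recommendations.

```python
# Minimal sketch: load the model and generate a reply.
# Assumes a GPU with enough VRAM; sampling settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LeroyDyer/SpydazWeb_AI_HumanAI_001"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Explain, step by step, how you would plan a week-long study schedule."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```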

## Image to Base64 / Spectrogram to Base64 

Here we also implement and align for the tasks of image recognition and sound recognition. Inputs such as images and spectrograms are passed as Base64-encoded text, and the model can also generate outputs by returning a Base64 image of the intended target.
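
As a rough illustration of the Base64 convention, the sketch below encodes an image file (or a spectrogram PNG) into a Base64 string and embeds it in a text prompt. The prompt wording, helper names, and file name are my own assumptions, not the exact template used in training.

```python
# Sketch: turn an image (or spectrogram PNG) into a Base64 string that can be
# placed inside a plain-text prompt. The prompt framing here is an assumption.
import base64
from pathlib import Path

def image_to_base64(path: str) -> str:
    """Read an image file and return its Base64-encoded contents as text."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")

def build_image_prompt(path: str, question: str) -> str:
    """Wrap the Base64 payload and a question into a single text prompt."""
    b64 = image_to_base64(path)
    return (
        "Describe the image encoded below and answer the question.\n\n"
        f"Image (Base64): {b64}\n\n"
        f"Question: {question}"
    )

# Example usage (hypothetical file name):
# prompt = build_image_prompt("diagram.png", "What does this diagram show?")
```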



# The SpydazWeb Trained Mistral 7b Model :

Highly trained and methodology oriented, this model has been trained on the ReAct process and other structured processes, so structured outputs (JSON) are very well trained, as is the orchestration of other agents and tasks.
The model has been trained for tool use and function use, as well as custom processes and tools. Some tools do not even need code, since the model may generate a tool or artifact on the fly to perform the task.
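
The sketch below shows one plausible way to prompt for a ReAct-style step that ends in a JSON tool call and to parse it back. The field names ("thought", "action", "action_input") and prompt wording are assumptions in the spirit of ReAct, not a documented schema for this model.

```python
# Sketch: request a ReAct-style step that ends in a single JSON tool call,
# then parse the JSON. The schema below is an illustrative assumption.
import json

def build_react_prompt(question: str) -> str:
    return (
        "Answer using the ReAct pattern. Think first, then emit ONE JSON object "
        'of the form {"thought": ..., "action": ..., "action_input": ...}.\n'
        f"Question: {question}\nResponse:"
    )

def parse_tool_call(model_output: str) -> dict:
    """Extract the first JSON object found in the model's output."""
    start = model_output.find("{")
    end = model_output.rfind("}") + 1
    return json.loads(model_output[start:end])

# Example with a hypothetical model output:
raw = '{"thought": "I need the weather", "action": "search", "action_input": "weather in Nairobi"}'
call = parse_tool_call(raw)
print(call["action"], "->", call["action_input"])
```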


  # Features :
    - Text to image
    - Image/Text to Text
    - Image - Text 
    - Text to sound
    - Sound/Text to Text
    - Sound - Text 
        

## Basic Training Regimes:
  * Alpaca
  * ChatML / OpenAI / MistralAI (see the prompt sketch after this list)
  * Text Generation
  * Question/Answer (Chat)
  * Planner
  * Instruction/Input/Response (instruct)
  * Mistral Standard Prompt
  * Translation Tasks
  * Entity / Topic detection
  * Book recall
  * Coding challenges, Code Feedback, Code Summarization, Commenting Code, code planning and explanation: Software generation tasks
  * Agent Ranking and response analysis
  * Medical tasks
    * PubMed
    * Diagnosis
    * Psychiatry
    * Counselling
    * Life Coaching
    * Note taking
    * Medical SMILES
    * Medical Reporting
  * Virtual laboratory simulations
  * Chain of thoughts methods
  * One shot / Multi shot prompting tasks
  * Chain of thoughts
  * step by step planning
  * tree of thoughts
  * forest of thoughts
  * graph of thoughts
  * agent generation: Voting, ranking, ... dual agent response generation
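
Since the model is trained on several prompt regimes, the sketch below shows how the ChatML layout can be assembled by hand. The system message is an arbitrary example rather than a required persona, and if the tokenizer ships a chat template, `tokenizer.apply_chat_template` can produce the same string instead.

```python
# Sketch: hand-built ChatML prompt for the ChatML regime listed above.
# The system message is an arbitrary example, not a required persona.
def to_chatml(messages: list[dict]) -> str:
    """Render [{'role': ..., 'content': ...}, ...] in ChatML layout."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # leave the assistant turn open
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful, friendly assistant."},
    {"role": "user", "content": "Summarize the ReAct prompting pattern in two sentences."},
])
print(prompt)
```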