LeroyDyer committed on
Commit d00865a
1 Parent(s): 943c0f9

Update README.md

Files changed (1)
  1. README.md +43 -63
README.md CHANGED
@@ -143,6 +143,49 @@ Quote for Motivation:
 
  # "To grow as a professional, set goals just beyond your current abilities. Achieving these milestones will not only overcome obstacles but also strengthen your skillset. If your tasks are too easy, you’ll never challenge yourself or improve, and life will pass you by!"
 
 
  ## THE REFINED CHAT MODEL :
@@ -260,66 +303,3 @@ Final Thought: Summarize your reasoning and provide a clear answer to the question.
 
 
 
- ## Training Regimes:
- * Alpaca
- * ChatML / OpenAI / MistralAI
- * Text Generation
- * Question/Answer (Chat)
- * Planner
- * Instruction/Input/Response (instruct)
- * Mistral Standard Prompt
- * Translation Tasks
- * Entity / Topic detection
- * Book recall
- * Coding challenges, code feedback, code summarization, code commenting, code planning and explanation (software generation tasks)
- * Agent ranking and response analysis
- * Medical tasks
-   * PubMed
-   * Diagnosis
-   * Psychiatry
-   * Counselling
-   * Life Coaching
-   * Note taking
-   * Medical SMILES
-   * Medical Reporting
- * Virtual laboratory simulations
- * Chain-of-thought methods
- * One-shot / multi-shot prompting tasks
-
- ### General Internal Methods:
-
- Trained for multi-task operations as well as RAG and function calling.
-
- This model is fully functional and fully uncensored.
-
- The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.
-
- The focus has been mainly on methodology:
-
- * Chain of thought
- * Step-by-step planning
- * Tree of thoughts
- * Forest of thoughts
- * Graph of thoughts
- * Agent generation: voting, ranking, and dual-agent response generation
-
-
- # Training Philosophy
-
- Here are some of the benefits you might experience by prioritizing attention mechanisms during fine-tuning:
-
- ## Enhanced Contextual Understanding:
-
- Fine-tuning attention layers helps the model better grasp the relationships and dependencies within the input data, leading to more contextually relevant and accurate outputs.
-
- ## Improved Control over Generation:
-
- You gain more control over the model's generation process, guiding it to focus on specific aspects of the input and produce outputs that align with your desired goals.
-
- ## More Creative and Diverse Outputs:
-
- By refining the attention mechanism, you can encourage the model to explore a wider range of possibilities and generate more creative and diverse responses.
-
- ## Reduced Overfitting:
-
- Fine-tuning with a focus on attention can help prevent overfitting to specific patterns in the training data, leading to better generalization and more robust performance on new inputs.
-
- # “Epochs are the key to effective training, rather than merely mass-dumping examples, unless those examples are interconnected within a single conversation or multiple conversations that teach through dialogue.”
-
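The Training Philosophy text in the hunk above describes prioritizing attention mechanisms during fine-tuning but gives no recipe. Below is a minimal sketch of one common way to do that, assuming a PEFT/LoRA setup over Mistral-style attention projection names (`q_proj`, `k_proj`, `v_proj`, `o_proj`) and a placeholder model id; this is an illustration, not the author's documented method.

```python
# Minimal sketch: adapt only the attention projections with LoRA.
# Assumptions (not stated in this README): a Mistral-style decoder whose
# attention modules are named q_proj/k_proj/v_proj/o_proj, the peft and
# transformers libraries, and a placeholder model id.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model_id = "LeroyDyer/your-base-model"  # placeholder, not a documented checkpoint
model = AutoModelForCausalLM.from_pretrained(base_model_id)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention only; MLPs stay frozen
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only the attention adapters are trainable
```

Restricting `target_modules` to the attention projections is only one way to realize "prioritizing attention mechanisms"; whether the author used LoRA, selective unfreezing, or something else is not documented here.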
 
 # "To grow as a professional, set goals just beyond your current abilities. Achieving these milestones will not only overcome obstacles but also strengthen your skillset. If your tasks are too easy, you’ll never challenge yourself or improve, and life will pass you by!"
 
+ ### General Internal Methods:
+
+ Trained for multi-task operations as well as RAG and function calling.
+
+ This model is fully functional and fully uncensored.
+
+ The model has been trained on multiple datasets from the Hugging Face Hub and Kaggle.
+
+ The focus has been mainly on methodology:
+
+ * Chain of thought
+ * Step-by-step planning
+ * Tree of thoughts
+ * Forest of thoughts
+ * Graph of thoughts
+ * Agent generation: voting, ranking, and dual-agent response generation
+
+
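The methods above are listed only by name; the sketch below illustrates the chain-of-thought / step-by-step planning style as plain chat messages. The system wording and the `generate_reply` helper are assumptions, not the prompts used in training; only the "Final Thought:" closing marker is taken from this model card.

```python
# Illustrative chain-of-thought / step-by-step prompt in the spirit of the
# methodology list above. The system wording is an assumption, and
# `generate_reply` is a hypothetical helper (backend not specified).
def build_cot_messages(question: str) -> list[dict]:
    system = (
        "Solve the problem step by step: restate the task, plan your steps, "
        "work through each step, and finish with 'Final Thought:' followed by "
        "a clear answer to the question."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_cot_messages("A train covers 120 km in 1.5 hours. What is its average speed?")
# reply = generate_reply(messages)  # hypothetical helper; see the chat-template sketch further down
```

For the voting/ranking idea, the same question can be sampled several times and the most common final answer kept.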
+ ## Training Regimes:
+ * Alpaca
+ * ChatML / OpenAI / MistralAI
+ * Text Generation
+ * Question/Answer (Chat)
+ * Planner
+ * Instruction/Input/Response (instruct)
+ * Mistral Standard Prompt
+ * Translation Tasks
+ * Entity / Topic detection
+ * Book recall
+ * Coding challenges, code feedback, code summarization, code commenting, code planning and explanation (software generation tasks)
+ * Agent ranking and response analysis
+ * Medical tasks
+   * PubMed
+   * Diagnosis
+   * Psychiatry
+   * Counselling
+   * Life Coaching
+   * Note taking
+   * Medical SMILES
+   * Medical Reporting
+ * Virtual laboratory simulations
+ * Chain-of-thought methods
+ * One-shot / multi-shot prompting tasks
 
 
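The regime names above (Alpaca, ChatML, Mistral Standard Prompt) refer to prompt layouts rather than datasets. Here is a minimal sketch of producing two of them, assuming a Transformers tokenizer that ships a Mistral-style chat template; the model id is a placeholder.

```python
# Minimal sketch of two of the prompt layouts listed above.
# Assumptions: the tokenizer carries a Mistral-style chat template, and the
# model id below is a placeholder, not a documented checkpoint name.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/your-model")  # placeholder id

messages = [{"role": "user", "content": "Summarize the plot of Moby-Dick in two sentences."}]

# Mistral standard prompt, rendered by whatever chat template the tokenizer ships
# (typically "<s>[INST] ... [/INST]" for Mistral-style models).
mistral_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Alpaca-style Instruction/Input/Response layout, written out by hand.
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize the plot of Moby-Dick in two sentences.\n\n"
    "### Response:\n"
)
```

ChatML instead wraps each turn in `<|im_start|>role ... <|im_end|>` blocks; which layout a given checkpoint expects is defined by its tokenizer configuration, so `apply_chat_template` is the safer route when in doubt.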
  ## THE REFINED CHAT MODEL :
 