Josephgflowers committed · Commit 6345a4e · Parent: c2469c0 · Update README.md
This model is a fine-tuned version of Josephgflowers/TinyLlama-3T-Cinder-v1.2.
## Model description

This model is trained for RAG, summary, function calling, and tool usage. It was trained off of Cinder, a chatbot designed for chat about STEM topics, space adventure RP, and storytelling.

This model does well at IFEval (instruction following) for its size. It is great at summary and RAG. Due to the formatting of the Glaive function-calling dataset, the JSON output is not what I was expecting for regular JSON dumps, but it does follow their standard strictly.
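
As an illustration of why Glaive-style output differs from a plain JSON dump, here is a minimal parsing sketch. The `<functioncall>` tag and the single-quoted `arguments` string are assumptions based on the public Glaive function-calling dataset format, not guaranteed properties of this model's output:

```python
import json
import re

def parse_glaive_call(text):
    """Extract the function name and arguments from a Glaive-style call.

    Glaive-format responses look roughly like:
      <functioncall> {"name": "get_weather", "arguments": '{"location": "Boston"}'}
    The arguments field is a JSON string wrapped in single quotes, so a plain
    json.loads() on the whole payload fails; the fields are pulled out separately.
    """
    payload = text.split("<functioncall>", 1)[1].strip()
    name = re.search(r"\"name\"\s*:\s*\"([^\"]+)\"", payload).group(1)
    # Grab the single-quoted arguments string, then decode it as JSON.
    args_match = re.search(r"\"arguments\"\s*:\s*'(.*)'\s*}\s*$", payload)
    args = json.loads(args_match.group(1))
    return name, args
```

This keeps the strict Glaive format intact while still yielding an ordinary Python dict for the tool dispatcher.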

*********************************************
10X the original TinyLlama model on GSM8K!!!
*********************************************

To do this, I started with all the normal open math datasets (e.g. Orca Math, all the MetaMath, CAMEL-AI math QA, etc.) and as many reasoning datasets as I could make or find.

But what really made it go the extra mile was adding in TIGER-Lab/WebInstructSub along with all of the RAG and summary data.

So special thanks to TIGER-Lab. I found that as math performance improved, so did the model's ability to extract relevant data in RAG.

See https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Agent-Rag/blob/main/tinyllama_agent_cinder_txtai-rag.py
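
The linked txtai script wires up the full retrieval pipeline. As a standalone illustration, here is a minimal sketch of the prompt-assembly step in RAG, where retrieved passages are stitched into the model input; the template below is a hypothetical generic layout, not the script's exact format:

```python
def build_rag_prompt(question, passages, max_chars=1500):
    """Assemble retrieved passages and a question into a single RAG prompt.

    Passages are concatenated up to a rough character budget, and the model
    is asked to answer strictly from that context, which is the behavior the
    summary/RAG fine-tuning targets.
    """
    context = ""
    for p in passages:
        if len(context) + len(p) > max_chars:
            break  # stay within the small model's context budget
        context += p.strip() + "\n"
    return (
        "Use the context below to answer the question.\n\n"
        f"Context:\n{context}\n"
        f"Question: {question}\nAnswer:"
    )
```

The resulting string would then be passed to the model's generate call in place of a bare question.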