---
license: mit
tags:
- generated_from_trainer
base_model: Josephgflowers/TinyLlama-Cinder-Tiny-Agent
model-index:
- name: TinyLlama-Cinder-Agent-v1
results: []
---
The goal of this model is a TinyLlama-based model that can handle tool usage, RAG, and system instructions, and serve as a general assistant.
This model is a fine-tuned version of [Josephgflowers/TinyLlama-Cinder-Tiny-Agent](https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Tiny-Agent).
Special thanks to https://nationtech.io/ for their generous sponsorship in training this model.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6328952f798f8d122ce62a44/MbN_SXChmMxuHO8GjdUSc.png)
It descends from [Josephgflowers/TinyLlama-3T-Cinder-v1.2](https://huggingface.co/Josephgflowers/TinyLlama-3T-Cinder-v1.2) and was trained on the [agent_1](https://huggingface.co/datasets/Josephgflowers/agent_1) dataset.
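As a quick start, here is a minimal sketch of loading the model with the `transformers` library. The `<|system|>`/`<|user|>`/`<|assistant|>` markup follows the common TinyLlama/Zephyr convention and is an assumption; check the tokenizer's chat template for the exact format.

```python
# Minimal usage sketch. The chat markup below is the TinyLlama/Zephyr
# convention and is an assumption -- verify against the tokenizer's template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Josephgflowers/TinyLlama-Cinder-Agent-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = (
    "<|system|>\nYou are a helpful assistant.</s>\n"
    "<|user|>\nSummarize: The mitochondria is the powerhouse of the cell.</s>\n"
    "<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```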
## Model description
This model is trained for RAG, summarization, function calling, and tool usage. It was trained from Cinder, a chatbot designed for conversation about STEM topics, space-adventure roleplay, and storytelling.
For its size, the model does well on IFEval (instruction following), and it is strong at summarization and RAG. Because of the formatting of the Glaive function-calling dataset, the JSON output is not a plain JSON dump, but it does follow the Glaive standard strictly.
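For illustration, here is a hedged sketch of handling that Glaive-style output. The `<functioncall>` marker and the single-quoted `arguments` string are assumptions based on the public Glaive function-calling dataset format; verify against what the model actually emits.

```python
import json
import re

# Hypothetical model output in the Glaive style: the arguments value is a
# single-quoted JSON *string*, which is why a plain json.loads on the whole
# payload fails. (Format details are an assumption from the public Glaive
# function-calling dataset -- check real model output.)
raw = '<functioncall> {"name": "get_weather", "arguments": \'{"location": "Boston"}\'}'

payload = raw.split("<functioncall>", 1)[1].strip()
# Re-encode the single-quoted arguments string as a valid JSON string.
payload = re.sub(r"'(\{.*\})'", lambda m: json.dumps(m.group(1)), payload)
call = json.loads(payload)
args = json.loads(call["arguments"])
print(call["name"], args)  # get_weather {'location': 'Boston'}
```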
*********************************************
10x the original TinyLlama model on GSM8K!!!
*********************************************
To do this, I started with the usual open math datasets (e.g., Orca Math, all of the MetaMath sets, CAMEL-AI math QA, etc.) plus as many reasoning datasets as I could make or find.
What really made it go the extra mile was adding TIGER-Lab/WebInstructSub along with all of the RAG and summarization data.
So special thanks to TIGER-Lab. I found that as math performance improved, so did the model's ability to extract relevant data for RAG.
See https://huggingface.co/Josephgflowers/TinyLlama-Cinder-Agent-Rag/blob/main/tinyllama_agent_cinder_txtai-rag.py for a usage example with wiki RAG; a self-contained sketch follows below.
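The linked script retrieves context from a wiki index with txtai; as a self-contained illustration, the sketch below hard-codes a "retrieved" passage in place of actual retrieval. The chat markup is again an assumption based on the TinyLlama/Zephyr convention.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="Josephgflowers/TinyLlama-Cinder-Agent-v1")

# In the linked script this context comes from a txtai wiki index;
# here a hard-coded passage stands in for retrieval.
context = "The James Webb Space Telescope launched on 25 December 2021."
question = "When did the James Webb Space Telescope launch?"
prompt = (
    "<|system|>\nAnswer using only the provided context.\n"
    f"Context: {context}</s>\n"
    f"<|user|>\n{question}</s>\n"
    "<|assistant|>\n"
)
print(generator(prompt, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```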
# [Open LLM Leaderboard (v1) Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__TinyLlama-Cinder-Agent-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |39.17|
|AI2 Reasoning Challenge (25-Shot)|34.90|
|HellaSwag (10-Shot) |53.87|
|MMLU (5-Shot) |26.89|
|TruthfulQA (0-shot) |39.08|
|Winogrande (5-shot) |59.12|
|GSM8k (5-shot) |21.15|
# [Open LLM Leaderboard (v2) Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Josephgflowers__TinyLlama-Cinder-Agent-v1)
| Metric |Value|
|-------------------|----:|
|Avg. | 5.82|
|IFEval (0-Shot) |26.70|
|BBH (3-Shot) | 3.80|
|MATH Lvl 5 (4-Shot)| 0.38|
|GPQA (0-shot) | 0.00|
|MuSR (0-shot) | 2.23|
|MMLU-PRO (5-shot) | 1.79|