---
license: llama2
datasets:
- databricks/databricks-dolly-15k
language:
- en
pipeline_tag: text-generation
---
|
# Instruct_Llama70B_Dolly15k |
|
Fine-tuned from Llama-2-70B using the Dolly15k dataset, split into 80% training, 15% validation, and 5% test. Trained for 1.5 epochs with QLoRA and a 1024-token context window.
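The 80/15/5 split can be sketched as follows. This is a minimal illustration, not the exact script used for training; the `split_dataset` helper and the seed are hypothetical:

```python
import random

def split_dataset(examples, seed=0):
    # Shuffle indices, then carve off 80% train, 15% validation,
    # and leave the remaining ~5% as the test set.
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)
    n_train = int(0.80 * len(examples))
    n_val = int(0.15 * len(examples))
    train = [examples[i] for i in idx[:n_train]]
    val = [examples[i] for i in idx[n_train:n_train + n_val]]
    test = [examples[i] for i in idx[n_train + n_val:]]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
```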
|
|
|
# Model Details |
|
* **Trained by**: [Brillibits](https://www.youtube.com/channel/UCAq9THVHhPK0Zv4Xi-88Jmg).
|
* **Model type:** **Instruct_Llama70B_Dolly15k** is an auto-regressive language model based on the Llama 2 transformer architecture. |
|
* **Language(s)**: English |
|
* **License for Instruct_Llama70B_Dolly15k**: llama2 license
|
|
|
|
|
# Prompting |
|
|
|
## Prompt Template With Context |
|
|
|
```
Write a 10-line poem about a given topic

Input:

The topic is about racecars

Output:
```
|
## Prompt Template Without Context |
|
```
Who was the second president of the United States?

Output:
```
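The two templates above can be assembled with a small helper. This is a sketch assuming the formats shown here; `build_prompt` is a hypothetical name and the exact blank-line spacing may differ from what the model was trained on:

```python
from typing import Optional

def build_prompt(instruction: str, context: Optional[str] = None) -> str:
    # With context, the input goes between "Input:" and "Output:";
    # without it, the instruction is followed directly by "Output:".
    if context:
        return f"{instruction}\n\nInput:\n\n{context}\n\nOutput:\n"
    return f"{instruction}\n\nOutput:\n"

with_context = build_prompt("Write a 10-line poem about a given topic",
                            "The topic is about racecars")
without_context = build_prompt("Who was the second president of the United States?")
```

The resulting string is what you would pass to the tokenizer as the generation prompt.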
|
|
|
## Professional Assistance |
|
This model and others like it are useful, but LLMs hold the most promise when applied to custom data to automate a wide variety of tasks.
|
|
|
If you have a dataset and want to see whether it could be used to automate some of your tasks, and you are looking for professional assistance, contact me [here](mailto:[email protected]).
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Brillibits__Instruct_Llama70B_Dolly15k) |
|
|
|
| Metric               | Value |
|----------------------|-------|
| Avg.                 | 60.97 |
| ARC (25-shot)        | 68.34 |
| HellaSwag (10-shot)  | 87.21 |
| MMLU (5-shot)        | 69.52 |
| TruthfulQA (0-shot)  | 46.46 |
| Winogrande (5-shot)  | 84.29 |
| GSM8K (5-shot)       | 42.68 |
| DROP (3-shot)        | 28.26 |
|
|