---
license: gpl
language:
- en
tags:
- starcoder
- wizardcoder
- code
- self-instruct
- distillation
---
|
|
|
# Model Card: Redmond-Hermes-Coder 15B
|
|
|
## Model Description
|
|
|
Redmond-Hermes-Coder 15B is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Karan4D leading the fine-tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors.
|
|
|
This model was trained with a WizardCoder base, which itself uses a StarCoder base model.
|
|
|
The model is truly great at code, but it does come with a tradeoff. While it is far better at code than the original Nous-Hermes built on Llama, it is worse than WizardCoder on pure code benchmarks such as HumanEval.
|
|
|
It comes in at 39% on HumanEval, compared to 57% for WizardCoder. This is somewhat disappointing to us, and we are currently exploring why.
|
|
|
However, it does appear to be better than WizardCoder at a variety of non-code tasks, including writing.
|
|
|
## Model Training
|
|
|
The model was trained almost entirely on synthetic GPT-4 outputs. This includes data from diverse sources such as GPTeacher (the General, Roleplay v1 & v2, and Code Instruct datasets), Nous Instruct & PDACTL (unpublished), CodeAlpaca, Evol_Instruct Uncensored, GPT4-LLM, and Unnatural Instructions.
|
|
|
Additional data inputs came from Camel-AI's Biology/Physics/Chemistry and Math Datasets, Airoboros' (v1) GPT-4 Dataset, and more from CodeAlpaca. The total volume of data encompassed over 300,000 instructions.
|
|
|
## Collaborators
|
The model fine-tuning and the datasets were a collaborative effort from members of Nous Research, including Teknium, Karan4D, and Huemin Art, together with Redmond AI's generous compute grants.
|
|
|
A huge shoutout and acknowledgement goes to all the dataset creators who generously share their datasets openly.
|
|
|
Among the dataset contributors: GPTeacher was made available by Teknium, Wizard LM by nlpxucan, and the Nous Research Instruct Dataset was provided by Karan4D and Huemin Art.
|
The GPT4-LLM and Unnatural Instructions datasets were provided by Microsoft, the Airoboros dataset by jondurbin, the Camel-AI datasets by Camel-AI, and the CodeAlpaca dataset by Sahil 2801.
|
If anyone was left out, please open a thread in the community tab.
|
|
|
## Prompt Format
|
|
|
The model follows the Alpaca prompt format:
|
```
### Instruction:

### Response:
```
|
|
|
or
|
|
|
```
### Instruction:

### Input:

### Response:
```
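
As a rough illustration of how these two layouts are assembled in practice, here is a minimal Python sketch. The `build_prompt` helper and the example instruction are ours for illustration and are not part of the model release.

```python
# Minimal sketch of the two Alpaca-style prompt layouts shown above.
# The helper name and example text are illustrative only.
from typing import Optional


def build_prompt(instruction: str, input_text: Optional[str] = None) -> str:
    """Return an Alpaca-formatted prompt, with or without an ### Input: block."""
    if input_text:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


# The model's answer is whatever it generates after the "### Response:" header.
print(build_prompt("Write a Python function that reverses a string."))
```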
|
|
|
## Resources for Applied Use Cases:
|
For an example of a back-and-forth chatbot using Hugging Face transformers and Discord, check out: https://github.com/teknium1/alpaca-discord
|
For an example of a roleplaying Discord bot, check out: https://github.com/teknium1/alpaca-roleplay-discordbot
|
|
|
## Future Plans
|
The model is currently being uploaded in FP16 format, and there are plans to convert it to GGML and 4-bit GPTQ quantizations. The team is also working on a full benchmark, similar to the one done for GPT4-x-Vicuna. We will try to start discussions about getting the model included in GPT4All.
|
|
|
## Benchmark Results
|
```
HumanEval: 39%
```
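
For context on how a HumanEval pass@1 number like this is typically produced, the sketch below follows the documented usage of OpenAI's human-eval harness (https://github.com/openai/human-eval). It is only an illustration, not the exact setup used here; `generate_one_completion` is a placeholder to replace with real calls to the model using the prompt format above.

```python
# Hedged sketch of producing HumanEval completions with OpenAI's human-eval
# harness (https://github.com/openai/human-eval); not the exact setup used here.
from human_eval.data import read_problems, write_jsonl


def generate_one_completion(prompt: str) -> str:
    # Placeholder: replace with a real call to the model that returns only the
    # generated code completion for the given HumanEval prompt.
    return "    pass\n"


problems = read_problems()
samples = [
    dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in problems
]
write_jsonl("samples.jsonl", samples)

# Score the completions from the command line:
#   evaluate_functional_correctness samples.jsonl
```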
|
|
|
## Model Usage
|
The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions.
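
A minimal generation sketch with Hugging Face transformers is shown below. The repository id and the generation settings are assumptions for illustration; check the model page for the exact path and any recommended parameters.

```python
# Hedged sketch: load the FP16 weights with Hugging Face transformers and
# generate from an Alpaca-style prompt. The repo id and generation settings
# are assumptions; adjust them to match the actual model page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Redmond-Hermes-Coder"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card notes the upload is in FP16
    device_map="auto",
)

prompt = (
    "### Instruction:\n"
    "Write a Python function that checks whether a number is prime.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.2)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```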
|
|
|
Compute provided by our project sponsor Redmond AI, thank you!!