---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
pipeline_tag: text-generation
---
# Instruct_Mixtral-8x7B-v0.1_Dolly15K
Fine-tuned from Mixtral-8x7B-v0.1 on the Dolly15k dataset, split 85% training, 14.9% validation, and 0.1% test. Trained for 1 epoch using QLoRA with a 1024-token context window.
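
This card does not publish the exact QLoRA hyperparameters. The sketch below shows one conventional way to set up 4-bit QLoRA fine-tuning of the Mixtral base with `transformers`, `bitsandbytes`, and `peft`; the LoRA rank, alpha, dropout, and target modules are illustrative assumptions, not the values used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "mistralai/Mixtral-8x7B-v0.1"

# 4-bit NF4 quantization, as commonly used for QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA hyperparameters below are illustrative; the card does not state the actual values
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```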

# Model Details
* **Trained by**: [Brillibits](https://www.youtube.com/@Brillibits).
* **Model type:** **Instruct_Mixtral-8x7B-v0.1_Dolly15K** is an auto-regressive language model based on the Mixtral sparse mixture-of-experts transformer architecture.
* **Language(s)**: English
* **License for Instruct_Mixtral-8x7B-v0.1_Dolly15K**: Apache 2.0


# Prompting

## Prompt Template With Context

```
Write a 10-line poem about a given topic

Input:

The topic is about racecars

Output:
```
## Prompt Template Without Context
```
Who was the second president of the United States?


Output:
```
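
As a usage sketch, the templates above can be passed directly to the model with the `transformers` library. The repository id below is assumed to match this card's model name, and the generation settings are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the model name on this card
model_id = "Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs
    torch_dtype="auto",
)

# "With context" template: instruction, then Input block, then Output marker
prompt = (
    "Write a 10-line poem about a given topic\n"
    "\n"
    "Input:\n"
    "\n"
    "The topic is about racecars\n"
    "\n"
    "Output:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and print only the completion
completion = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(completion, skip_special_tokens=True))
```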

## Professional Assistance
This model and other models like it are great, but LLMs hold the most promise when they are applied to custom data to automate a wide variety of tasks.

If you have a dataset and want to see whether that data could be used to automate some of your tasks, and you are looking for professional assistance, contact me [here](mailto:[email protected]).