---
language:
- en
pipeline_tag: text-generation
tags:
- enigma
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- code
- code-instruct
- python
- conversational
- chat
- instruct
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- sequelbox/Tachibana
- sequelbox/Supernova
model_type: llama
license: llama3.1
---

## Description
This repo contains GGUF format model files for Llama3.1-8B-Enigma.
## Files Provided
| Name | Quant | Bits | File Size | Remark |
| ---- | ----- | ---- | --------- | ------ |
| llama3.1-8b-enigma.Q2_K.gguf | Q2_K | 2 | 3.18 GB | 2.96G, +3.5199 ppl @ Llama-3-8B |
| llama3.1-8b-enigma.Q3_K.gguf | Q3_K | 3 | 4.02 GB | 3.74G, +0.6569 ppl @ Llama-3-8B |
| llama3.1-8b-enigma.Q4_0.gguf | Q4_0 | 4 | 4.66 GB | 4.34G, +0.4685 ppl @ Llama-3-8B |
| llama3.1-8b-enigma.Q4_K.gguf | Q4_K | 4 | 4.92 GB | 4.58G, +0.1754 ppl @ Llama-3-8B |
| llama3.1-8b-enigma.Q5_K.gguf | Q5_K | 5 | 5.73 GB | 5.33G, +0.0569 ppl @ Llama-3-8B |
| llama3.1-8b-enigma.Q6_K.gguf | Q6_K | 6 | 6.60 GB | 6.14G, +0.0217 ppl @ Llama-3-8B |
| llama3.1-8b-enigma.Q8_0.gguf | Q8_0 | 8 | 8.54 GB | 7.96G, +0.0026 ppl @ Llama-3-8B |
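
To sanity-check a download, one option is llama-cpp-python. The sketch below is not part of the original card; the chosen quant file, context size, and GPU offload setting are assumptions to adjust for your setup:

```python
# Minimal sketch: run one of the GGUF files above with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the Q4_K file has been
# downloaded into the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="llama3.1-8b-enigma.Q4_K.gguf",  # any quant from the table above
    n_ctx=8192,        # context window; smaller than the model's 131072 maximum
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```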
## Parameters
| path | type | architecture | rope_theta | sliding_window | max_position_embeddings |
| ---- | ---- | ------------ | ---------- | -------------- | ----------------------- |
| ValiantLabs/Llama3.1-8B-Enigma | llama | LlamaForCausalLM | 500000.0 | null | 131072 |
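
If you want to verify these values locally, they can be read from the model config with transformers (a quick sketch, not part of the original card):

```python
# Sketch: read the parameters above from the model's config on the Hub.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("ValiantLabs/Llama3.1-8B-Enigma")
print(config.model_type)               # llama
print(config.architectures)            # ['LlamaForCausalLM']
print(config.rope_theta)               # 500000.0
print(config.max_position_embeddings)  # 131072
```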
## Original Model Card
Enigma is a code-instruct model built on Llama 3.1 8b.
- High quality code instruct performance within the Llama 3 Instruct chat format
- Finetuned on synthetic code-instruct data generated with Llama 3.1 405b. Find the current version of the dataset at sequelbox/Tachibana!
- Overall chat performance supplemented with generalist synthetic data.
### Version
This is the 2024-09-04 release of Enigma for Llama 3.1 8b, enhancing code-instruct and general chat capabilities.
Help us out by recommending Enigma to your friends! We're excited about future Enigma releases, and more Build Tools built on Llama 3.1 are coming very soon :)
### Prompting Guide
Enigma uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Enigma"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
    {"role": "user", "content": "Can you explain virtualization to me?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
)

print(outputs[0]["generated_text"][-1])
```
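
If you prefer to build the prompt string yourself rather than rely on the pipeline, the same Llama 3.1 Instruct format can be rendered with the tokenizer's chat template. A self-contained sketch using the standard transformers API:

```python
# Sketch: render the Llama 3.1 Instruct prompt format explicitly.
from transformers import AutoTokenizer

messages = [
    {"role": "system", "content": "You are Enigma, a highly capable code assistant."},
    {"role": "user", "content": "Can you explain virtualization to me?"},
]

tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/Llama3.1-8B-Enigma")
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted string instead of token ids
    add_generation_prompt=True,  # append the assistant header so the model replies
)
print(prompt)
```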
### The Model
Enigma is built on top of Llama 3.1 8b Instruct, using high quality code-instruct data and general chat data in Llama 3.1 Instruct prompt style to supplement overall performance.
Our current version of Enigma is trained on code-instruct data from sequelbox/Tachibana and general chat data from sequelbox/Supernova.
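
Both datasets are public on the Hugging Face Hub and can be inspected directly; here is a minimal sketch with the datasets library (the "train" split name is an assumption):

```python
# Sketch: peek at the finetuning datasets named above.
from datasets import load_dataset

tachibana = load_dataset("sequelbox/Tachibana", split="train")  # code-instruct data
supernova = load_dataset("sequelbox/Supernova", split="train")  # general chat data
print(tachibana[0])  # inspect one example row
```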
Enigma is created by Valiant Labs.
Check out our HuggingFace page for Shining Valiant 2 and our other Build Tools models for creators!
Follow us on X for updates on our models!
We care about open source, and our models are for everyone to use.
We encourage others to finetune further from our models.
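
As one possible starting point for further finetuning, here is a hedged LoRA sketch with peft; every hyperparameter below is illustrative, not a recipe from Valiant Labs:

```python
# Hypothetical LoRA finetuning setup; all values are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "ValiantLabs/Llama3.1-8B-Enigma",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16,           # adapter rank
    lora_alpha=32,  # adapter scaling
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# Train from here with your own data via transformers' Trainer or TRL's SFTTrainer.
```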