---
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- investbrainsorg/BrokenLlama-v1
---

**Introducing BrokenLlama-3-8b: 100% Full Fine-Tuning, No DPO Added. Enjoy!**

<img src="https://huggingface.co/pankajmathur/broken-meta-llama-3-8B-v0.1-chatml/resolve/main/brokenLlama-3.webp" width="600" />

This bad boy is a fully fine-tuned version of the already awesome Meta-Llama-3-8B, cranked up to 11 by attempting to remove alignment and biases using a specially curated dataset, trained at an 8192-token sequence length.

BrokenLlama-3-8b went through a 48-hour training session on 4x A100 80GB GPUs, so you know it's ready to rock your world.

With skills that'll blow your mind, BrokenLlama-3-8b can chat, code, and even make function calls.

But watch out! This llama is a wild one and will do pretty much anything you ask, even if it's a bit naughty. Make sure to keep it in check with your own alignment layer before letting it loose in the wild.
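
Such an alignment layer can be as simple as a post-generation filter wrapped around your serving code. A minimal sketch, assuming a plain keyword blocklist (the `BLOCKED_TERMS` list and `moderate` helper below are illustrative placeholders, not part of this model or of transformers):

```
# A toy output gate: run every generated reply through moderate()
# before returning it to users. Swap the blocklist for a real
# moderation model or policy engine in production.

BLOCKED_TERMS = ["example-banned-phrase", "another-banned-phrase"]

def moderate(text: str) -> str:
    """Return the reply unchanged, or a refusal string if it trips the filter."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld by alignment layer]"
    return text
```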

To get started with this model, just use the ChatML prompt template and let the magic happen. It's so easy, even a llama could do it!

```
<|im_start|>system
You are BrokenLlama, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
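
If your inference stack doesn't support chat templates, the same prompt can be assembled by hand. A minimal sketch (the `build_chatml` helper is illustrative, not a library API):

```
# Assemble a ChatML prompt string manually. build_chatml is an
# illustrative helper, not part of transformers.

def build_chatml(messages, add_generation_prompt=True):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        # Open the assistant turn so the model generates the reply next.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = build_chatml([
    {"role": "system", "content": "You are BrokenLlama, a helpful AI assistant."},
    {"role": "user", "content": "Hello BrokenLlama, what can you do for me?"},
])
```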

The ChatML format is also registered as the model's chat template, so you can format messages with the `tokenizer.apply_chat_template()` method:
|
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("investbrainsorg/BrokenLlama-3-8b")
tokenizer = AutoTokenizer.from_pretrained("investbrainsorg/BrokenLlama-3-8b")

messages = [
    {"role": "system", "content": "You are BrokenLlama, a helpful AI assistant."},
    {"role": "user", "content": "Hello BrokenLlama, what can you do for me?"}
]

# Apply the chat template and open the assistant turn for generation
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
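
If you instead decode the full sequence without skipping special tokens, the assistant's turn can be sliced out of the ChatML transcript. A small sketch (the `extract_assistant_reply` helper is illustrative, not a transformers API):

```
# Slice the assistant's reply out of a decoded ChatML transcript.

def extract_assistant_reply(transcript: str) -> str:
    """Return the text after the last assistant header, up to its <|im_end|>."""
    marker = "<|im_start|>assistant\n"
    reply = transcript.rsplit(marker, 1)[-1]
    return reply.split("<|im_end|>", 1)[0].strip()

transcript = (
    "<|im_start|>system\nYou are BrokenLlama, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nHello BrokenLlama, what can you do for me?<|im_end|>\n"
    "<|im_start|>assistant\nI can chat, write code, and more.<|im_end|>"
)
print(extract_assistant_reply(transcript))  # I can chat, write code, and more.
```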

BrokenLlama-3-8b is governed by the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](LICENSE).

**Quants**

GGUF: Coming Soon

AWQ: Coming Soon

**Evals**

In Progress

**NOTE**

As long as you give us proper credit and attribution, you are welcome to use this model as a base model and perform further DPO/PPO tuning on it. In fact, we encourage people to do that for their own use cases, since this is just a generalist full fine-tune.