|
|
|
--- |
|
|
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
tags: |
|
- cobalt |
|
- valiant |
|
- valiant-labs |
|
- llama |
|
- llama-3.1 |
|
- llama-3.1-instruct |
|
- llama-3.1-instruct-8b |
|
- llama-3 |
|
- llama-3-instruct |
|
- llama-3-instruct-8b |
|
- 8b |
|
- math |
|
- math-instruct |
|
- conversational |
|
- chat |
|
- instruct |
|
model_type: llama |
|
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct |
|
datasets: |
|
- sequelbox/Polytope |
|
- LDJnr/Pure-Dove |
|
license: llama3.1 |
|
|
|
--- |
|
|
|
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ) |
|
|
|
# QuantFactory/Llama3.1-8B-Cobalt-GGUF |
|
This is a quantized version of [ValiantLabs/Llama3.1-8B-Cobalt](https://huggingface.co/ValiantLabs/Llama3.1-8B-Cobalt), created using llama.cpp.
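
The GGUF files can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings; the quant filename, context size, and GPU settings are placeholders, so substitute the file you actually downloaded from this repository and values that fit your hardware.

```python
# Minimal sketch: running a Cobalt GGUF quant with llama-cpp-python.
# The filename below is a placeholder; use whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="./Llama3.1-8B-Cobalt.Q4_K_M.gguf",  # assumed local path and quant level
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if available (set 0 for CPU-only)
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Cobalt, expert math AI."},
        {"role": "user", "content": "What is 25% off a $130 total?"},
    ],
    max_tokens=512,
)

print(response["choices"][0]["message"]["content"])
```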
|
|
|
# Original Model Card |
|
|
|
|
|
|
|
Cobalt is a math-instruct model built on Llama 3.1 8b. |
|
- High-quality math-instruct performance within the Llama 3 Instruct chat format
|
- Finetuned on synthetic math-instruct data generated with Llama 3.1 405b. [Find the current version of the dataset here!](https://huggingface.co/datasets/sequelbox/Polytope) |
|
|
|
|
|
## Version |
|
|
|
This is the **2024-08-16** release of Cobalt for Llama 3.1 8b. |
|
|
|
Help us and recommend Cobalt to your friends! We're excited for more Cobalt releases in the future. |
|
|
|
Right now, we're working on more Build Tools, coming very soon, built on Llama 3.1 :)
|
|
|
|
|
## Prompting Guide |
|
Cobalt uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat: |
|
|
|
|
|
```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Cobalt"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Cobalt, expert math AI."},
    {"role": "user", "content": "I'm buying a $50 shirt and an $80 pair of pants, both currently at a 25% discount. How much will I pay?"},
]

outputs = pipeline(
    messages,
    max_new_tokens=1024,
)

print(outputs[0]["generated_text"][-1])
```
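
If you build prompts by hand instead of going through the pipeline, the tokenizer's chat template produces the Llama 3.1 Instruct format for you. A brief sketch, assuming the Cobalt repository ships the standard Llama 3.1 chat template:

```python
# Sketch: inspecting the exact Llama 3.1 Instruct prompt string that the
# chat template produces for a conversation.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ValiantLabs/Llama3.1-8B-Cobalt")

messages = [
    {"role": "system", "content": "You are Cobalt, expert math AI."},
    {"role": "user", "content": "Simplify 3/4 + 5/6."},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted string rather than token IDs
    add_generation_prompt=True,  # append the assistant header so the model continues from it
)
print(prompt)
```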
|
|
|
|
|
## The Model |
|
Cobalt is built on top of Llama 3.1 8b Instruct, using math-instruct data to strengthen math performance within the Llama 3.1 Instruct prompt style.
|
|
|
Our current version of the Cobalt math-instruct dataset is [sequelbox/Polytope](https://huggingface.co/datasets/sequelbox/Polytope), supplemented with a small selection of data from [LDJnr/Pure-Dove](https://huggingface.co/datasets/LDJnr/Pure-Dove) for general chat consistency. |
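
To inspect the training data directly, the datasets can be loaded with the Hugging Face datasets library. A short sketch; the split name is an assumption, so check the dataset card for the actual schema:

```python
# Sketch: peeking at the Cobalt math-instruct data.
# The "train" split name is an assumption; consult the dataset card for the real layout.
from datasets import load_dataset

polytope = load_dataset("sequelbox/Polytope", split="train")
print(polytope)     # shows column names and row count
print(polytope[0])  # prints the first example as a dict
```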
|
|
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/63444f2687964b331809eb55/VCJ8Fmefd8cdVhXSSxJiD.jpeg) |
|
|
|
|
|
Cobalt is created by [Valiant Labs.](http://valiantlabs.ca/) |
|
|
|
[Check out our HuggingFace page for Shining Valiant 2 and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs) |
|
|
|
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs) |
|
|
|
We care about open source. |
|
For everyone to use. |
|
|
|
We encourage others to finetune further from our models. |
|
|