|
---
base_model: gradientai/Llama-3-8B-Instruct-Gradient-1048k
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
|
# gradientai/Llama-3-8B-Instruct-Gradient-1048k AWQ |
|
|
|
- Model creator: [gradientai](https://huggingface.co/gradientai) |
|
- Original model: [Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) |
|
|
|
<a href="https://www.gradient.ai" target="_blank"><img src="https://cdn-uploads.huggingface.co/production/uploads/655bb613e8a8971e89944f3e/TSa3V8YpoVagnTYgxiLaO.png" width="200"/></a> |
|
|
|
## Model Summary |
|
|
|
Gradient incorporates your data to deploy autonomous assistants that power critical operations across your business. If you're looking to build custom AI models or agents, email us at [email protected].
|
|
|
For more info, see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab).
|
|
|
Developed by Gradient and sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai), this model extends Llama-3 8B's context length from 8k to over 1040k tokens. It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. Gradient trained on 830M tokens for this stage, and 1.4B tokens total across all stages, which is less than 0.01% of Llama-3's original pre-training data.
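Since the extension comes from adjusting the RoPE base frequency rather than from architectural changes, the setting is visible directly in the checkpoint's config. A minimal sketch for inspecting it (the values are read from the repository, not hard-coded here):

```python
from transformers import AutoConfig

# Read the config of the quantized checkpoint; rope_theta is raised far above
# Llama-3's default of 500000 to enable the extended context window.
config = AutoConfig.from_pretrained("solidrust/Llama-3-8B-Instruct-Gradient-1048k-AWQ")
print(config.rope_theta)               # RoPE base frequency used for long context
print(config.max_position_embeddings)  # advertised maximum context length
```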
|
|
|
## How to use |
|
|
|
### Install the necessary packages |
|
|
|
```bash
pip install --upgrade autoawq autoawq-kernels
```
|
|
|
### Example Python code |
|
|
|
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Llama-3-8B-Instruct-Gradient-1048k-AWQ"
system_message = "You are Llama-3-8B-Instruct-Gradient-1048k, incarnated as a powerful AI. You were created by gradientai."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Convert the prompt to tokens using the model's own chat template
# (Llama-3 instruct format); a hand-written ChatML template would not
# match this model's special tokens.
prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": prompt},
]
tokens = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors='pt').cuda()

# Generate output, streaming tokens to stdout as they are produced
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```
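The `TextStreamer` prints tokens to stdout as they are generated; the full completion (including the prompt tokens) is also returned in `generation_output`, so it can be recovered afterwards with `tokenizer.decode(generation_output[0], skip_special_tokens=True)`.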
|
|
|
### About AWQ |
|
|
|
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
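For reference, a minimal sketch of how an AWQ checkpoint like this one is typically produced with AutoAWQ; the `quant_config` below uses AutoAWQ's common 4-bit defaults, which are an assumption and not necessarily the exact settings used for this repository:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base = "gradientai/Llama-3-8B-Instruct-Gradient-1048k"
# Common 4-bit AWQ settings (assumed, not confirmed for this repo).
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Calibrate and quantize the weights, then save the AWQ checkpoint.
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized("Llama-3-8B-Instruct-Gradient-1048k-AWQ")
tokenizer.save_pretrained("Llama-3-8B-Instruct-Gradient-1048k-AWQ")
```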
|
|
|
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.
|
|
|
It is supported by: |
|
|
|
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ |
|
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types (see the sketch after this list)
|
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) |
|
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers |
|
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code |
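A minimal sketch of serving this checkpoint with vLLM, assuming a recent vLLM build with AWQ support installed:

```python
from vllm import LLM, SamplingParams

# Load the AWQ checkpoint; quantization="awq" selects the AWQ kernels.
llm = LLM(model="solidrust/Llama-3-8B-Instruct-Gradient-1048k-AWQ",
          quantization="awq")

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Why is the sky blue?"], params)
print(outputs[0].outputs[0].text)
```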
|
|
|
## Citation instructions |
|
|
|
```bibtex
@article{llama3modelcard,
  title={Llama 3 Model Card},
  author={AI@Meta},
  year={2024},
  url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
```
|
|