---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- llama-3.1
- conversational
- instruction following
- reasoning
- function calling
- mergekit
- finetuning
- axolotl
- mlx
---

# voxmenthe/Llama-3.1-Storm-8B

The model [voxmenthe/Llama-3.1-Storm-8B](https://huggingface.co/voxmenthe/Llama-3.1-Storm-8B) was converted to MLX format from [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) using mlx-lm version **0.17.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("voxmenthe/Llama-3.1-Storm-8B")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
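
Since this is an instruction-tuned chat model, prompts generally benefit from the Llama 3.1 chat template before generation. The snippet below is a minimal sketch, assuming the converted tokenizer ships with a chat template and using the standard `apply_chat_template` API; the example question is illustrative only.

```python
from mlx_lm import load, generate

model, tokenizer = load("voxmenthe/Llama-3.1-Storm-8B")

# Wrap the user message in the model's chat template so the
# instruction-tuned weights receive the expected Llama 3.1 format.
messages = [{"role": "user", "content": "What is the capital of France?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```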