Model Overview

Mistral is a set of large language models published by the Mistral AI team. Both pretrained and instruction-tuned models are available, each with 7 billion parameters. See the model card below for benchmarks, data sources, and intended use cases.

Both the weights and the Keras model code are released under the Apache 2.0 license.

Installation

Keras and KerasHub can be installed with:

pip install -U -q keras-hub
pip install -U -q "keras>=3"

JAX, TensorFlow, and PyTorch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the Keras Getting Started page.
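
Keras 3 runs on any of these backends; the backend is selected with the KERAS_BACKEND environment variable, which must be set before Keras is imported. A minimal sketch:

import os

# Choose the backend before importing Keras: "jax", "tensorflow", or "torch".
os.environ["KERAS_BACKEND"] = "jax"

import keras
import keras_hub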

Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

Preset name                 Parameters  Description
mistral_7b_en               7.24B       7B base model
mistral_instruct_7b_en      7.24B       7B instruction-tuned model
mistral_0.2_instruct_7b_en  7.24B       7B instruction-tuned model, version 0.2

Prompts

Mistral "instruct" models are instruction-tuned on turn-by-turn conversations and should be prompted with examples that precisely match the training data. Specifically, you must alternate user and assistant turns that begin and end with special tokens. See the following for an example:

prompt = """[INST] Hello! [/INST] Hello! How are you? [INST] I'm great. Could you help me with a task? [/INST]
"""

Base models (without "instruct" in the name) have no specific prompting structure and should usually be fine-tuned for a specific task.
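
For the instruct models, longer conversations can be assembled programmatically. Below is a minimal sketch; the build_prompt helper is hypothetical, not part of KerasHub:

# Hypothetical helper: formats completed (user, assistant) turns plus a final
# user message into the instruct format shown above.
def build_prompt(turns, final_user_message):
    prompt = ""
    for user, assistant in turns:
        prompt += f"[INST] {user} [/INST] {assistant} "
    return prompt + f"[INST] {final_user_message} [/INST]\n"

prompt = build_prompt(
    [("Hello!", "Hello! How are you?")],
    "I'm great. Could you help me with a task?",
)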

Example Usage

import keras
import keras_hub
import numpy as np

Use generate() to do text generation.

mistral_lm = keras_hub.models.MistralCausalLM.from_preset("mistral_7b_en")
mistral_lm.generate("[INST] What is Keras? [/INST]", max_length=500)

# Generate with batched prompts.
mistral_lm.generate(
    ["[INST] What is Keras? [/INST]", "[INST] Give me your best brownie recipe. [/INST]"],
    max_length=500,
)

Compile the generate() function with a custom sampler.

mistral_lm = keras_hub.models.MistralCausalLM.from_preset("mistral_7b_en")
mistral_lm.compile(sampler="greedy")
mistral_lm.generate("I want to say", max_length=30)

mistral_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
mistral_lm.generate("I want to say", max_length=30)
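
Other built-in samplers can be swapped in the same way. For example, top-k sampling with a temperature (the values here are illustrative, not tuned):

mistral_lm.compile(sampler=keras_hub.samplers.TopKSampler(k=10, temperature=0.7))
mistral_lm.generate("I want to say", max_length=30)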

Use generate() without preprocessing.

prompt = {
    # `1` maps to the start token followed by "I want to say".
    "token_ids": np.array([[1, 315, 947, 298, 1315, 0, 0, 0, 0, 0]] * 2),
    # Use `"padding_mask"` to indicate values that should not be overridden.
    "padding_mask": np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0]] * 2),
}

mistral_lm = keras_hub.models.MistralCausalLM.from_preset(
    "mistral_7b_en",
    preprocessor=None,
    dtype="bfloat16"
)
mistral_lm.generate(prompt)
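
Rather than hardcoding ids, you can produce token_ids and padding_mask with the preset's tokenizer. A minimal sketch, assuming id 0 marks padding and id 1 is the start token (as in the hardcoded example above); the fixed length of 10 and the conversion via np.asarray are illustrative:

tokenizer = keras_hub.models.MistralTokenizer.from_preset("mistral_7b_en")
length = 10

# Prepend the start token, then pad with zeros to the fixed length.
ids = [1] + list(np.asarray(tokenizer("I want to say")))
token_ids = np.array([ids + [0] * (length - len(ids))] * 2)
padding_mask = np.array([[1] * len(ids) + [0] * (length - len(ids))] * 2)

prompt = {"token_ids": token_ids, "padding_mask": padding_mask}
mistral_lm.generate(prompt)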

Call fit() on a single batch.

features = ["The quick brown fox jumped.", "I forgot my homework."]
mistral_lm = keras_hub.models.MistralCausalLM.from_preset("mistral_7b_en")
mistral_lm.fit(x=features, batch_size=2)
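
Larger corpora can be streamed with tf.data; the preset's attached preprocessor tokenizes the raw strings on the fly. A minimal sketch:

import tensorflow as tf

features = ["The quick brown fox jumped.", "I forgot my homework."]
ds = tf.data.Dataset.from_tensor_slices(features).batch(2)

mistral_lm.fit(ds, epochs=1)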

Call fit() without preprocessing.

x = {
    "token_ids": np.array([[1, 315, 947, 298, 1315, 369, 315, 837, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
# Labels: the token ids shifted one position to the left (next-token targets).
y = np.array([[315, 947, 298, 1315, 369, 315, 837, 0, 0, 0]] * 2)
# Sample weights: score only positions whose label is a real token.
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)

mistral_lm = keras_hub.models.MistralCausalLM.from_preset(
    "mistral_7b_en",
    preprocessor=None,
    dtype="bfloat16"
)
mistral_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)
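
The labels and sample weights above follow the standard causal-LM convention: y is token_ids shifted one position to the left (next-token targets), and sw zeroes out positions whose target is padding. A sketch of deriving them programmatically, assuming id 0 marks padding as in this example:

token_ids = np.array([[1, 315, 947, 298, 1315, 369, 315, 837, 0, 0]] * 2)
padding_mask = (token_ids != 0).astype("int32")

# Next-token targets: shift left by one and zero the final position.
y = np.roll(token_ids, -1, axis=1)
y[:, -1] = 0

# Score only positions whose target is a real token.
sw = (y != 0).astype("int32")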

Example Usage with Hugging Face URI

import keras
import keras_hub
import numpy as np

Use generate() to do text generation.

mistral_lm = keras_hub.models.MistralCausalLM.from_preset("hf://keras/mistral_7b_en")
mistral_lm.generate("[INST] What is Keras? [/INST]", max_length=500)

# Generate with batched prompts.
mistral_lm.generate(
    ["[INST] What is Keras? [/INST]", "[INST] Give me your best brownie recipe. [/INST]"],
    max_length=500,
)

Compile the generate() function with a custom sampler.

mistral_lm = keras_hub.models.MistralCausalLM.from_preset("hf://keras/mistral_7b_en")
mistral_lm.compile(sampler="greedy")
mistral_lm.generate("I want to say", max_length=30)

mistral_lm.compile(sampler=keras_hub.samplers.BeamSampler(num_beams=2))
mistral_lm.generate("I want to say", max_length=30)

Use generate() without preprocessing.

prompt = {
    # `1` maps to the start token followed by "I want to say".
    "token_ids": np.array([[1, 315, 947, 298, 1315, 0, 0, 0, 0, 0]] * 2),
    # Use `"padding_mask"` to indicate values that should not be overridden.
    "padding_mask": np.array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0]] * 2),
}

mistral_lm = keras_hub.models.MistralCausalLM.from_preset(
    "hf://keras/mistral_7b_en",
    preprocessor=None,
    dtype="bfloat16"
)
mistral_lm.generate(prompt)

Call fit() on a single batch.

features = ["The quick brown fox jumped.", "I forgot my homework."]
mistral_lm = keras_hub.models.MistralCausalLM.from_preset("hf://keras/mistral_7b_en")
mistral_lm.fit(x=features, batch_size=2)

Call fit() without preprocessing.

x = {
    "token_ids": np.array([[1, 315, 947, 298, 1315, 369, 315, 837, 0, 0]] * 2),
    "padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
# Labels: the token ids shifted one position to the left (next-token targets).
y = np.array([[315, 947, 298, 1315, 369, 315, 837, 0, 0, 0]] * 2)
# Sample weights: score only positions whose label is a real token.
sw = np.array([[1, 1, 1, 1, 1, 1, 1, 0, 0, 0]] * 2)

mistral_lm = keras_hub.models.MistralCausalLM.from_preset(
    "hf://keras/mistral_7b_en",
    preprocessor=None,
    dtype="bfloat16"
)
mistral_lm.fit(x=x, y=y, sample_weight=sw, batch_size=2)