---
library_name: keras-nlp
license: gemma
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
extra_gated_heading: Access Gemma 1.1 on Hugging Face
extra_gated_prompt: >-
  To access Gemma 1.1 on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged-in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 1.1
Google Model Page: Gemma
This model card corresponds to the latest 7B instruct version of the Gemma 1.1 model in Keras.
Keras models can be used with JAX, PyTorch or TensorFlow as numerical backends. JAX, with its support for SPMD model parallelism, is recommended for large models. For more information, see distributed training with Keras and JAX.
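Keras 3 reads the backend from the `KERAS_BACKEND` environment variable, which has to be set before Keras (or KerasNLP) is imported. A minimal sketch:

```python
import os

# Select the JAX backend before importing Keras / KerasNLP;
# "tensorflow" and "torch" are the other supported values.
os.environ["KERAS_BACKEND"] = "jax"

import keras_nlp
```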
You can find other models in the Gemma family here:
| | Base | Instruct |
|---|---|---|
| 2B | gemma-2b-keras | gemma-1.1-2b-it-keras |
| 7B | gemma-7b-keras | gemma-1.1-7b-it-keras |
For more information about the model, visit https://huggingface.co/google/gemma-1.1-7b-it.
Model Page: Gemma
Resources and Technical Documentation: Technical Report, Responsible Generative AI Toolkit
Terms of Use: Terms
Authors: Google
## Loading the model
```python
import keras_nlp

# Download the preset from the Hugging Face Hub and instantiate the model.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("hf://google/gemma-1.1-7b-it-keras")
```
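Once loaded, the model can be prompted through KerasNLP's `generate()` method. A minimal sketch; the prompt text and `max_length` value below are only illustrative:

```python
# Instruction-tuned Gemma checkpoints are commonly prompted with Gemma's
# turn markers; the question and max_length here are illustrative.
prompt = "<start_of_turn>user\nWhat is Keras?<end_of_turn>\n<start_of_turn>model\n"
output = gemma_lm.generate(prompt, max_length=128)
print(output)
```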