It looks like it does not work as expected, see below.
This is what I receive for a simple question.
This is the code I used:
import torch
import os
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Set environment variable for CUDA operations
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

# Directory and model identification
model_dir = r'g:\Projects\localLLM\input\model\google\gemma-2-9b-it'

# Ensure the model and tokenizer are loaded onto the appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Prepare device mapping for model components
device_map = "auto" if device.type == 'cuda' else {0: "cpu"}

# Load model and tokenizer with explicit device mapping
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map=device_map,
    torch_dtype=torch.float16 if device.type == 'cuda' else torch.float32,
    trust_remote_code=True,
    local_files_only=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_dir,
    local_files_only=True
)

# Define the pipeline with model and tokenizer
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device.index if device.type == 'cuda' else -1
)

# Continuous interaction loop
while True:
    user_question = input("Please enter your question: ")
    if user_question.lower() in ["quit", "exit", "stop"]:
        print("Exiting the session.")
        break

    # Arguments for text generation
    generation_args = {
        "max_new_tokens": 256,
        "return_full_text": False,
        "temperature": 0.2,
        "do_sample": True,
    }

    # Generate text based on user input
    output = pipe(user_question, **generation_args)
    print("Response:", output[0]['generated_text'])
Please also make sure to use the latest transformers version (v4.42.3), thanks 🤗
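As a quick sanity check, you can confirm which version is actually active in your environment; a minimal sketch, where the pip command in the comment is just one way to upgrade:

# Upgrade first if needed, e.g.: pip install -U transformers
import transformers

print(transformers.__version__)  # should report 4.42.3 or newer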
Ok, I see. Let's try with this change then, if you can @Sakura77:
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map=device_map,
+   torch_dtype=torch.bfloat16,
+   attn_implementation='eager',
    local_files_only=True
)
The two important parts here are torch_dtype=torch.bfloat16, as that's what the model was trained with, and attn_implementation='eager', as eager attention is really important for the Gemma 2 model.
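If you are not sure whether your GPU supports bfloat16, here is a minimal check using the standard torch.cuda helpers; the float16 fallback for older cards is only an assumption on my part, not something from the model card:

import torch

# bfloat16 requires Ampere (compute capability 8.0) or newer GPUs
if torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    dtype = torch.bfloat16
else:
    dtype = torch.float16  # fallback for older GPUs
print(f"Using dtype: {dtype}")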
I created a new env again, but I get the same answers, even with these changes:
# Load model and tokenizer with explicit device mapping
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map=device_map,
    torch_dtype=torch.bfloat16,
    attn_implementation='eager',
    local_files_only=True
)
Do you mind trying with the code snippet in this response, but with the 9b-it model you're using here?
https://huggingface.co/google/gemma-2-27b-it/discussions/14#668280486076c1a904c790e6
Hi @Sakura77! In addition to the other recommendations in this thread, could you try to add add_special_tokens: True to your generation_args?
generation_args = {
    "max_new_tokens": 256,
    "return_full_text": False,
+   "add_special_tokens": True,
    "temperature": 0.2,
    "do_sample": True,
}
Otherwise, the input to the model will be missing an initial <bos> token, and the model is very sensitive to that.
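If you want to see what this changes, here is a quick check with the tokenizer you already load above; the prompt text is just a placeholder:

# Compare tokenization with and without special tokens
text = "Why is the sky blue?"

without_special = tokenizer(text, add_special_tokens=False)["input_ids"]
with_special = tokenizer(text, add_special_tokens=True)["input_ids"]

print("bos token id:", tokenizer.bos_token_id)
print("without special tokens:", without_special)  # no <bos> at the start
print("with special tokens:   ", with_special)     # starts with the <bos> id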
Thank you for your time, it now works perfectly, see below.
This is the code I used :)
import torch
import os
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Set environment variable for CUDA operations
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

# Directory and model identification
model_dir = r'g:\Projects\localLLM\input\model\google\gemma-2-9b-it'

# Ensure the model and tokenizer are loaded onto the appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Prepare device mapping for model components
device_map = "auto" if device.type == 'cuda' else {0: "cpu"}

# Load model and tokenizer with explicit device mapping
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map=device_map,
    torch_dtype=torch.bfloat16,
    attn_implementation='eager',
    local_files_only=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_dir,
    local_files_only=True
)

# Define the pipeline with model and tokenizer
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device.index if device.type == 'cuda' else -1
)

# Continuous interaction loop
while True:
    user_question = input("Please enter your question: ")
    if user_question.lower() in ["quit", "exit", "stop"]:
        print("Exiting the session.")
        break

    # Arguments for text generation
    generation_args = {
        "max_new_tokens": 256,
        "return_full_text": False,
        "add_special_tokens": True,
        "temperature": 0.2,
        "do_sample": True,
    }

    # Generate text based on user input
    output = pipe(user_question, **generation_args)
    print("Response:", output[0]['generated_text'])
But neither 'eager' attention nor 'add_special_tokens' is specified in the model card, right? Would it be possible to add official instructions on how to run inference with Gemma 2 correctly? Thanks!