How do you run this?
Hi, how exactly do you run this?
Like this:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# phi-2 ships its own modeling code in the repo, hence trust_remote_code=True;
# float16 keeps the memory footprint manageable.
model = AutoModelForCausalLM.from_pretrained("SkunkworksAI/phi-2", trust_remote_code=True, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("SkunkworksAI/phi-2", trust_remote_code=True)
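From there, generation looks something like this (a minimal sketch; the prompt and sampling settings are illustrative, and a CUDA GPU is assumed since the weights are loaded in float16):

# Move to GPU; on CPU, load the model with torch_dtype=torch.float32 instead.
model = model.to("cuda")
inputs = tokenizer("Write a short poem about the sea.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))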
Hi, how do I run the model locally? Trying the snippet above:
model = AutoModelForCausalLM.from_pretrained("SkunkworksAI/phi-2", trust_remote_code=True, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("SkunkworksAI/phi-2", trust_remote_code=True)
gives:
OSError: SkunkworksAI/phi-2 does not appear to have a file named config.json.
This is because the model files are not at the root of the repo.
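In the meantime, pointing from_pretrained at the subfolder that actually holds the files might work (a sketch; "checkpoints" below is a placeholder, not the real path):

model = AutoModelForCausalLM.from_pretrained(
    "SkunkworksAI/phi-2",
    subfolder="checkpoints",  # placeholder: replace with the folder that contains config.json
    trust_remote_code=True,
    torch_dtype=torch.float16,
)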
Could you make another repo that includes the model files at the root and check whether it works?
Best,
Use a pipeline as a high-level helper:
from transformers import pipeline
pipe = pipeline("text-generation", model="SkunkworksAI/phi-2", trust_remote_code=True)
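Calling the pipeline returns a list of dicts holding the generated text (the prompt here is just an example):

output = pipe("Hello, my name is", max_new_tokens=32)
print(output[0]["generated_text"])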
# phi-2 test
# Needs: pip install einops torch transformers
# Model downloading as I run this; going to assume things are working...
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model = AutoModelForCausalLM.from_pretrained("SkunkworksAI/phi-2", trust_remote_code=True, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("SkunkworksAI/phi-2", trust_remote_code=True)
# When passing a model instance instead of a repo id, the tokenizer must be passed explicitly,
# otherwise the pipeline cannot infer which tokenizer to load.
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, trust_remote_code=True)
output = pipe("This is a cool example!", do_sample=True, top_p=0.95)
print(output)
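If a CUDA GPU is available, the same pipeline runs much faster there; a minimal sketch, assuming device index 0:

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, trust_remote_code=True, device=0)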