## Disclaimer

**THIS PROJECT IS STILL A WORK IN PROGRESS.**
# Phi-2-audio-super

**Base model:** [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)

Fine-tuned version of [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super) for ASR on the [librispeech_asr](https://huggingface.co/datasets/librispeech_asr) dataset.
## How to run inference for text only
```python
import transformers
import torch


if __name__ == "__main__":
    model_name = "Thytu/phi-2-audio-super"

    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
    model = (
        transformers.AutoModelForCausalLM.from_pretrained(model_name)
        .to("cuda:0")
        .eval()
    )

    # Exactly like for phi-2-super :D
    messages = [
        {"role": "user", "content": "Hello, who are you?"},
    ]

    # Format the conversation with the model's chat template.
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    input_ids_cutoff = inputs.size(dim=1)  # remember where the prompt ends

    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs,
            use_cache=True,
            max_new_tokens=512,
            temperature=0.2,
            top_p=0.95,
            do_sample=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    # Decode only the newly generated tokens, skipping the prompt.
    completion = tokenizer.decode(
        generated_ids[0][input_ids_cutoff:],
        skip_special_tokens=True,
    )

    print(completion)
```
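Since the model is driven through the standard chat-template API, a follow-up turn can be generated the same way. A minimal sketch (reusing `model`, `tokenizer`, `messages`, and `completion` from the script above; the follow-up question is just an example):

```python
# Continue the conversation: append the model's reply, then ask a follow-up.
messages += [
    {"role": "assistant", "content": completion},
    {"role": "user", "content": "Can you summarize that in one sentence?"},
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
cutoff = inputs.size(dim=1)

with torch.no_grad():
    generated_ids = model.generate(
        input_ids=inputs,
        use_cache=True,
        max_new_tokens=128,
        temperature=0.2,
        top_p=0.95,
        do_sample=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

print(tokenizer.decode(generated_ids[0][cutoff:], skip_special_tokens=True))
```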
## How to run inference for ASR
TODO
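Until this section is filled in, the model's audio input interface is undocumented, so no transcription call is shown here. A LibriSpeech sample can already be fetched for experimentation with the `datasets` library, though; this sketch only prepares the audio and reference text:

```python
from datasets import load_dataset

# Stream a single LibriSpeech sample instead of downloading the full dataset.
# (Recent versions of `datasets` may require trust_remote_code=True here.)
ds = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
sample = next(iter(ds))

waveform = sample["audio"]["array"]               # raw float waveform
sampling_rate = sample["audio"]["sampling_rate"]  # 16 kHz for LibriSpeech
reference = sample["text"]                        # ground-truth transcription

print(sampling_rate, reference)
```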