
This repo contains MedLLaMA_13B, a LLaMA-13B model fine-tuned on medical corpora.

The model was trained with the following hyperparameters:

  • Epochs: 5
  • Batch size: 320
  • Cutoff length: 2048
  • Learning rate: 2e-5
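For context, the cutoff length caps the number of tokens per sequence, so each optimizer step processes at most batch size × cutoff length tokens. A minimal sketch of that arithmetic (the variable names are illustrative, not taken from the training code):

```python
# Hyperparameters reported above (dict keys are illustrative)
hparams = {
    "epochs": 5,
    "batch_size": 320,       # sequences per optimizer step
    "cutoff_length": 2048,   # maximum tokens per sequence
    "learning_rate": 2e-5,
}

# Upper bound on tokens seen per optimizer step
tokens_per_step = hparams["batch_size"] * hparams["cutoff_length"]
print(tokens_per_step)  # 655360
```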

The model can be loaded as follows:

import transformers
import torch

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/MedLLaMA_13B')
model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/MedLLaMA_13B')

# Tokenize a prompt; special tokens are omitted to match the training setup
sentence = 'Hello, doctor'
batch = tokenizer(
    sentence,
    return_tensors="pt",
    add_special_tokens=False
)

# Generate a continuation with top-k sampling
with torch.no_grad():
    generated = model.generate(inputs=batch["input_ids"], max_length=200, do_sample=True, top_k=50)
    print('model predict: ', tokenizer.decode(generated[0]))
