
This model is a fine-tuned version of Llama-3 ("meta-llama/Meta-Llama-3-8B"), trained with 4-bit quantization and Parameter-Efficient Fine-Tuning (PEFT) using LoRA and QLoRA adapters for the task of Humor Recognition in the Greek language.

Model Details

The model was fine-tuned on the Greek Humorous Dataset.

PEFT Configs

  • bitsandbytes config for 4-bit quantization (QLoRA)
  • LoRA config for the LoRA adapters (a sketch of both configs follows below)
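
A minimal sketch of both configs, assuming typical QLoRA hyperparameters (the exact values used to train this model are not stated in the card):

import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# bitsandbytes config: load the base model in 4-bit NF4 precision (QLoRA-style)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA config for the adapters; task_type="SEQ_CLS" matches the classification head
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="SEQ_CLS",
)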

Pre-processing Details

The text needs to be pre-processed by:

  • removing all Greek diacritics and punctuation
  • converting all letters to lowercase (see the sketch after this list)
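
A minimal sketch of this pre-processing, using a hypothetical helper named preprocess (Unicode NFD decomposition turns each diacritic into a separate combining mark that can then be dropped):

import unicodedata

def preprocess(text: str) -> str:
    # Decompose accented characters so each diacritic becomes a combining mark
    decomposed = unicodedata.normalize("NFD", text)
    kept = []
    for ch in decomposed:
        cat = unicodedata.category(ch)
        # Drop combining marks ("Mn" = the diacritics) and all punctuation ("P*")
        if cat == "Mn" or cat.startswith("P"):
            continue
        kept.append(ch)
    return "".join(kept).lower()

print(preprocess("Γιατί, ρε φίλε;"))  # -> γιατι ρε φιλε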

Load Pretrained Model

The pad_token needs to be handled, since Llama-3 does not define a pad_token out of the box:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained(
    "kallantis/Humor-Recognition-Greek-Llama-3", add_prefix_space=True
)

# Reuse the eos_token for padding, since Llama-3 ships without a pad_token
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer.pad_token = tokenizer.eos_token

# 4-bit quantization config (QLoRA-style), as in the PEFT Configs section above
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "kallantis/Humor-Recognition-Greek-Llama-3",
    quantization_config=quantization_config,
    num_labels=2,
    device_map="auto",  # place the quantized weights on the available device
)
model.config.pad_token_id = tokenizer.pad_token_id  # needed for batched classification
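
A minimal inference sketch, assuming label 1 means "humorous" (the label mapping is not stated in the card) and reusing the preprocess helper sketched above:

import torch

text = preprocess("Γιατί, ρε φίλε;")  # apply the pre-processing described above
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # assumed mapping: 0 = non-humorous, 1 = humorous
print(prediction)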
