
Model Card for kanelsnegl-v0.2

A Danish finetune of Zephyr-7b-alpha 😀 The idea behind this model (apart from personal learning) is to have a lightweight model that can perform simple generative tasks in Danish in a consistent way, e.g. 0-shot classification, label generation, and perhaps even summarization.

Try it here: Open In Colab


Model Description

Base model: Zephyr-7b-alpha, finetuned on DDSC/partial-danish-gigaword-no-twitter. Training used QLoRA completion finetuning of all linear layers with a maximum sequence length of 512. This model is mostly fun tinkering for personal learning purposes. This version received 4 times more finetuning than v0.1 (RJuro/kanelsnegl-v0.1); it produces better Danish and follows complex prompts and instructions more closely.
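
For reference, the sketch below shows what a comparable QLoRA completion-finetuning setup could look like with peft and trl. This is not the actual training script: the LoRA rank/alpha, the dataset text column, and the exact trl API details are assumptions and may need adjusting to your library versions.

# Hypothetical QLoRA finetuning sketch (not the actual training script)
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import SFTTrainer

dataset = load_dataset('DDSC/partial-danish-gigaword-no-twitter', split='train')

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_id = 'HuggingFaceH4/zephyr-7b-alpha'
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map='auto',
)

# LoRA adapters on all linear projection layers, as described above
peft_config = LoraConfig(
    r=16,              # assumed rank
    lora_alpha=32,     # assumed scaling
    lora_dropout=0.05,
    bias='none',
    task_type='CAUSAL_LM',
    target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj',
                    'gate_proj', 'up_proj', 'down_proj'],
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field='text',   # assumed column name in the dataset
    max_seq_length=512,          # matches the max length used for this model
    tokenizer=tokenizer,
)
trainer.train()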

Usage

An example with bitsandbytes (bnb) 4-bit quantization that should work on a free Colab GPU.

# pip install accelerate bitsandbytes xformers -q

from torch import cuda

model_id = 'RJuro/kanelsnegl-v0.2'
device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'

print(device)

from torch import bfloat16
import transformers

# set quantization configuration to load large model with less GPU memory
# this requires the `bitsandbytes` library

bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,  # 4-bit quantization
    bnb_4bit_quant_type='nf4',  # Normalized float 4
    bnb_4bit_use_double_quant=True,  # Second quantization after the first
    bnb_4bit_compute_dtype=bfloat16  # Computation type
)

# Mistral/Llama (Zephyr) Tokenizer
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

# Zephyr Model
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    quantization_config=bnb_config,
    device_map='auto',
)

# Our text generator
generator = transformers.pipeline(
    model=model, tokenizer=tokenizer,
    task='text-generation',
    do_sample=True,  # enable sampling so temperature/top_k/top_p take effect
    temperature=0.1,
    max_new_tokens=250,
    repetition_penalty=1.1,
    top_k=50,
    top_p=0.95
)

prompt = """<|system|> Du er en god og sød assistent </s>

<|user|> Forklar mig attention i transformer-modeller som om jeg var 5 år gammel. Max. 3 sætninger.</s>

<|assistant|>
'Attention' i en transformer-model er som et puslespil: Hvert ord er en brik. Computeren bruger 'attention' til at fokusere på vigtige ord for at forstå sætningen, ligesom at vælge de rigtige brikker i et puslespil. </s>

<|user|> Forklar mig Large Language Models som om jeg var 5 år gammel. Max. 3 sætninger.</s>
"""
res = generator(prompt)
print(res[0]["generated_text"])

Returns 🤣😅

     
<|assistant|>
En Large Language Model (LLM) er en computerprogram der kan læse og forstå mange sprog. Det betyder at den kan skrive og tale med dig, hvis du vil det. Den kan også huske ting fra tidligere samtaler eller tekster. LLMs bliver bedre ved at lære mere og mere. De er så store, at de kræver meget computermagasin, men de er stadig ikke perfekte endnu. Men de kommer snart nær! 

(Note: "Large" betyder stort.)
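
The prompt above follows the Zephyr chat format with <|system|>, <|user|> and <|assistant|> markers. Instead of writing those tokens by hand, the prompt can presumably also be built with the tokenizer's chat template (this assumes the Zephyr template is inherited from the base model's tokenizer and a transformers version that supports apply_chat_template):

# Build an equivalent prompt via the tokenizer's chat template
# (assumes the Zephyr chat template is available on this tokenizer)
messages = [
    {"role": "system", "content": "Du er en god og sød assistent"},
    {"role": "user", "content": "Forklar mig Large Language Models som om jeg var 5 år gammel. Max. 3 sætninger."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

res = generator(prompt)
print(res[0]["generated_text"])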
prompt = """<|system|> Du er en god og sød assistent </s> 

<|user|> Kan du opsummere følgende i max. 3 sætninger:

Flere partier forlader forhandlinger om skattelettelser
Både SF og Dansk Folkeparti har vendt regeringen ryggen og kommer ikke til at være med i skatteaftale.
Det bliver ikke SF eller Dansk Folkeparti, der lægger stemmer til regeringens plan om skattelettelser. Når regeringen på et tidspunkt laver en endelig aftale om skattelettelser til millioner af danskere, bliver det uden SF og Dansk Folkeparti. Begge partier har her til aften forladt forhandlingerne med regeringen.
For SF handler det om, at de ti milliarder kroner til skattelettelser bliver brugt forkert, mener formand Pia Olsen Dyhr. "Vi vil ikke bruge ti milliarder kroner af danskernes penge på skattelettelser til de rige. Vi vil gerne bruge dem på velfærd," siger hun. SF har derfor forsøgt at få regeringen til at droppe planerne om at lette topskatten, men uden held, siger Pia Olsen Dyhr.
Dansk Folkepartis finansordfører, Peter Kofod, bekræfter, at de også har forladt forhandlingerne. "Regeringen vil give skattelettelser for ti milliarder til dem, der i forvejen tjener rigtig mange penge. Men de vil ikke rigtig give noget til folkepensionister og førtidspensionister. Den balance synes vi er fuldstændig skæv, så det kan vi ikke være med i," siger han.
Regeringen præsenterede sit forslag til skattelettelser for ti milliarder kroner i november. I forslaget vil regeringen blandt andet lette skatten ved at hæve beskæftigelsesfradraget, hvilket vil give en lettelse i skatten til alle, der er i arbejde. Det giver til gengæld ikke en skattelettelse til eksempelvis pensionister.
Samtidig vil regeringen lette en del af topskatten - men samtidig indføre en toptopskat for personer, der tjener over 2,5 millioner kroner om året. Regeringen har selv lagt vægt på, at det hævede beskæftigelsesfradrag ville belønne buschauffører og kassedamer med skattelettelser. Men beregninger har siden vist, at det er højtlønnede som læger, advokater og ingeniører, der får langt de største skattelettelser.
Af de ti milliarder kroner havde regeringen afsat 500 millioner kroner, som de andre partier kunne forhandle om. De penge bliver det nu ikke SF eller Dansk Folkeparti, der kommer til at fordele. Ifølge nyhedsbureauet Ritzau har Enhedslisten allerede forladt forhandlingerne.
</s>

"""
res = generator(prompt)
print(res[0]["generated_text"])

Returns

<|assistant|>
SF og Dansk Folkeparti har forladt forhandlingerne om skattelettelser, da de ikke ønsker at bruge ti milliarder kroner på skattelettelser til de rige. SF vil bruge pengene på velfærd, mens Dansk Folkeparti mener, at den balance er fuldstændig skæv. Regeringen vil lette skatten ved at hæve beskæftigelsesfradraget, men samtidig indføre en toptopskat for personer, der tjener over 2,5 millioner kroner om året. Beregninger har vist, at det er højtlønnede som læger, advokater og ingeniører, der får langt de største skattelettelser. 
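
As mentioned above, the model is also meant for simple 0-shot tasks such as classification and label generation. The snippet below is a minimal sketch of a 0-shot sentiment classification prompt using the same generator; the example text and label set are made up for illustration:

# 0-shot classification sketch (hypothetical example text and labels)
text = "Maden var fantastisk, og personalet var meget venligt."
labels = ["positiv", "negativ", "neutral"]

prompt = f"""<|system|> Du er en god og sød assistent </s>

<|user|> Klassificér følgende tekst som en af: {', '.join(labels)}. Svar kun med én label.

Tekst: {text}</s>

<|assistant|>
"""

res = generator(prompt)
print(res[0]["generated_text"])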