
unsloth/Meta-Llama-3.1-8B-bnb-4bit fine-tuning after Continued Pretraining

(TREX-Lab at Seoul Cyber University)

Summary

  • Base model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
  • Datasets: wikimedia/wikipedia (continued pretraining), FreedomIntelligence/alpaca-gpt4-korean (fine-tuning)
  • This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
  • Goal: test whether a large language model can be fine-tuned on a single A30 GPU (successful); see the setup sketch after this list.
  • Developed by: TREX-Lab at Seoul Cyber University
  • Language(s) (NLP): Korean
  • Fine-tuned from model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
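
Both training stages start from the 4-bit base model loaded through Unsloth, with LoRA adapters attached so everything fits on a single A30. The snippet below is a minimal sketch of that setup; max_seq_length, the LoRA rank/alpha, and the target module list are illustrative choices, not values stated in this card.

  from unsloth import FastLanguageModel

  # Load the 4-bit base model (max_seq_length is an assumed value)
  model, tokenizer = FastLanguageModel.from_pretrained(
      model_name = "unsloth/Meta-Llama-3.1-8B-bnb-4bit",
      max_seq_length = 2048,
      load_in_4bit = True,
  )

  # Attach LoRA adapters; r, lora_alpha and target_modules are illustrative.
  # embed_tokens / lm_head are included so the separate embedding_learning_rate
  # used below has trainable embedding weights to act on.
  model = FastLanguageModel.get_peft_model(
      model,
      r = 16,
      lora_alpha = 16,
      target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj",
                        "embed_tokens", "lm_head"],
      use_gradient_checkpointing = "unsloth",
  )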

Continued Pretraining

  warmup_steps = 10
  learning_rate = 5e-5
  embedding_learning_rate = 1e-5
  bf16 = True
  optim = "adamw_8bit"
  weight_decay = 0.01
  lr_scheduler_type = "linear"
  final training loss : 1.1716
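
These values follow Unsloth's continued-pretraining recipe, in which UnslothTrainingArguments adds a separate embedding_learning_rate (here 1e-5) for the embedding/lm_head weights while the LoRA adapters use the main learning_rate. The sketch below shows one way to wire them up; the batch size, accumulation steps, and the cpt_dataset variable (Korean Wikipedia text with a "text" column) are assumptions, not values taken from this card.

  from unsloth import UnslothTrainer, UnslothTrainingArguments

  trainer = UnslothTrainer(
      model = model,
      tokenizer = tokenizer,
      train_dataset = cpt_dataset,          # assumed: wikimedia/wikipedia (Korean) with a "text" column
      dataset_text_field = "text",
      args = UnslothTrainingArguments(
          per_device_train_batch_size = 2,  # assumption
          gradient_accumulation_steps = 8,  # assumption
          warmup_steps = 10,
          learning_rate = 5e-5,
          embedding_learning_rate = 1e-5,   # lower LR for embed_tokens / lm_head
          bf16 = True,
          optim = "adamw_8bit",
          weight_decay = 0.01,
          lr_scheduler_type = "linear",
          output_dir = "outputs_cpt",
      ),
  )
  trainer.train()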

Fine-Tuning Detail

  warmup_steps = 10
  learning_rate = 5e-5
  embedding_learning_rate = 1e-5
  bf16 = True
  optim = "adamw_8bit"
  weight_decay = 0.001
  lr_scheduler_type = "linear"
  final training loss : 0.6996
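
The fine-tuning stage reuses the same UnslothTrainer setup, only with weight_decay = 0.001 and the instruction data in place of raw Wikipedia text. Each example has to be rendered into the Korean instruction/response template shown in the Usage sections below; the snippet here is one plausible formatting step, and the assumed dataset schema (a ShareGPT-style "conversations" list of human/gpt turns) should be checked against the dataset card.

  from datasets import load_dataset

  raw_dataset = load_dataset("FreedomIntelligence/alpaca-gpt4-korean", split = "train")

  def format_example(example):
      # Assumed schema: example["conversations"] = [{"from": "human", ...}, {"from": "gpt", ...}]
      turns = example["conversations"]
      instruction, response = turns[0]["value"], turns[1]["value"]
      # model_prompt is the Korean template defined in the Usage examples below;
      # appending eos_token lets generation stop cleanly after the response.
      return {"text": model_prompt.format(instruction, response) + tokenizer.eos_token}

  sft_dataset = raw_dataset.map(format_example)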

Usage #1

  from unsloth import FastLanguageModel

  # Load the fine-tuned model (loading values are assumptions; reuse the
  # model/tokenizer from training if they are already in memory)
  model, tokenizer = FastLanguageModel.from_pretrained(
      model_name = "LEESM/llama-3-8b-bnb-4b-kowiki231101",
      max_seq_length = 2048,
      load_in_4bit = True,
  )

  # Alpaca-style Korean prompt: "The following is an instruction that describes a task.
  # Write a response that appropriately completes the request."
  model_prompt = """다음은 작업을 설명하는 명령입니다. 요청을 적절하게 완료하는 응답을 작성하세요.

  ### 지침:
  {}

  ### 응답:
  {}"""

  FastLanguageModel.for_inference(model)  # switch to Unsloth's fast inference mode
  inputs = tokenizer(
      [
          model_prompt.format(
              # "Who is Admiral Yi Sun-sin? Please tell me about him in detail."
              "이순신 장군은 누구인가요 ? 자세하게 알려주세요.",
              "",  # leave the response slot empty for the model to fill
          )
      ],
      return_tensors = "pt",
  ).to("cuda")

  outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
  print(tokenizer.batch_decode(outputs)[0])

Usage #2

  from unsloth import FastLanguageModel
  from transformers import TextStreamer

  # Prompt (same Alpaca-style Korean template as in Usage #1)
  model_prompt = """다음은 작업을 설명하는 명령입니다. 요청을 적절하게 완료하는 응답을 작성하세요.

  ### 지침:
  {}

  ### 응답:
  {}"""

  FastLanguageModel.for_inference(model)  # model and tokenizer loaded as in Usage #1
  inputs = tokenizer(
      [
          model_prompt.format(
              # "Describe the Earth comprehensively."
              "지구를 광범위하게 설명하세요.",
              "",  # leave the response slot empty for the model to fill
          )
      ],
      return_tensors = "pt",
  ).to("cuda")

  # Stream the generated tokens to stdout as they are produced.
  text_streamer = TextStreamer(tokenizer)
  outputs = model.generate(
      **inputs,
      streamer = text_streamer,
      max_new_tokens = 128,
      repetition_penalty = 1.1,  # values > 1.0 discourage repetition
  )