{ "cells": [ { "cell_type": "markdown", "source": [ "To run this, press \"*Runtime*\" and press \"*Run all*\" on a **free** Tesla T4 Google Colab instance!\n", "
\n", "\n", "To install Unsloth on your own computer, follow the installation instructions on our Github page [here](https://github.com/unslothai/unsloth#installation-instructions---conda).\n", "\n", "You will learn how to do [data prep](#Data), how to [train](#Train), how to [run the model](#Inference), & [how to save it](#Save) (eg for Llama.cpp)." ], "metadata": { "id": "IqM-T1RTzY6C" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2eSvM9zX_2d3" }, "outputs": [], "source": [ "%%capture\n", "# Installs Unsloth, Xformers (Flash Attention) and all other packages!\n", "!pip install unsloth\n", "# Get latest Unsloth\n", "!pip uninstall unsloth -y && pip install --upgrade --no-cache-dir \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"" ] }, { "cell_type": "markdown", "source": [ "* We support Llama, Mistral, CodeLlama, TinyLlama, Vicuna, Open Hermes etc\n", "* And Yi, Qwen ([llamafied](https://huggingface.co/models?sort=trending&search=qwen+llama)), Deepseek, all Llama, Mistral derived archs.\n", "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n", "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n", "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n", "* [**NEW**] We make Gemma 6 trillion tokens **2.5x faster**! See our [Gemma notebook](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)" ], "metadata": { "id": "r2v_X2fA0Df5" } }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 403, "referenced_widgets": [ "4b6316870ae142ba9abf2b399edaaeea", "c3315eb9e7ef42c298b7ba5f7636c373", "b553db45b4204434b015593b84861d9d", "ee88fbb010e349d4a840275150cc59eb", "5d9ac0fdb1c24c59a89390b3fbb6481e", "ee2d57cc909b47e7a96a6348c3bc64c0", "e8f5782342af48549b22edf99517e71d", "b520ce59a04444f6b5c5c9351c02877d", "52a94645e9ca44768f44d46e5a4f2f87", "f800e837619e4ebbb860a5648a21a078", "be41d9ff020b4a4a8f7ec9a5c5aba8f8", "78d1378806944b54b6dc7d9088d0540a", "16486c7ebb0c40cbb05b0f1a146004bf", "3dc95ac19cb948b78179a8617917ec44", "aacd5f3b4d6c4730b13c4e75b461bcca", "9c439e6ecc57445092c0ea43d6c46640", "07cde8ce0ca54e3a95a183541d26bc2e", "2f6b00e4bbfc48f5881a8ae386fa8a3a", "e15aa09fe7d74cb7b348bfe142a8b172", "63f47e74c4b644f1b8e35f392a6a626e", "1f5d1e8ff5ea47e4aa04c12bdf2f8df7", "b25110a1ed2b4fe8ae7fc5cdd724362b", "661c67646b5948449c2d71f5814776b4", "2ea177d7a1904285a4f641c7581736b9", "8234c35bfe33441a8fdcf48dccc0563b", "50d47462e5ea45a7a3e75b88fa3e272e", "75c8a575acd94d0ab2ee673af56f56c1", "f07da0c5e3da4213ade400cb3a6db827", "6ef59867eace4a8aba3bc6652b579b7e", "6d603e8d4b504239adf1820f6c1b608e", "cab2232116d647ef94aa89ffa0515774", "077d992a3da3424194ee45301e4c02ba", "0c5fe7ba93aa496a9eea2bdb8d692fb5", "3f40e8b99e1043e8917ff2c3e1e3dd29", "0fabb8d03b0d42178538a829c843dfed", "5c6d56ca98e146d6853289851fe5baf7", "07e004c9526c4481836d68ef53c9b669", "128211077ca940d499d90667ad27dacb", "6e6b1089e6504aa5a2ccc61953b5e433", "0ef52843237c48dd9566c4dff3e7b42c", "c7fd527ba5e74ddfb3a1376bb219b6c1", "c4a594e3a66d49989c914bbc93b679c1", "c87c9608e866410ba46d49b44d0867ca", "e5a03a389d9a4f68bbc7c7939ecf7206", "ced6f4b20c304ab6b19858766505a596", "62ddc3bbcdcc4cb2993828d860ffaacd", "11d662dad31041baba79243d3ed687c0", 
"4f1a242530a3417c8309a7eb824f0c4a", "10ba528042394217b6cf5d3bd6288db0", "f5a09713f5d2419cac940042d7900921", "eb11542214a344c7827fa28472f4b5ef", "cd0f5f8d4f3345e09bb76f11ad197934", "ccfcff888b8941a9987071ea8c284d17", "c947d50a70694e0f8cbe5351b8219938", "e32de93685f249828a0a73a52cf1438b", "3f03fca984754fd7897ac389e661511e", "8907ec143bb8415fb045fa3480a0c024", "7347484610f9459ba1b11f32825b9a6f", "63f7b027083f44139b387ed6595a5f10", "2a5c6adc4491433392b708c854d05786", "b1eb529e794c4afb9591a92f383234c7", "e2906dc03b4640d6aea44b52f6292189", "04bb72bc05384a7fab8384b96a9165b0", "a9c322a9110d44b8b06438c6f6f5acac", "de7ab932d6954f73adef9fffaa6afee0", "a5b22ec5d43345a2b957301fed556b94", "ff1229fe81b2467892fcebb5516c586a", "9bc4151c386e4fe49b2aa882159aff88", "4d2e7f9d8ec34c24a869d75c1e08cf54", "e2b109ab9d664847b1611783ed75b240", "9195aa2c13a247b6836701b7d47f8f6d", "bd68af52e4014418b87adc1338bc7047", "0f5ea613817347839697acfec3aef00b", "8d8d93e079fb43479c6aa736837482c7", "2178818d79ff4963b663e74c04303912", "9653b6a3d7fd4014ad6261b8a91d0225", "4f2138b17ad24fa894566001543e7e55" ] }, "id": "QmUBVEnvCDJv", "outputId": "5eff0d61-05b4-471c-eea2-c2e84a915109" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "/usr/local/lib/python3.10/dist-packages/unsloth/__init__.py:66: UserWarning: Running `ldconfig /usr/lib64-nvidia` to link CUDA.\n", " warnings.warn(\n" ] }, { "output_type": "display_data", "data": { "text/plain": [ "config.json: 0%| | 0.00/1.05k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "4b6316870ae142ba9abf2b399edaaeea" } }, "metadata": {} }, { "output_type": "stream", "name": "stdout", "text": [ "==((====))== Unsloth: Fast Mistral patching release 2024.2\n", " \\\\ /| GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.\n", "O^O/ \\_/ \\ Pytorch: 2.1.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.\n", "\\ / Bfloat16 = FALSE. Xformers = 0.0.22.post7. FA = False.\n", " \"-____-\" Apache 2 free license: http://github.com/unslothai/unsloth\n" ] }, { "output_type": "stream", "name": "stderr", "text": [ "You passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. 
The `quantization_config` attribute will be overwritten with the one you passed to `from_pretrained`.\n" ] }, { "output_type": "display_data", "data": { "text/plain": [ "model.safetensors: 0%| | 0.00/4.13G [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "78d1378806944b54b6dc7d9088d0540a" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "generation_config.json: 0%| | 0.00/116 [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "661c67646b5948449c2d71f5814776b4" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer_config.json: 0%| | 0.00/971 [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "3f40e8b99e1043e8917ff2c3e1e3dd29" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer.model: 0%| | 0.00/493k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "ced6f4b20c304ab6b19858766505a596" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "tokenizer.json: 0%| | 0.00/1.80M [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "3f03fca984754fd7897ac389e661511e" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "special_tokens_map.json: 0%| | 0.00/438 [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "ff1229fe81b2467892fcebb5516c586a" } }, "metadata": {} } ], "source": [ "from unsloth import FastLanguageModel\n", "import torch\n", "max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!\n", "dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+\n", "load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.\n", "\n", "# 4bit pre quantized models we support for 4x faster downloading + no OOMs.\n", "fourbit_models = [\n", " \"unsloth/mistral-7b-bnb-4bit\",\n", " \"unsloth/mistral-7b-instruct-v0.2-bnb-4bit\",\n", " \"unsloth/llama-2-7b-bnb-4bit\",\n", " \"unsloth/llama-2-13b-bnb-4bit\",\n", " \"unsloth/codellama-34b-bnb-4bit\",\n", " \"unsloth/tinyllama-bnb-4bit\",\n", " \"unsloth/gemma-7b-bnb-4bit\", # New Google 6 trillion tokens model 2.5x faster!\n", " \"unsloth/gemma-2b-bnb-4bit\",\n", "] # More models at https://huggingface.co/unsloth\n", "\n", "model, tokenizer = FastLanguageModel.from_pretrained(\n", " model_name = \"unsloth/mistral-7b-bnb-4bit\", # Choose ANY! eg teknium/OpenHermes-2.5-Mistral-7B\n", " max_seq_length = max_seq_length,\n", " dtype = dtype,\n", " load_in_4bit = load_in_4bit,\n", " # token = \"hf_...\", # use one if using gated models like meta-llama/Llama-2-7b-hf\n", ")" ] }, { "cell_type": "markdown", "source": [ "We now add LoRA adapters so we only need to update 1 to 10% of all parameters!" 
], "metadata": { "id": "SXd9bTZd1aaL" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "6bZsfBuZDeCL", "colab": { "base_uri": "https://localhost:8080/" }, "outputId": "b630cc80-ff95-45a2-cc0d-38666010d73b" }, "outputs": [ { "output_type": "stream", "name": "stderr", "text": [ "Unsloth 2024.2 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.\n" ] } ], "source": [ "model = FastLanguageModel.get_peft_model(\n", " model,\n", " r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128\n", " target_modules = [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n", " \"gate_proj\", \"up_proj\", \"down_proj\",],\n", " lora_alpha = 16,\n", " lora_dropout = 0, # Supports any, but = 0 is optimized\n", " bias = \"none\", # Supports any, but = \"none\" is optimized\n", " use_gradient_checkpointing = True,\n", " random_state = 3407,\n", " use_rslora = False, # We support rank stabilized LoRA\n", " loftq_config = None, # And LoftQ\n", ")" ] }, { "cell_type": "markdown", "source": [ "\n", "### Data Prep\n", "We now use the Alpaca dataset from [yahma](https://huggingface.co/datasets/yahma/alpaca-cleaned), which is a filtered version of 52K of the original [Alpaca dataset](https://crfm.stanford.edu/2023/03/13/alpaca.html). You can replace this code section with your own data prep.\n", "\n", "**[NOTE]** To train only on completions (ignoring the user's input) read TRL's docs [here](https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only).\n", "\n", "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output!! Otherwise you'll get infinite generations!\n", "\n", "If you want to use the `ChatML` template for ShareGPT datasets, try our conversational [notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing).\n", "\n", "For text completions like novel writing, try this [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)." 
], "metadata": { "id": "vITh0KVJ10qX" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "LjY75GoYUCB8", "colab": { "base_uri": "https://localhost:8080/", "height": 145, "referenced_widgets": [ "9ae302db43ac477fbfac4524d9aa3382", "094b6478ec2b402fac8d4fc0c2921687", "64ae5dbec3fa4ea3abbada99ddc5b95f", "d2f03ac06cf24ecbaf9fecebfbc75704", "e00d429535b2418cb3f2caafcb661b2b", "2aaf95a7cdeb424bb369b658b0d91850", "439a8d79e8064125b694e23d8b3cda99", "05c7968b32744985bdd67900cc5cc5d0", "e90def295593458e99241c538186c3b5", "8fb7fa4f8a274d3eb3e58953b1e673b0", "6404a027f5b0439786996b45695ea17c", "e7163211ff064cf68f2a9a92a1a44b14", "75da1ca588cd427083ebcc98af74ad45", "a0e64b31b0744d6f9a540d6d71f29898", "0fda3d9d81764040b701f44c022d3a97", "a629a4f39f904d918cfb1940f2756cb1", "25b8bf991b234f30af21842b423e8c7d", "dd1377f789ad4439b6bf99843fe4ca9a", "92a3e0edfd034150b1ef3f07f21957c5", "caf3a07d739144cca5a246be40ed1d2c", "3f4d8437a14d42f1adb2314b8f090f91", "de5acf3da485445d9955603111bc9bb1", "49606713136e4b4f809e88edec0ee31f", "41a9ff9396454511ac9ec7cb8cd73999", "90f3e5054101499a9d623edd73a58aaf", "8d2a2929aed1450f909f67a82efb072c", "00daf2c57f724c5b8581554ca2f2b2c1", "221f374f77914569ad4cf0464e8c183e", "f52c132e71ce4f919abcff5a7df10a52", "45e6fb6a4f144daab3f859cadabff196", "fdcf4a1a700f4688ab19d55af784cf42", "f02cb5a79f404c3db58a86e010d929b5", "64763f108b13475d85f44f4eb1f4a7f7", "add655fe6f304c5b8c7f5c374f73fe84", "76de4b26f31b45f9a8eb31413badd4d0", "4c31d1803e134ed38a67c9cd61097a28", "3edd0e9e876f4429be155e3378922f30", "b1e594f36f3c4751b648abcbb74c28cd", "304eef6608044f45ae0e78c4ce32233a", "fb2e02192e1f4972bc75441d217168e1", "92d2d40d99304e18b11bcb01093ad510", "4dc2fb7510824a78bd12f58fc2594a89", "8b01eafd97dd48cfb86952d0946aa856", "0ba86603e45d473f89ce572c6874f2fe" ] }, "outputId": "9f40f734-788c-4793-c1af-e9d003337612" }, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ "Downloading readme: 0%| | 0.00/11.6k [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "9ae302db43ac477fbfac4524d9aa3382" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Downloading data: 0%| | 0.00/44.3M [00:00, ?B/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "e7163211ff064cf68f2a9a92a1a44b14" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Generating train split: 0 examples [00:00, ? examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "49606713136e4b4f809e88edec0ee31f" } }, "metadata": {} }, { "output_type": "display_data", "data": { "text/plain": [ "Map: 0%| | 0/51760 [00:00, ? examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "add655fe6f304c5b8c7f5c374f73fe84" } }, "metadata": {} } ], "source": [ "alpaca_prompt = \"\"\"Below is an instruction that describes a task, paired with an input that provides further context. 
Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "{}\n", "\n", "### Input:\n", "{}\n", "\n", "### Response:\n", "{}\"\"\"\n", "\n", "EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN\n", "def formatting_prompts_func(examples):\n", " instructions = examples[\"instruction\"]\n", " inputs = examples[\"input\"]\n", " outputs = examples[\"output\"]\n", " texts = []\n", " for instruction, input, output in zip(instructions, inputs, outputs):\n", " # Must add EOS_TOKEN, otherwise your generation will go on forever!\n", " text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN\n", " texts.append(text)\n", " return { \"text\" : texts, }\n", "pass\n", "\n", "from datasets import load_dataset\n", "dataset = load_dataset(\"yahma/alpaca-cleaned\", split = \"train\")\n", "dataset = dataset.map(formatting_prompts_func, batched = True,)" ] }, { "cell_type": "markdown", "source": [ "\n", "### Train the model\n", "Now let's use Hugging Face TRL's `SFTTrainer`! More docs here: [TRL SFT docs](https://huggingface.co/docs/trl/sft_trainer). We do 60 steps to speed things up, but for a full run you can set `num_train_epochs = 1` and turn off `max_steps` by setting it to `None`. We also support TRL's `DPOTrainer`!" ], "metadata": { "id": "idAEIeSQ3xdS" } }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "95_Nn-89DhsL", "colab": { "base_uri": "https://localhost:8080/", "height": 49, "referenced_widgets": [ "28408ab4afb7494f8cb2f834458f411c", "b85ef639d1f54fb3ac8902fb66be79eb", "0c266c0637064fbe8d548d0876c6b315", "aa315e0b196d4b80a114690a689d5fbf", "5b6f098adcb84717b162152ddaadfa0b", "caaf2d5a555f41fab3b74f3096375ba9", "1789d926e02549b19e4aca59f44f320a", "3816c6f816784e5ca11a458bbf790031", "954d6540772d40a99f642c2524db3a0c", "fdc5073e7cc24edc8c5b920a363b4a40", "f69d18a0b745434d8d078939e3a0c7fc" ] }, "outputId": "4b809e6d-271f-446f-dec8-abe0d13259f8" }, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [ "Map (num_proc=2): 0%| | 0/51760 [00:00, ? examples/s]" ], "application/vnd.jupyter.widget-view+json": { "version_major": 2, "version_minor": 0, "model_id": "28408ab4afb7494f8cb2f834458f411c" } }, "metadata": {} } ], "source": [ "from trl import SFTTrainer\n", "from transformers import TrainingArguments\n", "\n", "trainer = SFTTrainer(\n", " model = model,\n", " tokenizer = tokenizer,\n", " train_dataset = dataset,\n", " dataset_text_field = \"text\",\n", " max_seq_length = max_seq_length,\n", " dataset_num_proc = 2,\n", " packing = False, # Can make training 5x faster for short sequences.\n", " args = TrainingArguments(\n", " per_device_train_batch_size = 2,\n", " gradient_accumulation_steps = 4,\n", " warmup_steps = 5,\n", " max_steps = 60,\n", " learning_rate = 2e-4,\n", " fp16 = not torch.cuda.is_bf16_supported(),\n", " bf16 = torch.cuda.is_bf16_supported(),\n", " logging_steps = 1,\n", " optim = \"adamw_8bit\",\n", " weight_decay = 0.01,\n", " lr_scheduler_type = \"linear\",\n", " seed = 3407,\n", " output_dir = \"outputs\",\n", " ),\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2ejIt2xSNKKp", "colab": { "base_uri": "https://localhost:8080/" }, "cellView": "form", "outputId": "4815a050-0c0f-4a6a-9d93-b01c44eaea35" }, "outputs": [ { "output_type": "stream", "name": "stdout", "text": [
"GPU = Tesla T4. Max memory = 14.748 GB.\n", "4.625 GB of memory reserved.\n" ] } ], "source": [ "#@title Show current memory stats\n", "gpu_stats = torch.cuda.get_device_properties(0)\n", "start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n", "max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)\n", "print(f\"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.\")\n", "print(f\"{start_gpu_memory} GB of memory reserved.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yqxqAZ7KJ4oL", "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "outputId": "3cf26aac-6042-4458-c4a6-d8849efb6a95" }, "outputs": [ { "output_type": "display_data", "data": { "text/plain": [
"Step | Training Loss\n",
" 1 | 1.408900\n",
" 2 | 1.709700\n",
" 3 | 1.148400\n",
" 4 | 1.177400\n",
" 5 | 0.986600\n",
" 6 | 0.998200\n",
" 7 | 1.032900\n",
" 8 | 0.887500\n",
" 9 | 0.937400\n",
"10 | 0.931900\n",
"11 | 0.902100\n",
"12 | 0.923200\n",
"13 | 0.813900\n",
"14 | 0.978600\n",
"15 | 0.820900\n",
"16 | 0.879200\n",
"17 | 1.005600\n",
"18 | 0.720100\n",
"19 | 0.588800\n",
"20 | 0.721400\n",
"21 | 0.884100\n",
"22 | 0.929400\n",
"23 | 0.754300\n",
"24 | 0.748600\n",
"25 | 0.895600\n",
"26 | 0.735300\n",
"27 | 0.859500\n",
"28 | 0.861600\n",
"29 | 0.785600\n",
"30 | 0.746100\n",
"31 | 0.793800\n",
"32 | 0.757500\n",
"33 | 0.858700\n",
"34 | 0.851300\n",
"35 | 0.739500\n",
"36 | 0.754100\n",
"37 | 0.767000\n",
"38 | 0.801800\n",
"39 | 0.885900\n",
"40 | 0.927400\n",
"41 | 0.734400\n",
"42 | 0.596500\n",
"43 | 0.699700\n",
"44 | 0.661200\n",
"45 | 0.965600\n",
"46 | 0.576600\n",
"47 | 0.752800\n",
"48 | 0.757400\n",
"49 | 0.715200\n",
"50 | 0.961300\n",
"51 | 0.829200\n",
"52 | 0.713100\n",
"53 | 0.843700\n",
"54 | 0.921400\n",
"55 | 0.715100\n",
"56 | 0.948600\n",
"57 | 0.740700\n",
"58 | 0.893900\n",
"59 | 0.772700\n",
"60 | 0.880000"
]
},
"metadata": {}
}
],
"source": [
"trainer_stats = trainer.train()"
]
},
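{
"cell_type": "markdown",
"source": [
"The per-step losses shown above are also recorded programmatically. As a quick sanity check you can summarise them - a minimal sketch, relying only on the standard `trainer.state.log_history` from `transformers`:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Summarise the recorded training losses (one entry per logging step).\n",
"losses = [log[\"loss\"] for log in trainer.state.log_history if \"loss\" in log]\n",
"print(f\"First loss: {losses[0]:.4f}. Last loss: {losses[-1]:.4f}. Min loss: {min(losses):.4f}.\")"
],
"metadata": {},
"execution_count": null,
"outputs": []
},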
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pCqnaKmlO1U9",
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "cf63d152-e152-468c-ba0d-938e0d2f71a0"
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"488.38 seconds used for training.\n",
"8.14 minutes used for training.\n",
"Peak reserved memory = 6.846 GB.\n",
"Peak reserved memory for training = 2.221 GB.\n",
"Peak reserved memory % of max memory = 46.42 %.\n",
"Peak reserved memory for training % of max memory = 15.06 %.\n"
]
}
],
"source": [
"#@title Show final memory and time stats\n",
"used_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n",
"used_memory_for_lora = round(used_memory - start_gpu_memory, 3)\n",
"used_percentage = round(used_memory /max_memory*100, 3)\n",
"lora_percentage = round(used_memory_for_lora/max_memory*100, 3)\n",
"print(f\"{trainer_stats.metrics['train_runtime']} seconds used for training.\")\n",
"print(f\"{round(trainer_stats.metrics['train_runtime']/60, 2)} minutes used for training.\")\n",
"print(f\"Peak reserved memory = {used_memory} GB.\")\n",
"print(f\"Peak reserved memory for training = {used_memory_for_lora} GB.\")\n",
"print(f\"Peak reserved memory % of max memory = {used_percentage} %.\")\n",
"print(f\"Peak reserved memory for training % of max memory = {lora_percentage} %.\")"
]
},
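{
"cell_type": "markdown",
"source": [
"As a rough throughput estimate - a minimal sketch using only the settings from above (60 steps x batch size 2 x gradient accumulation 4 = 480 examples):"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# max_steps * per_device_train_batch_size * gradient_accumulation_steps\n",
"examples_seen = 60 * 2 * 4\n",
"print(f\"{examples_seen / trainer_stats.metrics['train_runtime']:.2f} examples per second.\")"
],
"metadata": {},
"execution_count": null,
"outputs": []
},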
{
"cell_type": "markdown",
"source": [
"\n",
"### Inference\n",
"Let's run the model! You can change the instruction and input - leave the output blank!"
],
"metadata": {
"id": "ekOmTR1hSNcr"
}
},
{
"cell_type": "code",
"source": [
"# alpaca_prompt = Copied from above\n",
"FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
"inputs = tokenizer(\n",
"[\n",
" alpaca_prompt.format(\n",
" \"Continue the fibonnaci sequence.\", # instruction\n",
" \"1, 1, 2, 3, 5, 8\", # input\n",
" \"\", # output - leave this blank for generation!\n",
" )\n",
"], return_tensors = \"pt\").to(\"cuda\")\n",
"\n",
"outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
"tokenizer.batch_decode(outputs)"
],
"metadata": {
"id": "kR3gIAX-SM2q",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "5b71f982-38c0-44c8-a4e5-58cd20b5a585"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"[' Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\\n\\n### Instruction:\\nContinue the fibonnaci sequence.\\n\\n### Input:\\n1, 1, 2, 3, 5, 8\\n\\n### Response:\\nThe next number in the Fibonacci sequence is 13.']"
]
},
"metadata": {},
"execution_count": 9
}
]
},
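{
"cell_type": "markdown",
"source": [
"The decoded output above echoes the whole prompt. To print only the newly generated tokens, you can slice on the prompt length - a minimal sketch reusing `inputs` and `outputs` from the cell above:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Everything before prompt_length is just the prompt echoed back.\n",
"prompt_length = inputs[\"input_ids\"].shape[1]\n",
"print(tokenizer.batch_decode(outputs[:, prompt_length:], skip_special_tokens = True)[0])"
],
"metadata": {},
"execution_count": null,
"outputs": []
},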
{
"cell_type": "markdown",
"source": [
" You can also use a `TextStreamer` for continuous inference - so you can see the generation token by token, instead of waiting the whole time!"
],
"metadata": {
"id": "CrSvZObor0lY"
}
},
{
"cell_type": "code",
"source": [
"# alpaca_prompt = Copied from above\n",
"FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
"inputs = tokenizer(\n",
"[\n",
" alpaca_prompt.format(\n",
" \"Continue the fibonnaci sequence.\", # instruction\n",
" \"1, 1, 2, 3, 5, 8\", # input\n",
" \"\", # output - leave this blank for generation!\n",
" )\n",
"], return_tensors = \"pt\").to(\"cuda\")\n",
"\n",
"from transformers import TextStreamer\n",
"text_streamer = TextStreamer(tokenizer)\n",
"_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "e2pEuRb1r2Vg",
"outputId": "084aab62-2122-436a-c0cb-8871986640eb"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.\n"
]
},
{
"output_type": "stream",
"name": "stdout",
"text": [
" Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n",
"\n",
"### Instruction:\n",
"Continue the fibonnaci sequence.\n",
"\n",
"### Input:\n",
"1, 1, 2, 3, 5, 8\n",
"\n",
"### Response:\n",
"The next number in the Fibonacci sequence is 13.\n"
]
}
]
},
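{
"cell_type": "markdown",
"source": [
"`TextStreamer` also accepts a `skip_prompt` argument if you only want to stream the newly generated tokens, without re-printing the prompt:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"from transformers import TextStreamer\n",
"# skip_prompt = True suppresses the echoed prompt and streams only new tokens.\n",
"text_streamer = TextStreamer(tokenizer, skip_prompt = True)\n",
"_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},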
{
"cell_type": "markdown",
"source": [
"\n",
"### Saving, loading finetuned models\n",
"To save the final model as LoRA adapters, either use Huggingface's `push_to_hub` for an online save or `save_pretrained` for a local save.\n",
"\n",
"**[NOTE]** This ONLY saves the LoRA adapters, and not the full model. To save to 16bit or GGUF, scroll down!"
],
"metadata": {
"id": "uMuVrWbjAzhc"
}
},
{
"cell_type": "code",
"source": [
"model.save_pretrained(\"lora_model\") # Local saving\n",
"# model.push_to_hub(\"your_name/lora_model\", token = \"...\") # Online saving"
],
"metadata": {
"id": "upcOlWe7A1vc"
},
"execution_count": null,
"outputs": []
},
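{
"cell_type": "markdown",
"source": [
"To confirm what was written, you can list the folder - a LoRA save should contain only the small adapter weights and config, not the full model (a minimal sketch using just the standard library):"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"import os\n",
"# Expect adapter_config.json plus the adapter weight file(s).\n",
"print(os.listdir(\"lora_model\"))"
],
"metadata": {},
"execution_count": null,
"outputs": []
},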
{
"cell_type": "markdown",
"source": [
"Now if you want to load the LoRA adapters we just saved for inference, set `False` to `True`:"
],
"metadata": {
"id": "AEEcJ4qfC7Lp"
}
},
{
"cell_type": "code",
"source": [
"if False:\n",
" from unsloth import FastLanguageModel\n",
" model, tokenizer = FastLanguageModel.from_pretrained(\n",
" model_name = \"lora_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
" max_seq_length = max_seq_length,\n",
" dtype = dtype,\n",
" load_in_4bit = load_in_4bit,\n",
" )\n",
" FastLanguageModel.for_inference(model) # Enable native 2x faster inference\n",
"\n",
"# alpaca_prompt = You MUST copy from above!\n",
"\n",
"inputs = tokenizer(\n",
"[\n",
" alpaca_prompt.format(\n",
" \"What is a famous tall tower in Paris?\", # instruction\n",
" \"\", # input\n",
" \"\", # output - leave this blank for generation!\n",
" )\n",
"], return_tensors = \"pt\").to(\"cuda\")\n",
"\n",
"outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)\n",
"tokenizer.batch_decode(outputs)"
],
"metadata": {
"id": "MKX_XKs_BNZR",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "05e5a193-dab0-41db-e07c-4b3afbdd7932"
},
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stderr",
"text": [
"Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.\n"
]
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"[' Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\\n\\n### Instruction:\\nWhat is a famous tall tower in Paris?\\n\\n### Input:\\n\\n\\n### Response:\\nThe Eiffel Tower is a famous tall tower in Paris, France. It is located on the Champ de Mars and is one of the most recognizable structures in the world.']"
]
},
"metadata": {},
"execution_count": 12
}
]
},
{
"cell_type": "markdown",
"source": [
"You can also use Hugging Face's `AutoModelForPeftCausalLM`. Only use this if you do not have `unsloth` installed. It can be hopelessly slow, since `4bit` model downloading is not supported, and Unsloth's **inference is 2x faster**."
],
"metadata": {
"id": "QQMjaNrjsU5_"
}
},
{
"cell_type": "code",
"source": [
"if False:\n",
" # I highly do NOT suggest - use Unsloth if possible\n",
" from peft import AutoPeftModelForCausalLM\n",
" from transformers import AutoTokenizer\n",
" model = AutoPeftModelForCausalLM.from_pretrained(\n",
" \"lora_model\", # YOUR MODEL YOU USED FOR TRAINING\n",
" load_in_4bit = load_in_4bit,\n",
" )\n",
" tokenizer = AutoTokenizer.from_pretrained(\"lora_model\")"
],
"metadata": {
"id": "yFfaXG0WsQuE"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"### Saving to float16 for VLLM\n",
"\n",
"We also support saving to `float16` directly. Select `merged_16bit` for float16 or `merged_4bit` for int4. We also allow `lora` adapters as a fallback. Use `push_to_hub_merged` to upload to your Hugging Face account! You can go to https://huggingface.co/settings/tokens for your personal tokens."
],
"metadata": {
"id": "f422JgM9sdVT"
}
},
{
"cell_type": "code",
"source": [
"# Merge to 16bit\n",
"if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_16bit\",)\n",
"if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_16bit\", token = \"\")\n",
"\n",
"# Merge to 4bit\n",
"if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"merged_4bit\",)\n",
"if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"merged_4bit\", token = \"\")\n",
"\n",
"# Just LoRA adapters\n",
"if False: model.save_pretrained_merged(\"model\", tokenizer, save_method = \"lora\",)\n",
"if False: model.push_to_hub_merged(\"hf/model\", tokenizer, save_method = \"lora\", token = \"\")"
],
"metadata": {
"id": "iHjt_SMYsd3P"
},
"execution_count": null,
"outputs": []
},
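{
"cell_type": "markdown",
"source": [
"Once merged to 16bit, the `model` folder is a regular Hugging Face checkpoint, so vLLM can serve it directly. A minimal sketch, assuming you ran the `merged_16bit` save above and installed vLLM separately (`pip install vllm`):"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"if False:\n",
" from vllm import LLM, SamplingParams\n",
" llm = LLM(model = \"model\") # The folder written by save_pretrained_merged above\n",
" sampling_params = SamplingParams(max_tokens = 64)\n",
" outputs = llm.generate([\"What is a famous tall tower in Paris?\"], sampling_params)\n",
" print(outputs[0].outputs[0].text)"
],
"metadata": {},
"execution_count": null,
"outputs": []
},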
{
"cell_type": "markdown",
"source": [
"### GGUF / llama.cpp Conversion\n",
"To save to `GGUF` / `llama.cpp`, we support it natively now! We clone `llama.cpp` and we default save it to `q8_0`. We allow all methods like `q4_k_m`. Use `save_pretrained_gguf` for local saving and `push_to_hub_gguf` for uploading to HF.\n",
"\n",
"Some supported quant methods (full list on our [Wiki page](https://github.com/unslothai/unsloth/wiki#gguf-quantization-options)):\n",
"* `q8_0` - Fast conversion. High resource use, but generally acceptable.\n",
"* `q4_k_m` - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K.\n",
"* `q5_k_m` - Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K."
],
"metadata": {
"id": "TCv4vXHd61i7"
}
},
{
"cell_type": "code",
"source": [
"# Save to 8bit Q8_0\n",
"if False: model.save_pretrained_gguf(\"model\", tokenizer,)\n",
"if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, token = \"\")\n",
"\n",
"# Save to 16bit GGUF\n",
"if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"f16\")\n",
"if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"f16\", token = \"\")\n",
"\n",
"# Save to q4_k_m GGUF\n",
"if False: model.save_pretrained_gguf(\"model\", tokenizer, quantization_method = \"q4_k_m\")\n",
"if False: model.push_to_hub_gguf(\"hf/model\", tokenizer, quantization_method = \"q4_k_m\", token = \"\")"
],
"metadata": {
"id": "FqfebeAdT073"
},
"execution_count": null,
"outputs": []
},
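{
"cell_type": "markdown",
"source": [
"You can smoke-test the exported file in Python before moving it elsewhere. A minimal sketch, assuming the `q4_k_m` export above and `llama-cpp-python` installed separately (`pip install llama-cpp-python`) - the filename is illustrative, so check what `save_pretrained_gguf` actually wrote:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"if False:\n",
" from llama_cpp import Llama\n",
" llm = Llama(model_path = \"model/model-unsloth-Q4_K_M.gguf\") # Illustrative path\n",
" prompt = \"### Instruction:\\nContinue the fibonacci sequence.\\n\\n### Input:\\n1, 1, 2, 3, 5, 8\\n\\n### Response:\\n\"\n",
" out = llm(prompt, max_tokens = 64)\n",
" print(out[\"choices\"][0][\"text\"])"
],
"metadata": {},
"execution_count": null,
"outputs": []
},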
{
"cell_type": "markdown",
"source": [
"Now, use the `model-unsloth.gguf` file or `model-unsloth-Q4_K_M.gguf` file in `llama.cpp` or a UI based system like `GPT4All`. You can install GPT4All by going [here](https://gpt4all.io/index.html)."
],
"metadata": {
"id": "bDp0zNpwe6U_"
}
},
{
"cell_type": "markdown",
"source": [
"And we're done! If you have any questions on Unsloth, we have a [Discord](https://discord.gg/u54VK8m8tk) channel! If you find any bugs or want to keep updated with the latest LLM stuff, or need help, join projects etc, feel free to join our Discord!\n",
"\n",
"Some other links:\n",
"1. Zephyr DPO 2x faster [free Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing)\n",
"2. Llama 7b 2x faster [free Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing)\n",
"3. TinyLlama 4x faster full Alpaca 52K in 1 hour [free Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)\n",
"4. CodeLlama 34b 2x faster [A100 on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing)\n",
"5. Mistral 7b [free Kaggle version](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook)\n",
"6. We also did a [blog](https://huggingface.co/blog/unsloth-trl) with 🤗 HuggingFace, and we're in the TRL [docs](https://huggingface.co/docs/trl/main/en/sft_trainer#accelerate-fine-tuning-2x-using-unsloth)!\n",
"7. `ChatML` for ShareGPT datasets, [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing)\n",
"8. Text completions like novel writing [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)\n",
"9. Gemma 6 trillion tokens is 2.5x faster! [free Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing)\n",
"\n",
"