{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "IqM-T1RTzY6C" }, "source": [ "To run this, press \"*Runtime*\" and press \"*Run all*\" on a **free** Tesla T4 Google Colab instance!\n", "
\n", "\n", "To install Unsloth on your own computer, follow the installation instructions on our Github page [here](https://github.com/unslothai/unsloth#installation-instructions---conda).\n", "\n", "You will learn how to do [data prep](#Data), how to [train](#Train), how to [run the model](#Inference), & [how to save it](#Save) (eg for Llama.cpp).\n", "\n", "**[NEW] Llama-3 8b is trained on a crazy 15 trillion tokens! Llama-2 was 2 trillion.**\n", "\n", "Use our [Llama-3 8b Instruct](https://colab.research.google.com/drive/1XamvWYinY6FOSX9GLvnqSjjsNflxdhNc?usp=sharing) notebook for conversational style finetunes." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2eSvM9zX_2d3" }, "outputs": [], "source": [ "%%capture\n", "# Installs Unsloth, Xformers (Flash Attention) and all other packages!\n", "!pip install \"unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git\"\n", "!pip install --no-deps \"xformers<0.0.27\" \"trl<0.9.0\" peft accelerate bitsandbytes" ] }, { "cell_type": "markdown", "metadata": { "id": "r2v_X2fA0Df5" }, "source": [ "* We support Llama, Mistral, Phi-3, Gemma, Yi, DeepSeek, Qwen, TinyLlama, Vicuna, Open Hermes etc\n", "* We support 16bit LoRA or 4bit QLoRA. Both 2x faster.\n", "* `max_seq_length` can be set to anything, since we do automatic RoPE Scaling via [kaiokendev's](https://kaiokendev.github.io/til) method.\n", "* With [PR 26037](https://github.com/huggingface/transformers/pull/26037), we support downloading 4bit models **4x faster**! [Our repo](https://huggingface.co/unsloth) has Llama, Mistral 4bit models.\n", "* [**NEW**] We make Phi-3 Medium / Mini **2x faster**! See our [Phi-3 Medium notebook](https://colab.research.google.com/drive/1hhdhBa1j_hsymiW9m-WzxQtgqTH_NHqi?usp=sharing)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 317, "referenced_widgets": [ "33895f09c4e64e6684406cdcd5f563fd", "0445e5c28e524074963f64a149bf3cfa", "3b46850152f945b4bcee867a9ff14c2a", "073e484f01db4e649b66db2072827f72", "47f09df713a74b0cafbfdc7c2a0e85c7", "6554dd4fae784997ba07a35bc07edee3", "37feb436305b455c86729a84089c6aa5", "1de19f5998674812a7ab5f685a45ade4", "247fff3855a745d4937852de9b6f9843", "4e0ec430c99345f8b60e1659295ea2a2", "3907c9b53e5340ba85d51438d2ae266f", "55d6eda3c12a4449af7763ccb7a49f00", "d3bd08957d1c4879952af4cf693c17b8", "72fb43b649e0439c81b4e53b0823c1eb", "47ab2cae5911474d8aca8c094a898d23", "7f9dddd6dee045b38c05cf939b2789c8", "66be1b3de29b4e909ce747a372c7c008", "ec2ab0f18ec0464c9c7d7ce78472c659", "1d23ff49c2fb4e368928c4deedae1e76", "a97b4621e6f44fddaf3ddafe9f46152f", "542dc10170534e7196810310356e1b20", "80baff9c0fdb4374b37896550cc220dc", "b4fc4ce48f62475a9107f913295ff7cf", "a2436af0db914999b2a8c2c18afbbf73", "3ae8111fe0b74b8ea424cfebc24632b9", "262fcd0f446f4ab69d92905d6e8a2602", "a3d25194f86442eda20e09f401bcd3c1", "fafd843122bb409fbd146c5dbac3cb78", "28baa1c95e954ad6a666d3793c2a882d", "96a4c7f788a549b1b19ea2f13f825ca1", "c8bb106712c84c108cf0749ee63ff75d", "6fb140805e9f4bdf898358644d8d734a", "985c9a53ad8b4fbe9cf0e1d05e9ff56f", "b058085bbeca46caa4896edfe860a623", "7243abd22e1e4642b71d17d14812ca70", "3a0cf2b2f57342808a971d98ab1b7972", "d715ad9cb0584b4686a237af22561b4e", "4443f46dafc84c20a44c7785724af67d", "d4ef604990454fd1923318f7da4ff8b9", "b0c327bc5c6f4eb1a6b802b41d557874", "92e3e3bda7a84b95bce5e0d00cd64beb", "46d99c8975c14aa5ad2f50cf815db37a", "62ca245a446941109da866001624d06b", "4606cd60b1534e42bbaf704623946c36", "0e40ad039ec34cd2aa27d028fc36ee92", 
"5e3da8a2d6e44bef96e3876fe4823122", "9ba26b96c0074174a03226f756d3632d", "7ff52d7bc0f44bf29b17f96468e91bef", "c4130e0aa98d40a5ad4bc874dc4f98c3", "fe99021bf8fe4b6e8436cc5a89d7bdbf", "7e138dcd88e94d7b83cf7b13c1035e4a", "b0e36ff8a7674b438cbeb246b6a3a9c0", "6bf7f4b5fb0e4d3b8784f24c001d50b1", "40d3a07645024300a7535d1e89aef589", "1c811454dbfc4e6bb1ee2b2803b25103", "16ef819900f445838f6176cf2ed8faa1", "16c27aede36546cb86fd184b7add6fa4", "36762799aea142ddb0b62560eae1313e", "880bd546988b482389f94bb98445a2ec", "4664462e2e0741cba072945c6e7a7db8", "2a6d76700c8347c28d76d0e59375d1f6", "5ba9a1a9db8b4c0bbd314c1eb0d57c38", "87556af1ea8f454db21ce343cdc68bef", "778f09d2dbc94865841b7efb92e7059a", "a6b3ac77f4924620b5943682d9ca6533", "6c19c528f3384c23b7b505b13e96e466" ] }, "id": "QmUBVEnvCDJv", "outputId": "62e39098-c428-4111-8f24-897c2258d3bc" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "33895f09c4e64e6684406cdcd5f563fd", "version_major": 2, "version_minor": 0 }, "text/plain": [ "config.json: 0%| | 0.00/1.20k [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stdout", "output_type": "stream", "text": [ "==((====))== Unsloth: Fast Llama patching release 2024.7\n", " \\\\ /| GPU: Tesla T4. Max memory: 14.748 GB. Platform = Linux.\n", "O^O/ \\_/ \\ Pytorch: 2.3.0+cu121. CUDA = 7.5. CUDA Toolkit = 12.1.\n", "\\ / Bfloat16 = FALSE. FA [Xformers = 0.0.26.post1. FA2 = False]\n", " \"-____-\" Free Apache license: http://github.com/unslothai/unsloth\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "55d6eda3c12a4449af7763ccb7a49f00", "version_major": 2, "version_minor": 0 }, "text/plain": [ "model.safetensors: 0%| | 0.00/5.70G [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b4fc4ce48f62475a9107f913295ff7cf", "version_major": 2, "version_minor": 0 }, "text/plain": [ "generation_config.json: 0%| | 0.00/172 [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b058085bbeca46caa4896edfe860a623", "version_major": 2, "version_minor": 0 }, "text/plain": [ "tokenizer_config.json: 0%| | 0.00/50.6k [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "0e40ad039ec34cd2aa27d028fc36ee92", "version_major": 2, "version_minor": 0 }, "text/plain": [ "tokenizer.json: 0%| | 0.00/9.09M [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "16ef819900f445838f6176cf2ed8faa1", "version_major": 2, "version_minor": 0 }, "text/plain": [ "special_tokens_map.json: 0%| | 0.00/464 [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from unsloth import FastLanguageModel\n", "import torch\n", "max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!\n", "dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+\n", "load_in_4bit = True # Use 4bit quantization to reduce memory usage. 
"\n", "# 4bit pre-quantized models we support for 4x faster downloading + no OOMs.\n", "fourbit_models = [\n", " \"unsloth/mistral-7b-v0.3-bnb-4bit\", # New Mistral v3 2x faster!\n", " \"unsloth/mistral-7b-instruct-v0.3-bnb-4bit\",\n", " \"unsloth/llama-3-8b-bnb-4bit\", # Llama-3 15 trillion tokens model 2x faster!\n", " \"unsloth/llama-3-8b-Instruct-bnb-4bit\",\n", " \"unsloth/llama-3-70b-bnb-4bit\",\n", " \"unsloth/Phi-3-mini-4k-instruct\", # Phi-3 2x faster!\n", " \"unsloth/Phi-3-medium-4k-instruct\",\n", " \"unsloth/mistral-7b-bnb-4bit\",\n", " \"unsloth/gemma-7b-bnb-4bit\", # Gemma 2.2x faster!\n", "] # More models at https://huggingface.co/unsloth\n", "\n", "model, tokenizer = FastLanguageModel.from_pretrained(\n", " model_name = \"unsloth/llama-3-8b-bnb-4bit\",\n", " max_seq_length = max_seq_length,\n", " dtype = dtype,\n", " load_in_4bit = load_in_4bit,\n", " # token = \"hf_\", # use one if using gated models like meta-llama/Llama-2-7b-hf\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "SXd9bTZd1aaL" }, "source": [ "We now add LoRA adapters, so we only need to update 1 to 10% of all parameters!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "6bZsfBuZDeCL", "outputId": "743944c4-12d2-4e2e-9551-4d0a7ce1b3a9" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Unsloth 2024.7 patched 32 layers with 32 QKV layers, 32 O layers and 32 MLP layers.\n" ] } ], "source": [ "model = FastLanguageModel.get_peft_model(\n", " model,\n", " r = 16, # Choose any number > 0! Suggested: 8, 16, 32, 64, 128\n", " target_modules = [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\",\n", " \"gate_proj\", \"up_proj\", \"down_proj\",],\n", " lora_alpha = 16,\n", " lora_dropout = 0, # Supports any, but = 0 is optimized\n", " bias = \"none\", # Supports any, but = \"none\" is optimized\n", " # [NEW] \"unsloth\" uses 30% less VRAM and fits 2x larger batch sizes!\n", " use_gradient_checkpointing = \"unsloth\", # True or \"unsloth\" for very long context\n", " random_state = 3407,\n", " use_rslora = False, # We support rank-stabilized LoRA\n", " loftq_config = None, # And LoftQ\n", ")" ] }, { "cell_type": "markdown", "metadata": { "id": "vITh0KVJ10qX" }, "source": [ "\n", "### Data Prep\n", "We now use the [bangla-alpaca-orca](https://huggingface.co/datasets/BanglaLLM/bangla-alpaca-orca) dataset from BanglaLLM, a Bangla instruction-following dataset of about 172K examples, formatted with the original [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) prompt template. You can replace this code section with your own data prep.\n", "\n", "**[NOTE]** To train only on completions (ignoring the user's input), read TRL's docs [here](https://huggingface.co/docs/trl/sft_trainer#train-on-completions-only); a minimal sketch is shown below.\n", "\n", "**[NOTE]** Remember to add the **EOS_TOKEN** to the tokenized output! Otherwise you'll get infinite generations!\n", "\n", "If you want to use the `llama-3` template for ShareGPT datasets, try our conversational [notebook](https://colab.research.google.com/drive/1XamvWYinY6FOSX9GLvnqSjjsNflxdhNc?usp=sharing).\n", "\n", "For text completions like novel writing, try this [notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing)."
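, "\n", "Below is a minimal sketch of completion-only training, assuming TRL's `DataCollatorForCompletionOnlyLM` (shipped with the `trl<0.9.0` version installed above). The `\"### Response:\"` marker matches the `alpaca_prompt` template defined in the next cell; everything before it is masked out of the loss.\n", "\n", "```python\n", "from trl import DataCollatorForCompletionOnlyLM\n", "\n", "# Only tokens after the response marker contribute to the loss;\n", "# the instruction and input are masked out of the labels.\n", "collator = DataCollatorForCompletionOnlyLM(\n", "    response_template = \"### Response:\",\n", "    tokenizer = tokenizer,\n", ")\n", "\n", "# Pass data_collator = collator to the SFTTrainer in the training\n", "# section below, and keep packing = False (masking requires it).\n", "```\n", "\n", "After running the data prep cell, you can also sanity check one formatted example with `print(dataset[0][\"text\"])` and confirm it ends with the EOS token."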
] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 177, "referenced_widgets": [ "2f43333451054b3a81602f20b9ad1000", "6010e32dd91047c48afbe13eb6de3595", "ec7c2d0e03d444378dd96ac90b0ee759", "ceff5d4cb783406299fa8c136415976e", "71827793051c44f0b8aa7737e5dda0dc", "93ca967d9f5a43cc95e06bdaab341963", "36a5d81916ad4194a1fc169037d53b45", "cc17de8b4a9e4036bcbe53e45f62dffb", "c453f53075924ba0aca4dab0d00275a7", "fa9e20693fa949d08dc4d3fba7f7bda2", "9efefaf8db344b1b9519b91241481ce3", "b15a4d3ba9844225a6877a1d0e9336b3", "0e6babfc1b2e4221b01d01689602db9b", "2765ffc2c0214b9e9f79b283d80446e2", "b9b55b7bbc0249b2999fb08d0251db9d", "325a003f04d843df94a1b2e0312e06b1", "fe0d8b18c38b484794f72d3f2a4ee32f", "c8a2974771d842e4b748fdfffb9eb919", "b2f9babcc6b842358b5fe5b4ba335ad7", "286538ed23864103947bc362e677c70c", "1e28af0eb5e342b086e52f6258df3633", "46fe84463f14477c89005a6532e8d10d", "38adcfbc030748719d9e37b11813b9dd", "64b89675756045a78e3cc183b1f72d8d", "098c94df9b8c441dad02ff8b309e208f", "e6c84d21b17b4615ae6af39a13cff09e", "a2a924461a064c09a8aa83a903513aca", "7854886150904782afcc5ffccdfc72a0", "b615d3fa5dc941ff96c943540a2c3870", "6f668baf9d3f4e0f85d1a5023da1af27", "843984997cef4c0f9999445c195d7f73", "cb4f137b6f024656b534f8e51ffe50d0", "185467553ac9481eb6d56c73f0ee4d61", "b6de4ef3f2164f92a86bc5777dfa5658", "4bb0887e1b7c4f38a510fb36ad9fb58d", "200170cca8464e5aa25038605de5c813", "e423c91917404fe3b6ef0a31d764abc8", "f2d71e9ec7ab443292ddfe62f779a06a", "6de52155b5084ca3b5a22138f2afca22", "963e4bbd59c5467388dbb3f534e26ef6", "444ee42f095c47fda7b13587c4d82bec", "9df36cfe31784b999419c9a8682c7d9b", "975d0500395b40f397e535794ecb9efb", "d6c914e55ba542dba80852641777ffba", "9f34edb2574d4554b5b6deac493ea171", "6485e9a7ff4c4ee0be483103eb169856", "05543bc6c3b2464db3ae37a062ff35cc", "9575228eb4b34611be18725c11606205", "ded47baa43b54346bc38030f6b306bba", "82db31fa6e494025bdb7391d8559fadf", "2f96eaa6ecef4744af8829f92875750b", "d254414578a84b7d89051f485fa04bd5", "7eb27d398a33469f928ebc2ff666c276", "81a2fe74684f42d6ac6c86e0b788ab9f", "89c7b4190ae047ac82eb432c3812b80b" ] }, "id": "LjY75GoYUCB8", "outputId": "871d3a98-6b6f-4b4e-f491-3187ddb50813" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "2f43333451054b3a81602f20b9ad1000", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading readme: 0%| | 0.00/450 [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b15a4d3ba9844225a6877a1d0e9336b3", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading data: 0%| | 0.00/158M [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "38adcfbc030748719d9e37b11813b9dd", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Downloading data: 0%| | 0.00/144M [00:00, ?B/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "b6de4ef3f2164f92a86bc5777dfa5658", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Generating train split: 0%| | 0/172026 [00:00, ? examples/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "9f34edb2574d4554b5b6deac493ea171", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Map: 0%| | 0/172026 [00:00, ? 
examples/s]" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "alpaca_prompt = \"\"\"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n", "\n", "### Instruction:\n", "{}\n", "\n", "### Input:\n", "{}\n", "\n", "### Response:\n", "{}\"\"\"\n", "\n", "EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN\n", "def formatting_prompts_func(examples):\n", " instructions = examples[\"instruction\"]\n", " inputs = examples[\"input\"]\n", " outputs = examples[\"output\"]\n", " texts = []\n", " for instruction, input, output in zip(instructions, inputs, outputs):\n", " # Must add EOS_TOKEN, otherwise your generation will go on forever!\n", " text = alpaca_prompt.format(instruction, input, output) + EOS_TOKEN\n", " texts.append(text)\n", " return { \"text\" : texts, }\n", "pass\n", "\n", "from datasets import load_dataset\n", "dataset = load_dataset(\"BanglaLLM/bangla-alpaca-orca\", split = \"train\")\n", "dataset = dataset.map(formatting_prompts_func, batched = True,)" ] }, { "cell_type": "markdown", "metadata": { "id": "idAEIeSQ3xdS" }, "source": [ "\n", "### Train the model\n", "Now let's use Huggingface TRL's `SFTTrainer`! More docs here: [TRL SFT docs](https://huggingface.co/docs/trl/sft_trainer). We do 60 steps to speed things up, but you can set `num_train_epochs=1` for a full run, and turn off `max_steps=None`. We also support TRL's `DPOTrainer`!" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 67, "referenced_widgets": [ "4c49f0cc71b841deae67b2fc8bcbd4f9", "9311cbe24c50407fa0b6d023cd9fd0af", "f32cd58aa555487692de6123cc084d27", "ac3769f04bdb4471b088d71966709e6a", "11225c6f40d7402baab08988eb2408c5", "9221d8c6e23e4dd3ad4b19ef0d20f6ef", "8f4e746d95b94993bc37367beb8d340f", "14c9396eceaa451b9d10a3a70bd870fd", "8fe7c1dfc36c476482001d2f499027a9", "d93551db6f844b3babe9472210ffa133", "275c797e689e43dca3a43a554f43bab7" ] }, "id": "95_Nn-89DhsL", "outputId": "ba24ce50-6d08-4e0e-ad33-73c767a3603a" }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "4c49f0cc71b841deae67b2fc8bcbd4f9", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Map (num_proc=2): 0%| | 0/172026 [00:00, ? 
examples/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "max_steps is given, it will override any value given in num_train_epochs\n" ] } ], "source": [ "from trl import SFTTrainer\n", "from transformers import TrainingArguments\n", "from unsloth import is_bfloat16_supported\n", "\n", "trainer = SFTTrainer(\n", " model = model,\n", " tokenizer = tokenizer,\n", " train_dataset = dataset,\n", " dataset_text_field = \"text\",\n", " max_seq_length = max_seq_length,\n", " dataset_num_proc = 2,\n", " packing = False, # Can make training 5x faster for short sequences.\n", " args = TrainingArguments(\n", " per_device_train_batch_size = 2,\n", " gradient_accumulation_steps = 4,\n", " warmup_steps = 5,\n", " max_steps = 60,\n", " learning_rate = 2e-4,\n", " fp16 = not is_bfloat16_supported(),\n", " bf16 = is_bfloat16_supported(),\n", " logging_steps = 1,\n", " optim = \"adamw_8bit\",\n", " weight_decay = 0.01,\n", " lr_scheduler_type = \"linear\",\n", " seed = 3407,\n", " output_dir = \"outputs\",\n", " ),\n", ")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "cellView": "form", "colab": { "base_uri": "https://localhost:8080/" }, "id": "2ejIt2xSNKKp", "outputId": "f82cc7f8-b257-4e87-e6bc-2e8580a07cf8" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "GPU = Tesla T4. Max memory = 14.748 GB.\n", "5.594 GB of memory reserved.\n" ] } ], "source": [ "#@title Show current memory stats\n", "gpu_stats = torch.cuda.get_device_properties(0)\n", "start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3)\n", "max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3)\n", "print(f\"GPU = {gpu_stats.name}. Max memory = {max_memory} GB.\")\n", "print(f\"{start_gpu_memory} GB of memory reserved.\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 1000 }, "id": "yqxqAZ7KJ4oL", "outputId": "1bee7c16-523f-4d18-ba64-b71cce7a54a1" }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "==((====))== Unsloth - 2x faster free finetuning | Num GPUs = 1\n", " \\\\ /| Num examples = 172,026 | Num Epochs = 1\n", "O^O/ \\_/ \\ Batch size per device = 2 | Gradient Accumulation steps = 4\n", "\\ / Total batch size = 8 | Total steps = 60\n", " \"-____-\" Number of trainable parameters = 41,943,040\n" ] }, { "data": { "text/html": [ "\n", "Step | \n", "Training Loss | \n", "
"---|---\n",
"1 | 0.541900\n", "2 | 0.539500\n", "3 | 0.645700\n", "4 | 0.658600\n", "5 | 0.582600\n",
"6 | 0.501800\n", "7 | 0.545300\n", "8 | 0.565200\n", "9 | 0.596400\n", "10 | 0.476300\n",
"11 | 0.632000\n", "12 | 0.476300\n", "13 | 0.609900\n", "14 | 0.550300\n", "15 | 0.568000\n",
"16 | 0.469300\n", "17 | 0.567600\n", "18 | 0.602300\n", "19 | 0.489600\n", "20 | 0.464300\n",
"21 | 0.485100\n", "22 | 0.516300\n", "23 | 0.597500\n", "24 | 0.493700\n", "25 | 0.508600\n",
"26 | 0.551900\n", "27 | 0.386700\n", "28 | 0.504400\n", "29 | 0.585600\n", "30 | 0.507200\n",
"31 | 0.389000\n", "32 | 0.494800\n", "33 | 0.513800\n", "34 | 0.617400\n", "35 | 0.702300\n",
"36 | 0.494700\n", "37 | 0.506600\n", "38 | 0.494600\n", "39 | 0.496900\n", "40 | 0.321300\n",
"41 | 0.599100\n", "42 | 0.558100\n", "43 | 0.336700\n", "44 | 0.444500\n", "45 | 0.515100\n",
"46 | 0.355000\n", "47 | 0.365200\n", "48 | 0.503300\n", "49 | 0.521500\n", "50 | 0.570200\n",
"51 | 0.607700\n", "52 | 0.512800\n", "53 | 0.541300\n", "54 | 0.537500\n", "55 | 0.570200\n",
"56 | 0.582000\n", "57 | 0.543300\n", "58 | 0.386400\n", "59 | 0.524000\n", "60 | 0.566700\n"
],
"text/plain": [
"