Update README.md
README.md
CHANGED
@@ -16,6 +16,8 @@ model-index:
       name: pass@1
       verified: false
 ---
+## Please note this model is a test, the full finetuned version can be found here: https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder
+_______________________________________________________
 ## GOOGLE COLAB IS A SCAM DO NOT USE THE PAID VERSION
 ## THEY WILL DISCONNECT YOUR RUNTIME BEFORE EVEN 24 HOURS
 https://github.com/googlecolab/colabtools/issues/3451
@@ -66,6 +68,7 @@ import torch
 major_version, minor_version = torch.cuda.get_device_capability()
 # Must install separately since Colab has torch 2.2.1, which breaks packages
 !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
+
 if major_version >= 8:
     # Use this for new GPUs like Ampere, Hopper GPUs (RTX 30xx, RTX 40xx, A100, H100, L40)
     !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes