Update README.md
README.md CHANGED

```diff
@@ -5,6 +5,8 @@ tags:
 license: apache-2.0
 metrics:
 - loss
+language:
+- en
 ---
 ## QLoRA weights using Llama-2-7b for the Code Alpaca Dataset
 
@@ -14,7 +16,7 @@ I fine-tuned base Llama-2-7b using LoRA with 4 bit quantization on a single T4 G
 Dataset: https://github.com/sahil280114/codealpaca
 
 To use these weights:
-```
+```python
 from peft import PeftModel, PeftConfig
 from transformers import AutoModelForCausalLM
 
```