Update README.md
README.md
@@ -30,7 +30,9 @@ This model was fine-tuned using [Predibase](https://predibase.com/), the first l
I fine-tuned base Llama-2-7b using LoRA with 4 bit quantization on a single T4 GPU, which cost approximately $3 to train
on Predibase.

-Dataset: https://github.com/sahil280114/codealpaca
+Dataset and training parameters are borrowed from: https://github.com/sahil280114/codealpaca,
+but all of these parameters including DeepSpeed can be directly used with [Ludwig](https://ludwig.ai/latest/), the open-source
+toolkit for LLMs that Predibase is built on.

To use these weights:
```python
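# A minimal sketch, assuming the standard transformers + peft loading path for a LoRA
# adapter on top of 4-bit quantized Llama-2-7b; the repo ids below are placeholders,
# not values taken from the original README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"                 # assumed base model repo
adapter_id = "your-username/llama-2-7b-codealpaca-lora"    # hypothetical id; replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,                                 # matches the 4-bit training setup
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)       # attach the LoRA weights

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```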
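Regarding the note above that the same training parameters can be reused with Ludwig: the sketch below shows what such a run could look like through Ludwig's Python API. The config keys follow Ludwig's LLM fine-tuning format, but the dataset path, feature names, and trainer values are assumptions, not the exact parameters used for this model.

```python
# Hypothetical Ludwig fine-tuning sketch: LoRA adapter over a 4-bit quantized base model.
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",
    "quantization": {"bits": 4},                            # 4-bit loading, as in the single-T4 run
    "adapter": {"type": "lora"},                            # LoRA fine-tuning
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "output", "type": "text"}],
    "trainer": {"type": "finetune", "epochs": 3, "batch_size": 1},
}

model = LudwigModel(config=config)
model.train(dataset="code_alpaca_20k.json")  # placeholder path to the Code Alpaca dataset
```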