Update README.md
README.md
CHANGED
@@ -45,12 +45,12 @@ To use these files you need:
 
 Example command:
 ```
-
+./main -m llama-2-70b/ggml/llama-2-70b.ggmlv3.q4_0.bin -gqa 8 -t 13 -p "Llamas are"
 ```
 
 There is no CUDA support at this time, but it should be coming soon.
 
-There is no support in third-party UIs or Python libraries (llama-cpp-python, ctransformers) yet. That will come in due course.
+There is no support in third-party UIs (e.g. text-generation-webui, KoboldCpp) or Python libraries (llama-cpp-python, ctransformers) yet. That will come in due course.
 
 ## Repositories available
 
@@ -64,6 +64,8 @@ There is no support in third-party UIs or Python libraries (llama-cpp-python, ct
 {prompt}
 ```
 
+**Remember that this is a foundation model, not a fine-tuned one. It may not be good at answering questions or following instructions.**
+
 <!-- compatibility_ggml start -->
 ## Compatibility
 
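For readers skimming the diff, the example command it adds can be annotated as below. This is a sketch, not part of the commit: it assumes a GGML-era llama.cpp build (`./main`) in the working directory and the quantized model file already downloaded; the flags themselves are taken verbatim from the command in the diff.

```shell
# Assumes: llama.cpp compiled at a commit with GGML v3 and GQA support,
# and the q4_0 file downloaded to llama-2-70b/ggml/ (paths are illustrative).
#   -m    path to the GGML model file
#   -gqa  grouped-query attention factor (8 for Llama 2 70B)
#   -t    number of CPU threads; tune to your machine
#   -p    prompt text to complete
./main -m llama-2-70b/ggml/llama-2-70b.ggmlv3.q4_0.bin -gqa 8 -t 13 -p "Llamas are"
```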