Spaces: Running on Zero
Update app.py
app.py CHANGED
```diff
@@ -12,14 +12,9 @@ DEFAULT_MAX_NEW_TOKENS = 1024
 MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))
 
 DESCRIPTION = """\
-# 
+# Mera Mixture Chat
 
-This Space demonstrates model [
-
-🔎 For more details about the Llama 2 family of models and how to use them with `transformers`, take a look [at our blog post](https://huggingface.co/blog/llama2).
-
-🔨 Looking for an even more powerful model? Check out the large [**70B** model demo](https://huggingface.co/spaces/ysharma/Explore_llamav2_with_TGI).
-🐇 For a smaller model that you can run on many GPUs, check our [7B model demo](https://huggingface.co/spaces/huggingface-projects/llama-2-7b-chat).
+This Space demonstrates model [mera-mix-4x7B](https://huggingface.co/meraGPT/mera-mix-4x7B) by meraGPT, feel free to play with it!
 
 """
 
@@ -27,8 +22,7 @@ LICENSE = """
 <p/>
 
 ---
-
-this demo is governed by the original [license](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat/blob/main/LICENSE.txt) and [acceptable use policy](https://huggingface.co/spaces/huggingface-projects/llama-2-13b-chat/blob/main/USE_POLICY.md).
+created by https://meraGPT.com
 """
 
 if not torch.cuda.is_available():
```
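The hunks only touch the DESCRIPTION and LICENSE strings; the unchanged context lines (MAX_INPUT_TOKEN_LENGTH and the torch.cuda.is_available() check) hint at how the rest of app.py consumes them. Below is a minimal sketch of that typical usage, assuming the Space follows the common transformers chat-app pattern; the model id, helper name, and chat-template call are illustrative assumptions, not part of this commit.

```python
# Sketch only (assumptions, not lines from this commit): typical use of the
# constants visible in the diff context in a transformers-based chat Space.
import os

import torch
from transformers import AutoTokenizer

MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))

if not torch.cuda.is_available():
    # On a CPU-only Space the UI still loads; generation is typically disabled
    # and a notice is appended to DESCRIPTION at this point.
    pass

# Assumed model id, matching the card linked in DESCRIPTION.
tokenizer = AutoTokenizer.from_pretrained("meraGPT/mera-mix-4x7B")

def build_input_ids(chat_history: list[dict]) -> torch.Tensor:
    """Tokenize the conversation and keep only the most recent
    MAX_INPUT_TOKEN_LENGTH tokens so long chats fit the model's context."""
    input_ids = tokenizer.apply_chat_template(chat_history, return_tensors="pt")
    if input_ids.shape[1] > MAX_INPUT_TOKEN_LENGTH:
        input_ids = input_ids[:, -MAX_INPUT_TOKEN_LENGTH:]
    return input_ids
```

Trimming from the left keeps the most recent turns, which is the usual choice for chat Spaces since the oldest turns matter least.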