Update README.md
README.md CHANGED
@@ -188,8 +188,7 @@ Thank you to all my generous patrons and donaters!
 
 # Original model card: NousResearch's Redmond Puffin 13B
 
-
-
+![puffin](https://i.imgur.com/R2xTHMb.png)
 
 ## **Redmond-Puffin-13b (Currently available as a Preview edition)**
 
@@ -221,13 +220,15 @@ The model follows the Vicuna ShareGPT prompt format:
 
 ## Notable Features:
 
-
+- The first Llama-2 based fine-tuned model released by Nous Research.
+
+- Ability to recall information up to late 2022 without internet access. (ChatGPT's cutoff date is in 2021.)
 
-- Pretrained on 2 trillion tokens of text.
+- Pretrained on 2 trillion tokens of text. (This is double the amount of most open LLMs.)
 
 - Pretrained with a context length of 4096 tokens, and fine-tuned on a significant amount of multi-turn conversations reaching that full token limit.
 
-
+- The first commercially available language model released by Nous Research.
 
 ## Current Limitations
 
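The second hunk's context line notes that the model follows the Vicuna ShareGPT prompt format. As a minimal sketch of what assembling such a prompt can look like: the `build_vicuna_prompt` helper and the exact `USER:`/`ASSISTANT:` role tags below are assumptions based on the common Vicuna v1.1 convention, not details quoted from this README.

```python
def build_vicuna_prompt(turns):
    """Join (role, text) turns into a Vicuna-style prompt string.

    turns: list of (role, text) pairs, role being 'user' or 'assistant'.
    Role tags are the commonly used Vicuna v1.1 convention (an assumption,
    not taken from this model card).
    """
    tag = {"user": "USER", "assistant": "ASSISTANT"}
    parts = [f"{tag[role]}: {text}" for role, text in turns]
    # A trailing bare "ASSISTANT:" cues the model to generate the next reply.
    return "\n".join(parts) + "\nASSISTANT:"

prompt = build_vicuna_prompt([("user", "What is Redmond-Puffin-13b?")])
```

Consult the model card's own prompt-template section for the authoritative format before using a helper like this in practice.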