---
language:
- en
- zh
- fr
- es
- de
- pt
- ru
- it
- ja
- ko
- vi
- ar
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- EleutherAI/pile
---
# RWKV-5 World (Training in Progress)
## I am now uploading the latest checkpoints to https://huggingface.co/BlinkDL/temp (to avoid bloating this repo's git history)
Use the rwkv pip package (0.8.14+) for RWKV-5 inference: https://pypi.org/project/rwkv/ (a minimal usage sketch follows below)
GUI: https://github.com/josStorer/RWKV-Runner (see Releases)
How it works: https://twitter.com/BlinkDL_AI/status/1685230712247795713
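
Below is a minimal inference sketch using the rwkv pip package. The checkpoint path is a placeholder (download a RWKV-5 World `.pth` file first), and the sampling parameters are illustrative defaults, not tuned values:

```python
# Minimal RWKV-5 inference sketch with the rwkv pip package (0.8.14+).
# "path/to/RWKV-5-World-checkpoint" is a placeholder: point `model=` at a
# downloaded RWKV-5 World checkpoint (path given without the .pth extension).
import os
os.environ["RWKV_JIT_ON"] = "1"   # must be set before importing rwkv
os.environ["RWKV_CUDA_ON"] = "0"  # set to "1" to compile the CUDA kernel

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

model = RWKV(model="path/to/RWKV-5-World-checkpoint", strategy="cuda fp16")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")  # World-series tokenizer

args = PIPELINE_ARGS(temperature=1.0, top_p=0.7, top_k=100)
print(pipeline.generate("User: Hello\n\nAssistant:", token_count=200, args=args))
```

On a CPU-only machine, use `strategy="cpu fp32"` instead of `"cuda fp16"`.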
## Model Description
RWKV-5 is trained on 100+ world languages (70% English, 15% multilingual, 15% code).
World = Some_Pile + Some_SlimPajama + Some_StarCoder + Some_OSCAR + All_Wikipedia + All_ChatGPT_Data_I_can_find
RWKV-5 training: set --my_testing "r4" in the latest RWKV-LM v4neo code (an example invocation follows below).
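
A rough launch sketch, assuming the argument names in RWKV-LM's v4neo train.py; everything except --my_testing is an illustrative placeholder, not a recommended configuration:

```bash
# Hypothetical RWKV-5 training invocation; model sizes, paths, and trainer
# flags below are placeholders to adapt to your own data and hardware.
cd RWKV-LM/RWKV-v4neo
python train.py --my_testing "r4" \
  --data_file path/to/data --data_type binidx --vocab_size 65536 \
  --n_layer 24 --n_embd 2048 --ctx_len 4096 --micro_bsz 8 \
  --accelerator gpu --devices 1 --precision bf16
```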
World v1 = 0.59T tokens
World v2 = 1.12T tokens
Imagine what happens when we use more data :)