Norquinal committed on
Commit 7447e16
1 Parent(s): 3c1b4a5

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -7,17 +7,17 @@ size_categories:
 ---
 This dataset comprises roleplay chat conversations scraped from several Discord RP fandom servers. The conversations have been split by day, on the assumption that most long-form roleplays are started, continued, and completed within a single day.
 
-The original dataset consisted of ~70K samples. Light filtering stripped that down to ~6K samples. Stricter filtering stripped it down to ~2K samples.
+The original dataset consisted of ~90K samples. Light filtering stripped that down to ~18K samples. Stricter filtering stripped it down to ~2K samples.
 
 Some effort was made to remove OOC chatter, links, and other miscellaneous fluff, but more work still needs to be done. This isn't a "completed" dataset so much as a test of whether the gathered data is conducive to training LLMs for roleplay purposes. If it proves useful, I will continue to scrape more data.
 
 This repository contains several files:
-* `discord_rp_with_token_counts.json` - The original dataset in all its unprocessed glory. ~70K items. Average token length across all items: ~164.
+* `discord_rp_with_token_counts.json` - The original dataset in all its unprocessed glory. ~90K items. Average token length across all items: ~164.
 * `125_tokens_10_messages_discord_rp.json` - The original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages. Mostly unprocessed.
 * `80_tokens_6_messages_discord_rp.json` - The original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages. Mostly unprocessed. This file is a superset of the previous one, so use one or the other, but not both.
 * `80_tokens_3_messages_discord_rp.json` - The original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 3 messages. Mostly unprocessed. This file is a superset of the previous one, so use one or the other, but not both.
-* `opencai_rp.json` - The original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages, then processed. Contains descriptions of characters, summary, scene, and genre tags provided by `gpt-3.5-turbo-16k`.
-* `opencai_rp_metharme.json` - The original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages, then processed, filtered to 4800 samples, and converted to metharme format.
+* `opencai_rp.json` - The original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed. Contains descriptions of characters, summary, scene, and genre tags provided by `gpt-3.5-turbo-16k`.
+* `opencai_rp_metharme.json` - The original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed, filtered to 1229 samples, and converted to metharme format.
 
 Explanation of Properties:
 * `timestamp` - Date of the interaction in YYYY-MM-DD format
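
The filtered files described above are all produced by the same kind of rule: keep a conversation only if it meets a minimum average token length and a minimum message count (e.g. 125 tokens / 10 messages). A minimal sketch of that predicate follows; the function names, the list-of-strings conversation layout, and the whitespace tokenizer are illustrative assumptions, not the script actually used to build this dataset (token counts here would differ from a real LLM tokenizer's).

```python
# Sketch of the README's filtering rule. Assumed layout: a dataset is a list of
# conversations, and each conversation is a list of message strings.

def average_tokens(conversation):
    """Mean token count per message, using a naive whitespace tokenizer."""
    counts = [len(message.split()) for message in conversation]
    return sum(counts) / len(counts)

def filter_conversations(conversations, min_avg_tokens, min_messages):
    """Keep conversations meeting both thresholds (e.g. 125 tokens / 10 messages)."""
    return [
        conv for conv in conversations
        if len(conv) >= min_messages and average_tokens(conv) >= min_avg_tokens
    ]

# Toy example: one 3-message conversation with token counts 3, 2, 4 (average 3.0),
# so it passes an (avg >= 2, len >= 3) filter.
sample = [["one two three", "four five", "six seven eight nine"]]
kept = filter_conversations(sample, min_avg_tokens=2, min_messages=3)
```

Raising either threshold shrinks the output, which is why the 80-token/3-message file is a superset of the 80-token/6-message file, which in turn is a superset of the 125-token/10-message file.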