---
license: cc-by-nc-4.0
language:
- en
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files: "discord_logs.json"
- config_name: unsquashed
  data_files: "discord_logs_unsquashed.json"
- config_name: two_users
  data_files: "discord_logs_two_users.json"
- config_name: split_threads
  data_files: "discord_logs_split_threads.json"
- config_name: anonymized
  data_files: "discord_logs_anonymized.json"
---
This dataset comprises roleplay chat conversations scraped from several Discord RP fandom servers. The conversations have been split by day, on the assumption that most long-form roleplays are started, continued, and completed within a single day.

The original dataset consisted of ~90K samples. Light filtering stripped that down to ~18K samples, stricter filtering to ~8K samples, and the strictest filtering to ~2K samples.

Some effort was made to remove OOC chatter, links, and other miscellaneous fluff, but more work remains. This isn't a "completed" dataset so much as a test of whether the gathered data is conducive to training LLMs for roleplay purposes. If it proves useful, I will continue to scrape more data.

This repository contains several files:
* `discord_rp_with_token_counts.json` - The original dataset in all its unprocessed glory. ~90K items. Average token length across all items: ~143 tokens.
* `125_tokens_10_messages_discord_rp.json` (Strictest) - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages. Mostly unprocessed. Average length: 205 tokens.
* `80_tokens_6_messages_discord_rp.json` (Stricter) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages. Mostly unprocessed. Average length: 181 tokens. This file is a superset of the strictest file, so use one or the other, not both.
* `80_tokens_3_messages_discord_rp.json` (Light) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 3 messages. Mostly unprocessed. Average length: 202 tokens. This file is a superset of the stricter file, so use one or the other, not both.
* `opencai_rp.json` - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed. Contains character descriptions, a summary, a scene description, and genre tags generated by `gpt-3.5-turbo-16k`.
* `opencai_rp_metharme.json` - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed, filtered down to 1,229 samples, and converted to the metharme format.
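The filtering described above reduces to two thresholds per conversation: a minimum average token length per message and a minimum message count. A minimal sketch of that logic is below; the `filter_conversations` helper and the whitespace-split token count are my own stand-ins (the actual filtering presumably used a real tokenizer, which is why the stated averages are in tokens, not words).

```python
def filter_conversations(samples, min_avg_tokens, min_messages):
    """Keep samples that meet an average-token-length and message-count floor.

    Whitespace splitting approximates token counts here; the dataset's
    reported figures come from an actual tokenizer.
    """
    kept = []
    for sample in samples:
        messages = [turn["message"] for turn in sample["conversations"]]
        if len(messages) < min_messages:
            continue
        avg_tokens = sum(len(m.split()) for m in messages) / len(messages)
        if avg_tokens >= min_avg_tokens:
            kept.append(sample)
    return kept
```

With thresholds of (80, 3), (80, 6), and (125, 10), this would reproduce the Light, Stricter, and Strictest tiers respectively.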

Explanation of Properties:
* `timestamp` - Date of the interaction in YYYY-MM-DD format.
* `conversations` - The conversation between the users in the chat, represented as a list of dictionaries. Each dictionary is a single utterance with three key-value pairs:
  * `message` - The text of the utterance.
  * `author` - The sender's Discord username.
  * `is_bot` - Whether the message was sent by a bot rather than a human. This was determined by checking whether the author still had a discriminator and therefore isn't 100% accurate.