Dataset: databricks-dolly-15k
Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas
License: mit
Commit a9f8353 by mike-conover-db (1 parent: c736d77)

Initial commit.

Files changed (1): README.md added (+35 lines)

---
license: mit
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.

[LEGAL]
This dataset is licensed for commercial use ...
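
The records can be pulled down with either of the libraries listed above. The sketch below assumes the dataset is published on the Hugging Face Hub as `databricks/databricks-dolly-15k` with `instruction`, `context`, `response`, and `category` fields; neither the hub id nor the field names are stated in this README, so treat them as assumptions.

```python
# Minimal loading sketch. The hub id and field names are assumptions
# not spelled out in this README; adjust them to the actual dataset card.
from datasets import load_dataset

dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

# Each record is expected to carry an instruction, an optional reference
# context, the human-written response, and its behavioral category.
example = dataset[0]
print(example["category"], "->", example["instruction"][:80])

# The same records can also be inspected as a pandas DataFrame.
df = dataset.to_pandas()
print(df["category"].value_counts())
```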

# Dataset Overview
Databricks employees were invited to create prompt / response pairs using Google Forms in each of eight different instruction categories, including the seven outlined above as well as an open-ended free-form category. Employees were instructed to avoid using information from any source on the web with the exception of Wikipedia, and explicitly instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the types of questions and instructions appropriate to each category.

Halfway through the data generation process, participants were given the option of answering questions posed by other employees. They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.

For questions in the XYZ categories, employees were asked to provide a reference passage copied from Wikipedia. This text may contain bracketed citations, which we recommend users remove for downstream applications.
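
Because the copied passages can carry bracketed citation markers such as `[1]` or `[citation needed]`, a simple regular-expression pass is one way to strip them. This is only a sketch; which marker styles actually appear in the data is an assumption, and the pattern may need to be extended.

```python
import re

# Remove bracketed Wikipedia-style citation markers, e.g. "[1]", "[12]",
# or "[citation needed]", from a reference passage. The marker styles
# present in the data are an assumption; extend the pattern as needed.
CITATION_PATTERN = re.compile(r"\[(?:\d+|citation needed)\]")

def strip_citations(passage: str) -> str:
    return CITATION_PATTERN.sub("", passage).strip()

print(strip_citations("Paris is the capital of France.[1][citation needed]"))
# -> Paris is the capital of France.
```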

# Intended Uses
While immediately valuable for instruction fine-tuning large language models, as a corpus of human-generated instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation using the methods outlined in the [Self-Instruct](https://arxiv.org/abs/2212.10560) paper. For example, employee-generated prompts could be submitted as few-shot examples to a large open language model to generate a corpus of millions of examples of instructions in each of the respective InstructGPT categories.
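
As an illustration of that workflow, a handful of human-written records could be formatted into a few-shot prompt and submitted to an open model. The sketch below only builds the prompt text; the example records, the field names, and the downstream model call are all placeholders rather than part of this dataset.

```python
import random

# Hypothetical records using the assumed instruction/response field names.
records = [
    {"instruction": "Name three uses for a paperclip.", "response": "..."},
    {"instruction": "Summarize the water cycle.", "response": "..."},
    {"instruction": "Brainstorm themes for a team offsite.", "response": "..."},
]

def build_few_shot_prompt(examples, k=3):
    """Format k sampled instruction/response pairs as few-shot context,
    ending with an open slot for the model to propose a new instruction."""
    shots = random.sample(examples, min(k, len(examples)))
    blocks = [f"Instruction: {ex['instruction']}\nResponse: {ex['response']}"
              for ex in shots]
    return "\n\n".join(blocks) + "\n\nInstruction:"

prompt = build_few_shot_prompt(records)
# `prompt` would then be sent to an open language model of choice; the
# model call itself is out of scope for this sketch.
print(prompt)
```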

Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to restate each prompt or short response, with the resulting text associated with the respective ground-truth sample. Such an approach might provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from these synthetic datasets.
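
A minimal sketch of that augmentation idea is below: each paraphrased prompt is attached to the original ground-truth response. The `paraphrase` function is a hypothetical stand-in for whatever paraphrasing model is used, not something shipped with this dataset.

```python
def paraphrase(text: str) -> str:
    # Hypothetical placeholder for a real paraphrasing model; it returns
    # the input unchanged so the sketch runs end to end.
    return text

def augment(records):
    """Yield each original record plus a copy whose prompt is paraphrased
    while the ground-truth response is left unchanged."""
    for rec in records:
        yield rec
        yield {**rec, "instruction": paraphrase(rec["instruction"])}

augmented = list(augment([{"instruction": "Define entropy.", "response": "..."}]))
print(len(augmented))  # 2: the original record and its paraphrased variant
```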

# Dataset Limitations
[LEGAL]