HCZhang committed on
Commit 3b22343
1 Parent(s): 17e546e

Update README.md

Files changed (1): README.md +36 -110
README.md CHANGED
@@ -6,7 +6,7 @@ language:
 ---
 # Jellyfish-0-13B
 <!-- Provide a quick summary of what the model is/does. -->
- <img src="https://i.imgur.com/d8Bl04i.png" alt="PicToModel" width="350"/>
 
 ## Model Details
 Jellyfish is a model designed specifically for data preprocessing tasks, including entity matching, data imputation, error detection, and schema matching.
@@ -17,147 +17,74 @@ Its performance is competitive, standing up well against prior state-of-the-art
 
 We have released two versions of Jellyfish: the main version and a variant called "Jellyfish-reasoning."
 
- As the names suggest, the main version focuses on providing accurate, direct answers. In contrast, Jellyfish-reasoning can generate not only the final answer to a data processing problem but also the reasoning behind its decision.
-
- ## Prompt Template
- ```
- ### Instruction:
-
- <prompt> (without the <>)
-
- ### Response:
- ```
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
-
 
 - **Developed by:** Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada
 - **Funded by:** NEC Corporation, Osaka University
- - **Model type:** [More Information Needed]
 - **Language(s) (NLP):** English
 - **License:** cc-by-4.0
 - **Finetuned from model:** [Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
 
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
 
- Use the code below to get started with the model.
 
- [More Information Needed]
 
 ## Training Details
 
 ### Training Data
 
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
 
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
 
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
 
- <!-- This should link to a Dataset Card if possible. -->
 
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
 
- #### Software
 
- [hiyouga/LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) for fine-tuning.
 
- [vLLM](https://github.com/vllm-project/vllm) for inference.
 
- ## Model Examination [optional]
 
- <!-- Relevant interpretability work for the model goes here -->
 
- [More Information Needed]
 
- ## Environmental Impact
 
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
 
- <!--Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
 
- #### Hardware
 
- [More Information Needed]-->
 
 
- ## Citation [optional]
 
 <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
@@ -168,7 +95,6 @@ Use the code below to get started with the model.
 booktitle = {arXiv:2205.09911},
 year = {2022}
 }
-
 @software{hunterlee2023orcaplaty1
 title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model},
 author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
 
 ---
 # Jellyfish-0-13B
 <!-- Provide a quick summary of what the model is/does. -->
+ <img src="https://i.imgur.com/d8Bl04i.png" alt="PicToModel" width="330"/>
 
 ## Model Details
 Jellyfish is a model designed specifically for data preprocessing tasks, including entity matching, data imputation, error detection, and schema matching.
 
 We have released two versions of Jellyfish: the main version and a variant called "Jellyfish-reasoning."
 
+ As the names suggest, the main version focuses on providing accurate, direct answers. In contrast, Jellyfish-reasoning can generate not only the final answer to data processing queries but also the reasons behind its decisions.
 
 - **Developed by:** Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada
 - **Funded by:** NEC Corporation, Osaka University
 - **Language(s) (NLP):** English
 - **License:** cc-by-4.0
 - **Finetuned from model:** [Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
 
+ ## Prompt Template
+ ```
+ ### Instruction:
 
+ <prompt> (without the <>)
 
+ ### Response:
+ ```
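As a quick illustration, a task prompt can be wrapped in this template before it is sent to the model. The sketch below is only illustrative: the helper name and the example prompt are our assumptions, not part of the released card.

```python
# Minimal sketch: wrap a task prompt in the "### Instruction:" / "### Response:"
# template shown above. The helper name and example prompt are illustrative.
def build_jellyfish_prompt(task_prompt: str) -> str:
    return f"### Instruction:\n\n{task_prompt}\n\n### Response:\n"

if __name__ == "__main__":
    example = (
        "Is there an error in the attribute 'price' of the record "
        "[name: iphone 12, price: -699]? Choose your answer from: [Yes, No]"
    )
    print(build_jellyfish_prompt(example))
```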
 
 ## Training Details
 
 ### Training Data
+ We used the training and validation sets of the datasets for four data processing tasks from the paper [Can Foundation Models Wrangle Your Data?](https://arxiv.org/abs/2205.09911) to fine-tune both versions of Jellyfish.
 
+ The original datasets are available from the project's GitHub page, [HazyResearch/fm_data_tasks](https://github.com/HazyResearch/fm_data_tasks).
 
+ Through meticulous prompt engineering, we constructed datasets suitable for fine-tuning LLMs, mirroring the style of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca).
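For concreteness, a single fine-tuning example in an OpenOrca-like layout might look as sketched below; the field names, system message, and record values are assumptions for illustration, not the released data format.

```python
# Sketch: build one instruction-tuning example in an OpenOrca-style layout
# (system_prompt / question / response). Field names, the system message, and
# the record values are illustrative assumptions.
import json

record_a = {"name": "iphone 12 128gb", "price": "699"}
record_b = {"name": "apple iphone 12 (128 gb)", "price": "699.0"}

def serialize(record: dict) -> str:
    # Render a record as "[attribute: value, ...]", as in the prompts below.
    return "[" + ", ".join(f"{k}: {v}" for k, v in record.items()) + "]"

question = (
    "You are tasked with determining whether two records listed below are the same "
    "based on the information provided. Carefully compare the name, price for each "
    "record before making your decision.\n\n"
    f"Record A: {serialize(record_a)}\nRecord B: {serialize(record_b)}\n\n"
    "Are record A and record B the same entity? Choose your answer from: [Yes, No]"
)

example = {
    "system_prompt": "You are an AI assistant that performs data preprocessing tasks.",  # assumed
    "question": question,
    "response": "Yes",  # gold label taken from the source dataset
}
print(json.dumps(example, indent=2))
```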
 
+ ### Training Method
 
+ We used LoRA to speed up the training process, targeting the q_proj and v_proj modules.
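As a rough sketch of that setup, the snippet below attaches LoRA adapters to q_proj and v_proj with Hugging Face's peft library; the rank, alpha, and dropout values are illustrative assumptions, not the exact hyperparameters used to train Jellyfish.

```python
# Sketch: LoRA adapters on q_proj / v_proj, as described above.
# r, lora_alpha, and lora_dropout are assumed values for illustration.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Open-Orca/OpenOrca-Platypus2-13B",  # base model named in this card
    torch_dtype="auto",
)
lora_config = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # modules stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```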
 
+ ## Uses
 
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+ Here are the prompts we used for both fine-tuning the model and for inference. Feel free to explore different prompts on your own to achieve the best generation quality.
 
+ ### For Jellyfish-main
+ ```
+ You are tasked with determining whether two records listed below are the same based on the information provided. Carefully compare the {attribute 1}, {attribute 2}... for each record before making your decision.
 
+ Note: Missing values (N/A or "nan") should not be used as a basis for your decision.
 
+ Record A: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}...]\nRecord B: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}...]
 
+ Are record A and record B the same entity? Choose your answer from: [Yes, No]
+ ```
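For example, a filled-in prompt of this form can be run with an inference engine such as vLLM; the model path, attribute names, record values, and sampling settings below are placeholders, not part of the card.

```python
# Sketch: run a filled-in Jellyfish-main entity-matching prompt with vLLM.
# The model path, record values, and sampling settings are placeholders.
from vllm import LLM, SamplingParams

task_prompt = (
    "You are tasked with determining whether two records listed below are the same "
    "based on the information provided. Carefully compare the name, price for each "
    "record before making your decision.\n\n"
    'Note: Missing values (N/A or "nan") should not be used as a basis for your decision.\n\n'
    "Record A: [name: iphone 12 128gb, price: 699]\n"
    "Record B: [name: apple iphone 12 (128 gb), price: 699.0]\n\n"
    "Are record A and record B the same entity? Choose your answer from: [Yes, No]"
)

# Wrap the task prompt in the instruction template from this card.
full_prompt = f"### Instruction:\n\n{task_prompt}\n\n### Response:\n"

llm = LLM(model="path/to/jellyfish-13b")            # placeholder path to the weights
params = SamplingParams(temperature=0.0, max_tokens=8)
outputs = llm.generate([full_prompt], params)
print(outputs[0].outputs[0].text.strip())           # expected: "Yes" or "No"
```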
 
+ ### For Jellyfish-reasoning
+ ```
+ You are tasked with determining whether two products listed below are the same based on the information provided. Carefully examine all the attributes before making your decision.
 
+ Note: Missing values (N/A or "nan") should not be used as a basis for your decision.
 
+ Record A: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}...]\nRecord B: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}...]
 
+ Are record A and record B the same entity?
 
+ After your reasoning, finish your response on a separate line with, and ONLY with, your final answer. Choose your final answer from [Yes, No].
+ ```
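Because the final answer is expected alone on the last line, it can be recovered with a small post-processing step; the parser below is a sketch under that assumption.

```python
# Sketch: pull the final Yes/No answer out of a Jellyfish-reasoning response,
# assuming the model ends its output with only the final answer on its own line.
def parse_final_answer(response: str) -> str:
    last_line = response.strip().splitlines()[-1].strip().rstrip(".")
    if last_line.lower() in {"yes", "no"}:
        return last_line.capitalize()
    raise ValueError(f"No final Yes/No answer found in: {last_line!r}")

example = "Both records refer to the same 128 GB iPhone 12 at the same price.\nYes"
print(parse_final_answer(example))  # -> Yes
```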
 
+ ## Bias, Risks, and Limitations
 
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+ As of now, we've tested Jellyfish exclusively with the test sets from the benchmark datasets mentioned earlier.
 
+ We're in the process of assessing its performance on additional datasets.
 
+ ## Citation
 
 <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
 
 booktitle = {arXiv:2205.09911},
 year = {2022}
 }
 @software{hunterlee2023orcaplaty1
 title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model},
 author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},