---
license: cc-by-nc-4.0
language:
- en
---
# Jellyfish-13B
<img src="https://i.imgur.com/E1vqCIw.png" alt="PicToModel" width="330"/>


We also build [Jellyfish-7B](https://huggingface.co/NECOUDBFM/Jellyfish-7B) and [Jellyfish-8B](https://huggingface.co/NECOUDBFM/Jellyfish-8B), lighter versions of Jellyfish!\
They retain excellent data preprocessing performance while delivering faster inference and better reasoning ability!

😄 We strongly **recommend using the 7B and 8B models** for their generalizability to unseen tasks and their reasoning ability!


## Model Details
Jellyfish-13B is a large language model with 13 billion parameters. It is tailored specifically for data preprocessing tasks, including error detection, data imputation, schema matching, and entity matching.

We fine-tuned the [Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B) model using the datasets pertinent to data preprocessing tasks.
Its performance is competitive, rivaling previous state-of-the-art algorithms and LLMs such as OpenAI's GPT-3.5 and GPT-4 ([as demonstrated in our earlier studies](https://arxiv.org/abs/2308.16361)).
Notably, as a 13B model, Jellyfish allows for cost-effective local execution without compromising data security.
Additionally, despite being tuned for data preprocessing, Jellyfish retains strong general NLP performance, as evidenced by the NLP benchmark score comparison between [Jellyfish](https://llm.extractum.io/model/NECOUDBFM%2FJellyfish,G7X4BD9tZSOCFxjGcY3kY) and [OpenOrca-Platypus2](https://llm.extractum.io/model/Open-Orca%2FOpenOrca-Platypus2-13B,5FA1mWWk7x7RRbnn3bat5s).

We release two distinct versions of Jellyfish: Jellyfish-13B (the main branch) and Jellyfish-13B-Interpreter (the alternative branch).
As the names suggest, Jellyfish-13B is tailored to deliver precise, straightforward answers.
In contrast, Jellyfish-13B-Interpreter is fine-tuned with data that includes reasoning and sequential thought processes for handling data preprocessing tasks, distilling knowledge from GPT-4.

The two versions are designed for different application scenarios.
Jellyfish-13B is suitable for integration into larger data management systems, thanks to its simple, clear responses that can easily be turned into code in a data management/analysis pipeline.
Jellyfish-13B-Interpreter, on the other hand, is more user-oriented, with responses that provide in-depth data insights without requiring advanced coding skills or an intricate grasp of statistics.

More details about the model can be found in the [Jellyfish paper](https://arxiv.org/abs/2312.01678).

- **Developed by:** Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada  
- **Contact: [email protected]**  
- **Funded by:** NEC Corporation, Osaka University  
- **Language(s) (NLP):** English  
- **License:** Non-Commercial Creative Commons license (CC BY-NC-4.0)  
- **Finetuned from model:** [Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)

## Citation
If you find our work useful, please give us credit by citing:

```
@article{zhang2023jellyfish,
  title={Jellyfish: A Large Language Model for Data Preprocessing},
  author={Zhang, Haochen and Dong, Yuyang and Xiao, Chuan and Oyamada, Masafumi},
  journal={arXiv preprint arXiv:2312.01678},
  year={2023}
}
```

## Performance on seen tasks

| Task            | Type   | Dataset           | Non-LLM SoTA<sup>1</sup> | GPT-3.5<sup>2</sup> | GPT-4<sup>2</sup>  | GPT-4o | Table-GPT | Jellyfish-7B | Jellyfish-8B | Jellyfish-13B |
|-----------------|--------|-------------------|-----------------|--------|--------|--------|-----------|--------------|--------------|---------------|
| Error Detection | Seen   | Adult             | *99.10*         | 99.10  | 92.01  | 83.58  | --        | 77.40        | 73.74        | **99.33**     |
| Error Detection | Seen   | Hospital          | 94.40           | **97.80** | 90.74  | 44.76  | --        | 94.51        | 93.40        | *95.59*       |
| Error Detection | Unseen | Flights           | 81.00           | --     | **83.48** | 66.01  | --        | 69.15        | 66.21        | *82.52*       |
| Error Detection | Unseen | Rayyan            | 79.00           | --     | *81.95* | 68.53  | --        | 75.07        | 81.06        | **90.65**     |
| Data Imputation | Seen   | Buy               | 96.50           | 98.50  | **100** | **100** | --        | 98.46        | 98.46        | **100**       |
| Data Imputation | Seen   | Restaurant        | 77.20           | 88.40  | **97.67** | 90.70  | --        | 89.53        | 87.21        | 89.53         |
| Data Imputation | Unseen | Flipkart          | 68.00           | --     | **89.94** | 83.20  | --        | 87.14        | *87.48*      | 81.68         |
| Data Imputation | Unseen | Phone             | 86.70           | --     | **90.79** | 86.78  | --        | 86.52        | 85.68        | *87.21*       |
| Schema Matching | Seen   | MIMIC-III         | 20.00           | --     | 40.00   | 29.41  | --        | **53.33**    | *45.45*      | 40.00         |
| Schema Matching | Seen   | Synthea           | 38.50           | 45.20  | **66.67** | 6.56   | --        | 55.56        | 47.06        | 56.00         |
| Schema Matching | Unseen | CMS               | *50.00*         | --     | 19.35   | 22.22  | --        | 42.86        | 38.10        | **59.29**     |
| Entity Matching | Seen   | Amazon-Google     | 75.58           | 63.50  | 74.21  | 70.91  | 70.10     | **81.69**    | *81.42*      | 81.34         |
| Entity Matching | Seen   | Beer              | 94.37           | **100** | **100** | 90.32  | 96.30     | **100.00**   | **100.00**   | 96.77         |
| Entity Matching | Seen   | DBLP-ACM          | **98.99**       | 96.60  | 97.44  | 95.87  | 93.80     | 98.65        | 98.77        | *98.98*       |
| Entity Matching | Seen   | DBLP-GoogleScholar| *95.70*         | 83.80  | 91.87  | 90.45  | 92.40     | 94.88        | 95.03        | **98.51**     |
| Entity Matching | Seen   | Fodors-Zagats     | **100**         | **100** | **100** | 93.62  | **100**   | **100**      | **100**      | **100**       |
| Entity Matching | Seen   | iTunes-Amazon     | 97.06           | *98.20*| **100** | 98.18  | 94.30     | 96.30        | 96.30        | 98.11         |
| Entity Matching | Unseen | Abt-Buy           | 89.33           | --     | **92.77** | 78.73  | --        | 86.06        | 88.84        | *89.58*       |
| Entity Matching | Unseen | Walmart-Amazon    | 86.89           | 87.00  | **90.27** | 79.19  | 82.40     | 84.91        | 85.24        | *89.42*       |
| Avg             |        |                   | 80.44           | -      | *84.17* | 72.58  | -         | 82.74        | 81.55        | **86.02**     |

_For GPT-3.5 and GPT-4, we used the few-shot approach on all datasets. For Jellyfish models, the few-shot approach is disabled on seen datasets and enabled on unseen datasets._   
_We use accuracy as the metric for data imputation and the F1 score for the other tasks._ 

1. Non-LLM SoTA methods:
   - [HoloDetect](https://arxiv.org/abs/1904.02285) for Error Detection seen datasets
   - [RAHA](https://dl.acm.org/doi/10.1145/3299869.3324956) for Error Detection unseen datasets
   - [IPM](https://ieeexplore.ieee.org/document/9458712) for Data Imputation
   - [SMAT](https://www.researchgate.net/publication/353920530_SMAT_An_Attention-Based_Deep_Learning_Solution_to_the_Automation_of_Schema_Matching) for Schema Matching
   - [Ditto](https://arxiv.org/abs/2004.00584) for Entity Matching
2. Results from [Large Language Models as Data Preprocessors](https://arxiv.org/abs/2308.16361)

## Performance on unseen tasks

### Column Type Annotation

| Dataset           | RoBERTa (159 shots)<sup>1</sup> | GPT-3.5<sup>1</sup> | GPT-4  | GPT-4o | Jellyfish-7B | Jellyfish-8B | Jellyfish-13B |
|--------|-----------------|--------|--------|--------|--------------|--------------|---------------|
| SOTAB | 79.20 | 89.47 | 91.55 | 65.05 | 83 | 76.33 | 82 |

_Few-shot is disabled for Jellyfish models._   

1. Results from [Column Type Annotation using ChatGPT](https://arxiv.org/abs/2306.00745)

### Attribute Value Extraction

| Dataset |Stable Beluga 2 70B<sup>1</sup> | SOLAR 70B<sup>1</sup> | GPT-3.5<sup>1</sup> | GPT-4 <sup>1</sup>|  GPT-4o | Jellyfish-7B | Jellyfish-8B | Jellyfish-13B |
| ---- | ---- | ---- | ---- | ---- | ---- | ----| ----| ----|
| AE-110k | 52.10 | 49.20 | 61.30 | 55.50 | 55.77 | 56.09 |59.55 | 58.12 |
| OA-Mine | 50.80 | 55.20 | 62.70 | 68.90 | 60.20 | 51.98 | 59.22 | 55.96 |

_Few-shot is disabled for Jellyfish models._   

1. Results from [Product Attribute Value Extraction using Large Language Models](https://arxiv.org/abs/2310.12537)

## Prompt Template
```
### Instruction:

<prompt> (without the <>)

### Response:
```
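For use in a script, the template above can be assembled with simple string formatting. The helper below is an illustrative sketch (not part of any released tooling); the system message is the one used in the inference examples later in this card:

```python
def build_prompt(system_message: str, user_message: str) -> str:
    """Assemble a prompt following the Jellyfish template above."""
    return f"{system_message}\n\n### Instruction:\n\n{user_message}\n\n### Response:\n\n"

prompt = build_prompt(
    "You are an AI assistant that follows instruction extremely well. Help as much as you can.",
    "Hello, world.",
)
print(prompt)
```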

## Training Details

### Training Data
We utilized the training and validation sets from the paper [Can Foundation Models Wrangle Your Data?](https://arxiv.org/abs/2205.09911) to fine-tune Jellyfish.
The original datasets are from [HazyResearch/fm_data_tasks](https://github.com/HazyResearch/fm_data_tasks), [RAHA](https://github.com/BigDaMa/raha), [SMAT](https://github.com/JZCS2018/SMAT), and [IPM](https://ieeexplore.ieee.org/document/9458712).
Based on these datasets, we constructed an instruction tuning dataset for fine-tuning LLMs, mirroring the style of [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).

### Training Method

We used LoRA to speed up the training process, targeting the q_proj, k_proj, v_proj, and o_proj modules.

## Uses

To accelerate inference, we strongly recommend running Jellyfish with [vLLM](https://github.com/vllm-project/vllm).
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Python Script
We provide two simple Python code examples for inference using the Jellyfish model.  

#### Using Transformers and Torch Modules
<div style="height: auto; max-height: 400px; overflow-y: scroll;">
  
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch

if torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

# Model will be automatically downloaded from HuggingFace model hub if not cached.
# Model files will be cached in "~/.cache/huggingface/hub/models--NECOUDBFM--Jellyfish/" by default.
# You can also download the model manually and replace the model name with the path to the model files.
model = AutoModelForCausalLM.from_pretrained(
    "NECOUDBFM/Jellyfish",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("NECOUDBFM/Jellyfish")

system_message = "You are an AI assistant that follows instruction extremely well. Help as much as you can."

# You need to define the user_message variable based on the task and the data you want to test on.
user_message = "Hello, world."

prompt = f"{system_message}\n\n### Instruction:\n\n{user_message}\n\n### Response:\n\n"
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(device)

# You can modify the sampling parameters according to your needs.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.35,
    top_p=0.9,
)

with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=1024,
        pad_token_id=tokenizer.eos_token_id,
        repetition_penalty=1.15,
    )

output = generation_output.sequences
response = tokenizer.decode(
    output[:, input_ids.shape[-1] :][0], skip_special_tokens=True
).strip()

print(response)

```
</div>

#### Using vLLM
<div style="height: auto; max-height: 400px; overflow-y: scroll;">
  
```python
from vllm import LLM, SamplingParams

# To use vLLM for inference, you need to download the model files, either via the Hugging Face model hub or manually.
# You should modify the path to the model according to your local environment.
path_to_model = (
    "/workspace/models/Jellyfish"
)

model = LLM(model=path_to_model)

# You can modify the sampling parameters according to your needs.
# Caution: The stop parameter should not be changed.
sampling_params = SamplingParams(
    temperature=0.35,
    top_p=0.9,
    max_tokens=1024,
    stop=["### Instruction:"],
)

system_message = "You are an AI assistant that follows instruction extremely well. Help as much as you can."

# You need to define the user_message variable based on the task and the data you want to test on.
user_message = "Hello, world."

prompt = f"{system_message}\n\n### Instruction:\n\n{user_message}\n\n### Response:\n\n"
outputs = model.generate(prompt, sampling_params)
response = outputs[0].outputs[0].text.strip()
print(response)

```
</div>

### Prompts

We provide the prompts used for both fine-tuning and inference.
You can structure your data according to these prompts.
Moreover, we encourage experimenting with different prompts to potentially achieve optimal generation quality.

### Jellyfish-13B
#### For Error Detection
_There are two forms of the error detection task.
In the first form, a complete record row is provided, and the task is to determine if a specific value is erroneous.
In the second form, only the value of a specific attribute is given, and the decision about its correctness is based solely on the attribute's name and value.
The subsequent prompt examples pertain to these two forms, respectively._
```
Your task is to determine if there is an error in the value of a specific attribute within the whole record provided.
The attributes may include {attribute 1}, {attribute 2}, ...
Errors may include, but are not limited to, spelling errors, inconsistencies, or values that don't make sense given the context of the whole record.
Record [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Attribute for Verification: [{attribute X}: {attribute X value}]
Question: Is there an error in the value of {attribute X}? Choose your answer from: [Yes, No].
```
```
Your task is to determine if there is an error in the value of a specific attribute.
The attributes may belong to a {keyword} record and could be one of the following: {attribute 1}, {attribute 2}, ...
Errors can include, but are not limited to, spelling errors, inconsistencies, or values that don't make sense for that attribute.  
Note: Missing values (N/A or \"nan\") are not considered errors.
Attribute for Verification: [{attribute X}: {attribute X value}]
Question: Is there an error in the value of {attribute X}? Choose your answer from: [Yes, No].
```
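As a sketch, the first template can be filled from a record represented as a Python dict (`error_detection_prompt` is an illustrative helper, not part of any released tooling):

```python
def error_detection_prompt(record: dict, attribute: str) -> str:
    """Fill the first error-detection template with a concrete record."""
    attrs = ", ".join(record)
    record_str = ", ".join(f"{k}: {v}" for k, v in record.items())
    return (
        "Your task is to determine if there is an error in the value of a "
        "specific attribute within the whole record provided.\n"
        f"The attributes may include {attrs}.\n"
        "Errors may include, but are not limited to, spelling errors, "
        "inconsistencies, or values that don't make sense given the context "
        "of the whole record.\n"
        f"Record [{record_str}]\n"
        f"Attribute for Verification: [{attribute}: {record[attribute]}]\n"
        f"Question: Is there an error in the value of {attribute}? "
        "Choose your answer from: [Yes, No]."
    )

print(error_detection_prompt({"name": "cafe juanita", "city": "seattle"}, "city"))
```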
#### For Data Imputation
```
You are presented with a {keyword} record that is missing a specific attribute: {attribute X}.
Your task is to deduce or infer the value of {attribute X} using the available information in the record.  
You may be provided with fields like {attribute 1}, {attribute 2}, ... to help you in the inference.  
Record: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]  
Based on the provided record, what would you infer is the value for the missing attribute {attribute X}?  
Answer only the value of {attribute X}.
```
#### For Schema Matching
```
Your task is to determine if the two attributes (columns) are semantically equivalent in the context of merging two tables.
Each attribute will be provided by its name and a brief description.
Your goal is to assess if they refer to the same information based on these names and descriptions provided.
Attribute A is [name: {value of name}, description: {value of description}].
Attribute B is [name: {value of name}, description: {value of description}].
Are Attribute A and Attribute B semantically equivalent? Choose your answer from: [Yes, No].
```
#### For Entity Matching
```
You are tasked with determining whether two records listed below are the same based on the information provided.
Carefully compare the {attribute 1}, {attribute 2}... for each record before making your decision.  
Note: Missing values (N/A or \"nan\") should not be used as a basis for your decision.  
Record A: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Record B: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]  
Are record A and record B the same entity? Choose your answer from: [Yes, No].
```

### For Column Type Annotation

We follow the prompt in [Column Type Annotation using ChatGPT](https://arxiv.org/abs/2306.00745) (text+inst+2-step).  

### For Attribute Value Extraction

We follow the prompt in [Product Attribute Value Extraction using Large Language Models](https://arxiv.org/abs/2310.12537) (textual, w/o examples).  


### Jellyfish-13B-Interpreter
#### For Error Detection
```
Your task is to determine if there is an error in the value of a specific attribute within the whole record provided.
Errors may include, but are not limited to, spelling errors, inconsistencies, or values that don't make sense given the context of the whole record.
Record [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Attribute for verification: [{attribute X}: {attribute X value}]
Question: Is there an error in the value of {attribute X}?
After your reasoning, finish your response in a separate line with and ONLY with your final answer. Choose your final answer from [Yes, No].
```
```
Your task is to determine if there is an error in the value of a specific attribute.
The attributes may belong to a {keyword} record.
Errors can include, but are not limited to, spelling errors, inconsistencies, or values that don't make sense for that attribute.  
Note: Missing values (N/A or \"nan\") are not considered errors.
Attribute for Verification: [{attribute X}: {attribute X value}]
Question: Is there an error in the value of {attribute X}?
After your reasoning, finish your response in a separate line with and ONLY with your final answer. Choose your final answer from [Yes, No].
```
#### For Data Imputation
```
You are presented with a {keyword} record that is missing a specific attribute: {attribute X}.
Your task is to deduce or infer the value of {attribute X} using the available information in the record.
You may be provided with fields like {attribute 1}, {attribute 2}, ... to help you in the inference.
Record: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Based on the provided record, what would you infer is the value for the missing attribute {attribute X}?
After your reasoning, finish your response in a separate line with and ONLY with your final answer.
Your final answer should only consist of the value of {attribute X}.
```
#### For Schema Matching
```
Your task is to determine if the two attributes (columns) are semantically equivalent in the context of merging two tables.
Each attribute will be provided by its name and a brief description.
Your goal is to assess if they refer to the same information based on these names and descriptions provided.
Attribute A is [name: {value of name}, description: {value of description}].
Attribute B is [name: {value of name}, description: {value of description}].
After your reasoning, finish your response in a separate line with and ONLY with your final answer. Choose your final answer from [Yes, No].
```
#### For Entity Matching
```
You are tasked with determining whether two products listed below are the same based on the information provided.
Carefully examine all the attributes before making your decision.  
Note: Missing values (N/A or \"nan\") should not be used as a basis for your decision.  
Record A: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]  
Record B: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]  
Are record A and record B the same entity?  
After your reasoning, finish your response in a separate line with and ONLY with your final answer. Choose your final answer from [Yes, No].  
```
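Since Jellyfish-13B-Interpreter places its final answer alone on the last line of its response, a pipeline can recover the answer with a small parser (an illustrative sketch, not official tooling):

```python
def parse_final_answer(response: str) -> str:
    """Extract the final answer, which the Interpreter model emits on its last non-empty line."""
    lines = [line.strip() for line in response.strip().splitlines() if line.strip()]
    return lines[-1]

reasoning = "The titles of both articles are identical.\nThe authors also match.\nYes"
print(parse_final_answer(reasoning))  # Yes
```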

## Sample Responses from Jellyfish-13B-Interpreter
We provide a few sample responses from Jellyfish-13B-Interpreter to demonstrate its performance.

_For easier readability, we display the raw data record instead of the entire prompt._

1. ```
   Product A: [name: \"adobe creative suite cs3 web standard [ mac ]\", manufacturer: \"adobe\", price: \"999.0\"]
   Product B: [name: \"adobe creative suite 3 ( cs3 ) web standard 3 windows upsell from dreamweaver or flash\", price: \"799.0\"]
   
   Product A and Product B are not the same. Although both products are Adobe Creative Suite CS3 Web Standard, they are for different operating systems.
   Product A is for Mac and Product B is for Windows. This difference in operating systems indicates that the products are not the same, as they are designed for different hardware and software environments.
   Additionally, Product B mentions that it is an upsell from Dreamweaver or Flash, which further differentiates it from Product A.
   No
   ```
2. ```
   Article A: [title: \"l/mrp : a buffer management strategy for interactive continuous data flows in a multimedia dbms\", authors: \"f moser , a kraiss , w klas\", venue: \"vldb\", year: \"1995\"]
   Article B: [title: \"l/mrp : a buffer management strategy for interactive continuous data flows in a multimedia dbms\", authors: \"f moser , w klas\", venue: \"proceedings of the international conference on very large & hellip ; ,\", year: \"1995.0\"]
   
   The titles of both Article A and Article B are identical, which is a strong indicator that they are the same article.
   The authors of both articles are also the same, with the exception that Article B does not include the author \"a kraiss\". However, this could be due to a mistake in data entry or a missing author in Article B.
   The venue for both articles is also similar, with Article A listed as \"vldb\" and Article B listed as \"proceedings of the international conference on very large & hellip ; ,\" which is likely the same conference.
   The year of publication is also the same for both articles, with Article A listed as \"1995\" and Article B listed as \"1995.0\".
   Given these similarities, it is reasonable to conclude that Article A and Article B are the same.
   Yes
   ```
3. ```
   Record: [name: "le montrachet bistro", addr: "3000 paradise rd.", phone: "702-732-5651", type: "french bistro"].
   missing attribute: "City"

   The restaurant record provides the name, address, phone number, and type of restaurant. The name and type do not provide any specific information about the city. However, the address and phone number can be used to infer the city.
   The address, "3000 paradise rd.", could potentially be in many different cities. However, the phone number, "702-732-5651", provides a clue. The area code, 702, is used in the state of Nevada, specifically in the Las Vegas metropolitan area.
   Therefore, based on the area code of the phone number, it can be inferred that the restaurant is located in Las Vegas.
   Las Vegas
   ```
   



