---
license: cc-by-nc-4.0
language:
- en
---
Jellyfish-13B
Model Details
Jellyfish-13B is a large language model equipped with 13 billion parameters. It's tailored specifically for data preprocessing tasks, including entity matching, data imputation, error detection, and schema matching.
We fine-tuned the Open-Orca/OpenOrca-Platypus2-13B model on datasets pertinent to data preprocessing tasks. Its performance is competitive, rivaling previous state-of-the-art algorithms and LLMs such as OpenAI's GPT-3.5 and GPT-4 (as demonstrated in our earlier studies). Notably, as a 13B model, Jellyfish allows for cost-effective local execution without compromising data security.
We release two distinct versions of Jellyfish: Jellyfish-13B (the main branch) and Jellyfish-13B-Interpreter. As the names suggest, Jellyfish-13B is tailored to deliver precise, straightforward answers. In contrast, Jellyfish-13B-Interpreter is fine-tuned with data that includes reasoning and sequential thought processes for handling data preprocessing tasks, distilling knowledge from GPT-4.
The two versions are designed for different application scenarios. Jellyfish-13B is suitable for integration into larger data management systems due to its simple and clear responses that can be easily transformed into code. On the other hand, Jellyfish-13B-Interpreter is more user-oriented, with responses that provide users with in-depth data insights without requiring advanced coding skills or an intricate grasp of statistics.
More details about the model can be found in the Jellyfish paper.
- Developed by: Haochen Zhang, Yuyang Dong, Chuan Xiao, Masafumi Oyamada
- Contact: [email protected]
- Funded by: NEC Corporation, Osaka University
- Language(s) (NLP): English
- License: Non-Commercial Creative Commons license (CC BY-NC-4.0)
- Finetuned from model: Open-Orca/OpenOrca-Platypus2-13B
Performance on seen tasks
Task | Type | Dataset | Non-LLM SoTA¹ | GPT-3.5² | GPT-4² | Jellyfish-13B-1.1³ | Jellyfish-13B-Interpreter |
---|---|---|---|---|---|---|---|
Entity Matching | Seen | Fodors-Zagats | 100 | 100 | 100 | 100 | 100 |
Entity Matching | Seen | Beer | 94.37 | 96.30 | 100 | 96.77 | 100 |
Entity Matching | Seen | iTunes-Amazon | 97.06 | 96.43 | 100 | 98.11 | 96.15 |
Entity Matching | Seen | DBLP-ACM | 98.99 | 96.99 | 97.44 | 98.98 | 95.74 |
Entity Matching | Seen | DBLP-GoogleScholar | 95.60 | 76.12 | 91.87 | 98.51 | 89.45 |
Entity Matching | Seen | Amazon-Google | 75.58 | 66.53 | 74.21 | 81.34 | 56.64 |
Entity Matching | Unseen | Walmart-Amazon | 86.76 | 86.17 | 90.27 | 89.42 | 85.16 |
Entity Matching | Unseen | Abt-Buy | 89.33 | -- | 92.77 | 89.58 | -- |
Data Imputation | Seen | Restaurant | 77.20 | 94.19 | 97.67 | 94.19 | 93.02 |
Data Imputation | Seen | Buy | 96.50 | 98.46 | 100 | 100 | 100 |
Data Imputation | Unseen | Flipkart | 68.00 | -- | 89.94 | 81.68 | -- |
Data Imputation | Unseen | Phone | 86.70 | -- | 90.79 | 87.21 | -- |
Error Detection | Seen | Hospital | 94.40 | 90.74 | 90.74 | 95.59 | 65.66 |
Error Detection | Seen | Adult | 99.10 | 92.01 | 92.01 | 99.33 | 90.13 |
Error Detection | Unseen | Flights | 81.00 | -- | 83.48 | 82.52 | -- |
Error Detection | Unseen | Rayyan | 79.00 | -- | 81.95 | 90.65 | -- |
Schema Matching | Seen | Synthea | 38.50 | 57.14 | 66.67 | 36.36 | 30.77 |
Schema Matching | Seen | MIMIC | 20.00 | -- | 40.00 | 40.00 | -- |
Schema Matching | Unseen | CMS | 50.00 | -- | 19.35 | 59.29 | -- |
Few-shot prompting is disabled for Jellyfish-13B and Jellyfish-13B-Interpreter on seen datasets (zero-shot) and enabled on unseen datasets.
Accuracy is the metric for data imputation; the F1 score is used for the other tasks.
1. Non-LLM state of the art: Ditto for Entity Matching, SMAT for Schema Matching, HoloDetect for Error Detection (seen datasets), RAHA for Error Detection (unseen datasets), and IPM for Data Imputation.
2. Results from Large Language Models as Data Preprocessors, with few-shot prompting for GPT-3.5 and GPT-4.
3. The main branch has been updated to Jellyfish-13B version 1.1.
Performance on unseen tasks
Column Type Annotation
Dataset | RoBERTa (159 shots)¹ | GPT-3.5¹ | GPT-4 | Jellyfish-13B-1.1 |
---|---|---|---|---|
SOTAB | 79.20 | 89.47 | 91.55 | 82.00 |
Few-shot is disabled for Jellyfish-13B.
1. Results from Column Type Annotation using ChatGPT.
Attribute Value Extraction
Dataset | Stable Beluga 2 70B¹ | SOLAR 70B¹ | GPT-3.5¹ | GPT-4¹ | Jellyfish-13B-1.1 |
---|---|---|---|---|---|
AE-110k | 52.10 | 49.20 | 61.30 | 55.50 | 58.12 |
OA-Mine | 50.80 | 55.20 | 62.70 | 68.90 | 55.96 |
Few-shot is disabled for Jellyfish-13B.
1. Results from Product Attribute Value Extraction using Large Language Models.
Prompt Template
```
### Instruction:
<prompt> (without the <>)
### Response:
```
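As a minimal sketch of applying this template in Python (the `build_prompt` helper is our illustration, not part of any released API):

```python
def build_prompt(task_prompt: str) -> str:
    """Wrap a task-specific prompt in the instruction/response template above."""
    return f"### Instruction:\n{task_prompt}\n### Response:\n"
```

For example, `build_prompt("Are record A and record B the same entity? ...")` yields a string ready to be sent to the model.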
Training Details
Training Data
We utilized the training and validation sets from the paper Can Foundation Models Wrangle Your Data? to fine-tune Jellyfish. The original datasets are from HazyResearch/fm_data_tasks, RAHA, SMAT, and IPM. We revised this data and constructed an instruction-tuning dataset suitable for fine-tuning an LLM, mirroring the style of the OpenOrca dataset.
Training Method
We used LoRA to speed up the training process, targeting the q_proj and v_proj modules.
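A minimal sketch of this setup with the PEFT library is shown below; the rank, alpha, and dropout values are illustrative assumptions, as the exact hyperparameters are not listed here.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Base model as stated above; loading a 13B model requires substantial memory.
base = AutoModelForCausalLM.from_pretrained("Open-Orca/OpenOrca-Platypus2-13B")

# LoRA restricted to the attention query/value projections (q_proj, v_proj).
# r, lora_alpha, and lora_dropout are placeholders, not the trained values.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```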
Uses
For fast inference in practice, we strongly recommend running Jellyfish with vLLM.
We provide the prompts used for both the model's fine-tuning and inference. You can structure your data according to these prompts. However, we encourage experimenting with different prompts to potentially achieve optimal generation quality.
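A minimal zero-shot inference sketch with vLLM, assuming the model weights are available locally or via a Hugging Face repo id (the path and sampling settings below are placeholders):

```python
from vllm import LLM, SamplingParams

# Replace with the actual repo id or a local path to the weights.
llm = LLM(model="NECOUDBFM/Jellyfish-13B")
sampling = SamplingParams(temperature=0.0, max_tokens=256)

task_prompt = "..."  # one of the task prompts below, filled with your records
prompt = f"### Instruction:\n{task_prompt}\n### Response:\n"

outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```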
Jellyfish-13B
For Entity Matching
```
You are tasked with determining whether two records listed below are the same based on the information provided.
Carefully compare the {attribute 1}, {attribute 2}... for each record before making your decision.
Note: Missing values (N/A or "nan") should not be used as a basis for your decision.
Record A: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Record B: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Are record A and record B the same entity? Choose your answer from: [Yes, No].
```
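For instance, records kept as Python dicts can be serialized into the bracketed attribute lists these prompts expect; the helper below is illustrative only (the sample values are taken from the Amazon-Google example later in this card):

```python
def serialize(record: dict) -> str:
    """Render a record as "[attr 1: value 1, attr 2: value 2, ...]"."""
    return "[" + ", ".join(f"{k}: {v}" for k, v in record.items()) + "]"

record_a = {"name": "adobe creative suite cs3 web standard [ mac ]", "price": "999.0"}
record_b = {"name": "adobe creative suite 3 ( cs3 ) web standard", "price": "799.0"}
print(f"Record A: {serialize(record_a)}")
print(f"Record B: {serialize(record_b)}")
```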
For Data Imputation
```
You are presented with a {keyword} record that is missing a specific attribute: {attribute X}.
Your task is to deduce or infer the value of {attribute X} using the available information in the record.
You may be provided with fields like {attribute 1}, {attribute 2}, ... to help you in the inference.
Record: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Based on the provided record, what would you infer is the value for the missing attribute {attribute X}?
Answer only the value of {attribute X}.
```
For Error Detection
There are two forms of the error detection task. In the first form, a complete record is provided and the task is to determine whether the value of a specific attribute within it is erroneous. In the second form, only the value of a single attribute is given, and its correctness must be judged solely from the attribute's name and value. The two prompts below correspond to these two forms, respectively.
```
Your task is to determine if there is an error in the value of a specific attribute within the whole record provided.
The attributes may include {attribute 1}, {attribute 2}, ...
Errors may include, but are not limited to, spelling errors, inconsistencies, or values that don't make sense given the context of the whole record.
Record [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Attribute for Verification: [{attribute X}: {attribute X value}]
Question: Is there an error in the value of {attribute X}? Choose your answer from: [Yes, No].
```
```
Your task is to determine if there is an error in the value of a specific attribute.
The attributes may belong to a {keyword} record and could be one of the following: {attribute 1}, {attribute 2}, ...
Errors can include, but are not limited to, spelling errors, inconsistencies, or values that don't make sense for that attribute.
Note: Missing values (N/A or "nan") are not considered errors.
Attribute for Verification: [{attribute X}: {attribute X value}]
Question: Is there an error in the value of {attribute X}? Choose your answer from: [Yes, No].
```
For Schema Matching
```
Your task is to determine if the two attributes (columns) are semantically equivalent in the context of merging two tables.
Each attribute will be provided by its name and a brief description.
Your goal is to assess if they refer to the same information based on these names and descriptions provided.
Attribute A is [name: {value of name}, description: {value of description}].
Attribute B is [name: {value of name}, description: {value of description}].
Are Attribute A and Attribute B semantically equivalent? Choose your answer from: [Yes, No].
```
For Column Type Annotation
We follow the prompt in Column Type Annotation using ChatGPT (text+inst+2-step).
For Attribute Value Extraction
We follow the prompt in Product Attribute Value Extraction using Large Language Models (textual, w/o examples).
Jellyfish-13B-Interpreter
For Entity Matching
```
You are tasked with determining whether two products listed below are the same based on the information provided.
Carefully examine all the attributes before making your decision.
Note: Missing values (N/A or "nan") should not be used as a basis for your decision.
Record A: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Record B: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Are record A and record B the same entity?
After your reasoning, finish your response in a separate line with and ONLY with your final answer. Choose your final answer from [Yes, No].
```
For Data Imputation
```
You are presented with a {keyword} record that is missing a specific attribute: {attribute X}.
Your task is to deduce or infer the value of {attribute X} using the available information in the record.
You may be provided with fields like {attribute 1}, {attribute 2}, ... to help you in the inference.
Record: [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Based on the provided record, what would you infer is the value for the missing attribute {attribute X}?
After your reasoning, finish your response in a separate line with and ONLY with your final answer.
Your final answer should only consist of the value of {attribute X}.
```
For Error Detection
```
Your task is to determine if there is an error in the value of a specific attribute within the whole record provided.
Errors may include, but are not limited to, spelling errors, inconsistencies, or values that don't make sense given the context of the whole record.
Record [{attribute 1}: {attribute 1 value}, {attribute 2}: {attribute 2 value}, ...]
Attribute for verification: [{attribute X}: {attribute X value}]
Question: Is there an error in the value of {attribute X}?
After your reasoning, finish your response in a separate line with and ONLY with your final answer. Choose your final answer from [Yes, No].
```
```
Your task is to determine if there is an error in the value of a specific attribute.
The attributes may belong to a {keyword} record.
Errors can include, but are not limited to, spelling errors, inconsistencies, or values that don't make sense for that attribute.
Note: Missing values (N/A or "nan") are not considered errors.
Attribute for Verification: [{attribute X}: {attribute X value}]
Question: Is there an error in the value of {attribute X}?
After your reasoning, finish your response in a separate line with and ONLY with your final answer. Choose your final answer from [Yes, No].
```
For Schema Matching
```
Your task is to determine if the two attributes (columns) are semantically equivalent in the context of merging two tables.
Each attribute will be provided by its name and a brief description.
Your goal is to assess if they refer to the same information based on these names and descriptions provided.
Attribute A is [name: {value of name}, description: {value of description}].
Attribute B is [name: {value of name}, description: {value of description}].
After your reasoning, finish your response in a separate line with and ONLY with your final answer. Choose your final answer from [Yes, No].
```
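Since every Jellyfish-13B-Interpreter prompt asks the model to finish with the final answer alone on a separate line, the verdict can be recovered with a simple sketch like this (assuming the model followed that instruction):

```python
def extract_final_answer(response: str) -> str:
    """Return the last non-empty line of the response, i.e. the final answer."""
    lines = [line.strip() for line in response.splitlines() if line.strip()]
    return lines[-1] if lines else ""
```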
Sample Responses from Jellyfish-13B-Interpreter
We provide a few sample responses from Jellyfish-13B-Interpreter to demonstrate its performance.
For easier readability, we display the raw data record instead of the entire prompt.
Product A: [name: "adobe creative suite cs3 web standard [ mac ]", manufacturer: "adobe", price: "999.0"]
Product B: [name: "adobe creative suite 3 ( cs3 ) web standard 3 windows upsell from dreamweaver or flash", price: "799.0"]

Product A and Product B are not the same. Although both products are Adobe Creative Suite CS3 Web Standard, they are for different operating systems. Product A is for Mac and Product B is for Windows. This difference in operating systems indicates that the products are not the same, as they are designed for different hardware and software environments. Additionally, Product B mentions that it is an upsell from Dreamweaver or Flash, which further differentiates it from Product A.
No
Article A: [title: "l/mrp : a buffer management strategy for interactive continuous data flows in a multimedia dbms", authors: "f moser , a kraiss , w klas", venue: "vldb", year: "1995"]
Article B: [title: "l/mrp : a buffer management strategy for interactive continuous data flows in a multimedia dbms", authors: "f moser , w klas", venue: "proceedings of the international conference on very large … ,", year: "1995.0"]

The titles of both Article A and Article B are identical, which is a strong indicator that they are the same article. The authors of both articles are also the same, with the exception that Article B does not include the author "a kraiss". However, this could be due to a mistake in data entry or a missing author in Article B. The venue for both articles is also similar, with Article A listed as "vldb" and Article B listed as "proceedings of the international conference on very large … ,", which is likely the same conference. The year of publication is also the same for both articles, with Article A listed as "1995" and Article B listed as "1995.0". Given these similarities, it is reasonable to conclude that Article A and Article B are the same.
Yes
Record: [name: "le montrachet bistro", addr: "3000 paradise rd.", phone: "702-732-5651", type: "french bistro"]
Missing attribute: "City"

The restaurant record provides the name, address, phone number, and type of restaurant. The name and type do not provide any specific information about the city. However, the address and phone number can be used to infer the city. The address, "3000 paradise rd.", could potentially be in many different cities. However, the phone number, "702-732-5651", provides a clue. The area code, 702, is used in the state of Nevada, specifically in the Las Vegas metropolitan area. Therefore, based on the area code of the phone number, it can be inferred that the restaurant is located in Las Vegas.
Las Vegas