Update README.md #1
by chuanxiao1983 - opened

README.md CHANGED
@@ -37,6 +37,18 @@ More details about the model can be found in the [Jellyfish paper](https://arxiv
 - **License:** Non-Commercial Creative Commons license (CC BY-NC-4.0)
 - **Finetuned from model:** [Open-Orca/OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
 
+## Citation
+If you find our work useful, please give us credit by citing:
+
+```
+@article{zhang2023jellyfish,
+  title={Jellyfish: A Large Language Model for Data Preprocessing},
+  author={Zhang, Haochen and Dong, Yuyang and Xiao, Chuan and Oyamada, Masafumi},
+  journal={arXiv preprint arXiv:2312.01678},
+  year={2023}
+}
+```
+
 ## Performance on seen tasks
 
 | Task | Type | Dataset | Non-LLM SoTA<sup>1</sup> | GPT-3.5<sup>2</sup> | GPT-4<sup>2</sup> | Jellyfish-13B-1.1<sup>3</sup>| Jellyfish-13B-Interpreter |