|
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- ocean
- text-generation-inference
language:
- en
---
|
|
|
## Model description
|
This repo contains OceanGPT, a large language model for ocean science tasks, trained with [KnowLM](https://github.com/zjunlp/KnowLM).

Note that OceanGPT is under active development, so the current model is not the final version.
|
|
|
## Intended uses
|
You can download the model to generate responses, or contact us via [email]([email protected]) for the online test demo.
|
The Chinese version of OceanGPT can be found [here](https://huggingface.co/zjunlp/OceanGPT-7b-CN). |
|
|
|
## How to use OceanGPT
|
We will provide several examples soon, and you can modify the input according to your needs.
|
|
|
```python
>>> from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

>>> # Option 1: use the high-level pipeline API
>>> pipe = pipeline("text-generation", model="zjunlp/OceanGPT-7b")

>>> # Option 2: load the tokenizer and model directly
>>> tokenizer = AutoTokenizer.from_pretrained("zjunlp/OceanGPT-7b")
>>> model = AutoModelForCausalLM.from_pretrained("zjunlp/OceanGPT-7b")
```
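
Once the tokenizer and model are loaded, a minimal generation call might look like the sketch below (the prompt is only an illustrative example):

```python
>>> # Minimal generation sketch; the prompt here is only an illustrative example.
>>> prompt = "What are the main research areas of ocean science?"
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=256)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```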
|
|
|
## How to evaluate your model in OceanBench
|
|
|
We will provide several examples soon, and you can modify the input according to your needs.
|
|
|
Note: we are conducting the final checks on OceanBench and will be uploading it to Hugging Face soon. |
|
|
|
```python |
|
>>> from datasets import load_dataset

>>> # Load the OceanBench evaluation dataset (to be released on Hugging Face)
>>> dataset = load_dataset("zjunlp/OceanBench")
|
``` |
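
As a rough sketch of how an evaluation loop could be wired up (the `test` split and the `instruction`/`output` field names below are assumptions, since OceanBench has not been released yet):

```python
>>> # Hypothetical evaluation loop: the "test" split and the "instruction"/"output"
>>> # field names are assumptions until OceanBench is publicly released.
>>> from transformers import pipeline

>>> pipe = pipeline("text-generation", model="zjunlp/OceanGPT-7b")
>>> for example in dataset["test"].select(range(3)):
...     prediction = pipe(example["instruction"], max_new_tokens=256)[0]["generated_text"]
...     print(prediction)
...     print(example["output"])
```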
|
|
|
## How to cite
|
|
|
```bibtex |
|
@article{bi2023oceangpt,
  title={OceanGPT: A Large Language Model for Ocean Science Tasks},
  author={Bi, Zhen and Zhang, Ningyu and Xue, Yida and Ou, Yixin and Ji, Daxiong and Zheng, Guozhou and Chen, Huajun},
  journal={arXiv preprint arXiv:2310.02031},
  year={2023}
}
|
``` |
|
|