---
model-index:
- name: open-instruct-llama2-sharegpt-dpo-7b
  results: []
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
language:
- en
base_model: meta-llama/Llama-2-7b-hf
---

<img src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/tulu-v2/Tulu%20V2%20banner.png" alt="TuluV2 banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Model Card for Open Instruct ShareGPT DPO Llama2 7B

This model belongs to the Tulu series, a family of language models trained to act as helpful assistants.
Open Instruct ShareGPT DPO Llama2 7B is a fine-tuned version of Llama 2 that was trained on the [ShareGPT dataset](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) and then further aligned with DPO.
Please check out our paper [Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2](https://arxiv.org/abs/2311.10702) for more!

## Model description

- **Model type:** A model belonging to a suite of instruction and RLHF tuned chat models trained on a mix of publicly available, synthetic and human-created datasets.
- **Language(s) (NLP):** Primarily English
- **License:** [AI2 ImpACT](https://allenai.org/impact-license) Low-risk license.
- **Finetuned from model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)

### Model Sources

- **Repository:** https://github.com/allenai/open-instruct
- **Model Family:** Other models and the dataset can be found in the [Tulu V2 collection](https://huggingface.co/collections/allenai/tulu-v2-suite-6551b56e743e6349aab45101).

## Intended uses & limitations

The model was fine-tuned on a filtered and preprocessed version of the [ShareGPT dataset](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered), which contains a diverse range of user-shared conversations with ChatGPT.
We then further aligned the model with a [Jax DPO trainer](https://github.com/hamishivi/EasyLM/blob/main/EasyLM/models/llama/llama_train_dpo.py) built on [EasyLM](https://github.com/young-geng/EasyLM) on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4.

<!-- You can find the datasets used for training Tulu V2 [here]() -->
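
As a rough illustration of the preprocessing step, a ShareGPT record can be converted into a list of chat messages along these lines (a minimal sketch; the actual filtering in open-instruct is more involved, and the `conversations`/`from`/`value` field names follow the ShareGPT format):

```python
# A minimal sketch of converting a ShareGPT-style record into chat messages.
# The real open-instruct preprocessing applies additional filtering and cleanup.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def sharegpt_to_messages(record):
    messages = []
    for turn in record["conversations"]:
        role = ROLE_MAP.get(turn["from"])
        if role is None:  # drop turns with unknown speakers
            continue
        messages.append({"role": role, "content": turn["value"]})
    return messages

example = {"conversations": [
    {"from": "human", "value": "What is DPO?"},
    {"from": "gpt", "value": "Direct Preference Optimization is ..."},
]}
print(sharegpt_to_messages(example))
```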

Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="allenai/open-instruct-llama2-sharegpt-dpo-7b", torch_dtype=torch.bfloat16, device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!
```
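
If the tokenizer on the Hub does not ship a chat template, the prompt can also be built by hand in the Tulu format shown in the example output above (a sketch; the generation settings are illustrative, not recommendations from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/open-instruct-llama2-sharegpt-dpo-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Tulu-style prompt format, matching the example output above.
prompt = "<|user|>\nHow many helicopters can a human eat in one sitting?\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```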

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The Tulu models have not been aligned to generate safe completions within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base Llama 2 models is also unknown, though it likely included a mix of web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.

### Training hyperparameters

The following hyperparameters were used during DPO training:
- learning_rate: 5e-07
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
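
For reference, the objective optimized during this stage is the standard DPO loss, which can be sketched as follows (a minimal illustration, not the exact EasyLM trainer code; the `beta` value shown is a placeholder, as the card does not state it):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratios of chosen vs. rejected completions under the policy
    # and under the frozen reference (SFT) model.
    policy_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # Push the policy's log-ratio above the reference's, scaled by beta.
    return -F.logsigmoid(beta * (policy_logratios - ref_logratios)).mean()
```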

## Citation

If you find this model useful in your work, please cite it with:

```bibtex
@misc{ivison2023changing,
      title={Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2},
      author={Hamish Ivison and Yizhong Wang and Valentina Pyatkin and Nathan Lambert and Matthew Peters and Pradeep Dasigi and Joel Jang and David Wadden and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
      year={2023},
      eprint={2311.10702},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

*Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md)*