---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
thumbnail: >-
  https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- Intel/orca_dpo_pairs
- argilla/distilabel-math-preference-dpo
- Open-Orca/OpenOrca
- OpenAssistant/oasst2
- HuggingFaceH4/ultrachat_200k
- meta-math/MetaMathQA
widget:
- text: "<|prompt|>Why is drinking water so healthy?</s><|answer|>"
---
# Model Card
## Summary
h2o-danube-1.8b-chat is a chat model with 1.8 billion parameters, fine-tuned by H2O.ai. We release three versions of this model:
| Model Name | Description |
|:-----------------------------------------------------------------------------------|:----------------|
| [h2oai/h2o-danube-1.8b-base](https://huggingface.co/h2oai/h2o-danube-1.8b-base) | Base model |
| [h2oai/h2o-danube-1.8b-sft](https://huggingface.co/h2oai/h2o-danube-1.8b-sft) | SFT tuned |
| [h2oai/h2o-danube-1.8b-chat](https://huggingface.co/h2oai/h2o-danube-1.8b-chat) | SFT + DPO tuned |
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
## Model Architecture
We adjust the Llama 2 architecture for a total of around 1.8b parameters. We use the original Llama 2 tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 16,384. We incorporate sliding window attention from Mistral with a window size of 4,096.
The details of the model architecture are:
| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 24 |
| n_heads | 32 |
| n_query_groups | 8 |
| n_embd | 2560 |
| vocab size | 32000 |
| sequence length | 16384 |
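These hyperparameters can be cross-checked against the published configuration, for example with a short `transformers` snippet (a minimal sketch; the attribute names are the standard Mistral config fields):
```python
from transformers import AutoConfig

# Fetch the model configuration from the Hugging Face Hub
config = AutoConfig.from_pretrained("h2oai/h2o-danube-1.8b-chat")

# These fields correspond to the hyperparameters in the table above
print(config.num_hidden_layers)        # n_layers: 24
print(config.num_attention_heads)      # n_heads: 32
print(config.num_key_value_heads)      # n_query_groups: 8
print(config.hidden_size)              # n_embd: 2560
print(config.vocab_size)               # vocab size: 32000
print(config.max_position_embeddings)  # sequence length: 16384
print(config.sliding_window)           # sliding window: 4096
```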
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.
```bash
pip install transformers==4.36.1
```
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="h2oai/h2o-danube-1.8b-chat",
torch_dtype=torch.bfloat16,
device_map="auto",
)
# We use the HF Tokenizer chat template to format each message
# https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{"role": "user", "content": "Why is drinking water so healthy?"},
]
prompt = pipe.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
res = pipe(
prompt,
max_new_tokens=256,
)
print(res[0]["generated_text"])
# <|prompt|>Why is drinking water so healthy?</s><|answer|> Drinking water is healthy for several reasons: [...]
```
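If you prefer more control over generation, the tokenizer and model can also be used directly instead of the pipeline (a minimal sketch equivalent to the call above; the generation settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube-1.8b-chat")
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube-1.8b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Why is drinking water so healthy?"},
]
# Apply the chat template and tokenize in one step
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```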
## Benchmarks
Commonsense reasoning, world knowledge, and reading comprehension, evaluated 0-shot:
| Benchmark | acc_n |
|:--------------|:--------:|
| ARC-easy | 67.51 |
| ARC-challenge | 39.25 |
| BoolQ | 77.89 |
| Hellaswag | 67.60 |
| OpenBookQA | 39.20 |
| PiQA | 76.71 |
| TriviaQA | 36.29 |
| Winogrande | 65.35 |
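Numbers of this kind are typically produced with EleutherAI's lm-evaluation-harness. The exact harness version and task configuration used for this card are not stated here, so the following is only a sketch of how a comparable 0-shot run might look with the harness's Python API (task names assumed from its standard registry):
```python
# pip install lm-eval  (EleutherAI lm-evaluation-harness; v0.4 API assumed)
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=h2oai/h2o-danube-1.8b-chat,dtype=bfloat16",
    tasks=["arc_easy", "arc_challenge", "boolq", "hellaswag",
           "openbookqa", "piqa", "triviaqa", "winogrande"],
    num_fewshot=0,
)
print(results["results"])
```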
## Quantization and sharding
You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is also possible by setting `device_map="auto"`.
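For example, a 4-bit load sharded across available GPUs could look like this (a minimal sketch; requires the `bitsandbytes` and `accelerate` packages and a CUDA GPU):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in 4-bit precision, sharded across available GPUs
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube-1.8b-chat",
    load_in_4bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube-1.8b-chat")
```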
## Model Architecture Details
```
MistralForCausalLM(
  (model): MistralModel(
    (embed_tokens): Embedding(32000, 2560, padding_idx=0)
    (layers): ModuleList(
      (0-23): 24 x MistralDecoderLayer(
        (self_attn): MistralAttention(
          (q_proj): Linear(in_features=2560, out_features=2560, bias=False)
          (k_proj): Linear(in_features=2560, out_features=640, bias=False)
          (v_proj): Linear(in_features=2560, out_features=640, bias=False)
          (o_proj): Linear(in_features=2560, out_features=2560, bias=False)
          (rotary_emb): MistralRotaryEmbedding()
        )
        (mlp): MistralMLP(
          (gate_proj): Linear(in_features=2560, out_features=6912, bias=False)
          (up_proj): Linear(in_features=2560, out_features=6912, bias=False)
          (down_proj): Linear(in_features=6912, out_features=2560, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): MistralRMSNorm()
        (post_attention_layernorm): MistralRMSNorm()
      )
    )
    (norm): MistralRMSNorm()
  )
  (lm_head): Linear(in_features=2560, out_features=32000, bias=False)
)
```
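Note that the k/v projection widths follow from grouped-query attention: with n_embd = 2560 and n_heads = 32, each head has dimension 2560 / 32 = 80, so the 8 query groups give k_proj and v_proj outputs of 8 × 80 = 640 features.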
## Model Configuration
This model was trained using H2O LLM Studio with the configuration given in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.