---
base_model: apple/DCLM-7B
datasets:
- HuggingFaceH4/ultrachat_200k
- teknium/OpenHermes-2.5
- princeton-nlp/gemma2-ultrafeedback-armorm
license: apple-ascl
tags:
- text
---

# DCLM-7B-Chat

This is a fine-tuned version of the DCLM-7B baseline model trained for chat
completions.

## Quick start

To use the model, `open_lm` must first be installed:

```shell
pip install git+https://github.com/mlfoundations/open_lm.git
```

Then simply load the model and generate responses:

```python
from open_lm.hf import *
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)

model = AutoModelForCausalLM.from_pretrained("mathewhe/DCLM-7B-Chat")
tokenizer = AutoTokenizer.from_pretrained("mathewhe/DCLM-7B-Chat")

messages = [
    {"role": "user", "content": "What is an LLM?"},
]

# return_dict=True and return_tensors="pt" are needed so the result
# can be passed directly to model.generate(**inputs)
inputs = tokenizer.apply_chat_template(
    messages, return_dict=True, return_tensors="pt"
)

print(tokenizer.decode(model.generate(**inputs)[0]))
```
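
By default, `generate()` uses greedy decoding with a short maximum length. You
can pass standard `transformers` generation arguments to change this; the
values below are illustrative, not tuned recommendations for this model:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=256,  # cap the length of the response
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative values, not tuned for this model
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```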

Alternatively, copy the included `chat_class.py` module into your local
directory and just import the `Chat` class:

```python
from chat_class import Chat
chat = Chat()  # default args: Chat("mathewhe/DCLM-7B-Chat", device="cuda")

# for one-off instructions
instruction = "Write a list of ingredients for banana pudding."
print(chat.instruct(instruction))

# for multi-turn chat
response1 = chat.message("Who was Stan Lee?")
response2 = chat.message("What was his wife's name?")

# to reset the chat
chat.reset()
```
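
If you would rather not copy the module, a small wrapper in the same spirit is
easy to write. The sketch below is an assumption-laden stand-in, not the
bundled `Chat` class; in particular, the generation settings and the stripping
of template markers are guesses:

```python
from open_lm.hf import *
from transformers import AutoModelForCausalLM, AutoTokenizer


class SimpleChat:
    """Minimal multi-turn chat wrapper (a sketch, not the bundled Chat class)."""

    def __init__(self, model_id="mathewhe/DCLM-7B-Chat", device="cuda"):
        self.model = AutoModelForCausalLM.from_pretrained(model_id).to(device)
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        self.device = device
        self.messages = []

    def message(self, text):
        # Append the user turn, format the full history, and generate a reply.
        self.messages.append({"role": "user", "content": text})
        inputs = self.tokenizer.apply_chat_template(
            self.messages, return_dict=True, return_tensors="pt"
        ).to(self.device)
        output = self.model.generate(**inputs, max_new_tokens=256)
        # Decode only the newly generated tokens; the raw text may still
        # include template markers such as [ASST] and [/ASST].
        reply = self.tokenizer.decode(
            output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        reply = reply.replace("[ASST]", "").replace("[/ASST]", "").strip()
        self.messages.append({"role": "assistant", "content": reply})
        return reply

    def instruct(self, text):
        # One-off instruction: generate a reply without keeping any history.
        saved, self.messages = self.messages, []
        try:
            return self.message(text)
        finally:
            self.messages = saved

    def reset(self):
        self.messages = []
```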

## Chat template

This model uses the following chat template and does not support a separate
system prompt:

```
<|endoftext|>[INST] <user-message> [/INST][ASST] <llm-response> [/ASST]<|endoftext|>
```

The included tokenizer will correctly format messages, so you should not have
to manually format the input text.

Instead, use the tokenizer's `apply_chat_template()` method on a list of
messages. Each message should be a dict with two keys:

- "role": Either "user" or "assistant".
- "content": The message to include.

For example:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mathewhe/DCLM-7B-Chat")

messages = [
    {"role": "user", "content": "Solve for x: 3x=4"},
    {"role": "assistant", "content": "3x=4\n(3x)/3=(4)/3\nx=4/3"},
    {"role": "user", "content": "Please explain your work."},
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
```

outputs

```
<|endoftext|>[INST] Solve for x: 3x=4 [/INST][ASST] 3x=4
(3x)/3=(4)/3
x=4/3 [/ASST]<|endoftext|><|endoftext|>[INST] Please explain your work. [/INST]
```
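
If you do need to build prompts outside of `transformers`, the template above
can be reproduced with plain string formatting. The helper below is a
hypothetical sketch inferred from the template on this card, not a function
from this repo:

```python
def format_dclm_chat(messages):
    """Format a message list per the chat template above (inferred sketch)."""
    parts = []
    for msg in messages:
        if msg["role"] == "user":
            # Each exchange opens with <|endoftext|> and an [INST] block.
            parts.append(f"<|endoftext|>[INST] {msg['content']} [/INST]")
        else:
            # Assistant turns are wrapped in [ASST] ... [/ASST]<|endoftext|>.
            parts.append(f"[ASST] {msg['content']} [/ASST]<|endoftext|>")
    return "".join(parts)
```

Calling `format_dclm_chat(messages)` on the message list above reproduces the
output shown.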

See the example code in the included `chat_class.py` module for more details.