---
base_model: apple/DCLM-7B
datasets:
- HuggingFaceH4/ultrachat_200k
- teknium/OpenHermes-2.5
- princeton-nlp/gemma2-ultrafeedback-armorm
license: apple-ascl
tags:
- text
---

# DCLM-7B-Chat

This model is a fine-tuned version of the [DCLM-7B baseline model](https://huggingface.co/apple/DCLM-7B), trained for chat completions.

## Quick start

To use the model, `open_lm` must first be installed:
```shell
pip install git+https://github.com/mlfoundations/open_lm.git
```

Then simply load the model and generate responses:
```python
from open_lm.hf import *  # registers the open_lm model classes with transformers
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)


model = AutoModelForCausalLM.from_pretrained("mathewhe/DCLM-7B-Chat")
tokenizer = AutoTokenizer.from_pretrained("mathewhe/DCLM-7B-Chat")

messages = [
    {"role": "user", "content": "What is an LLM?"},
]

inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", return_dict=True
)

print(tokenizer.decode(model.generate(**inputs)[0]))
```
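
By default, `model.generate()` only emits a short continuation. The sketch below, continuing from the snippet above, passes explicit generation settings; the parameter values are illustrative assumptions, not tuned recommendations:
```python
# Continues from the snippet above; parameter values are illustrative.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,                   # cap the response length
    do_sample=False,                      # greedy decoding for reproducibility
    eos_token_id=tokenizer.eos_token_id,  # stop at <|endoftext|>
)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)
```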

Alternatively, copy the included `chat_class.py` module into your local
directory and just import the `Chat` class:
```python
from chat_class import Chat
chat = Chat()  # default args: Chat("mathewhe/DCLM-7B-Chat", device="cuda")

# for one-off instructions
instruction = "Write a list of ingredients for banana pudding."
print(chat.instruct(instruction))

# for multi-turn chat
response1 = chat.message("Who was Stan Lee?")
response2 = chat.message("What was his wife's name?")

# to reset the chat
chat.reset()
```
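
If you prefer not to copy the file, the sketch below shows what a minimal wrapper with this interface could look like. It is an illustration based on the usage above, not the actual `chat_class.py`; the generation settings are placeholder assumptions.
```python
# Illustrative sketch only -- the actual implementation ships as
# chat_class.py in this repo; generation settings here are assumptions.
from open_lm.hf import *  # registers the open_lm model classes
from transformers import AutoModelForCausalLM, AutoTokenizer


class Chat:
    def __init__(self, model_id="mathewhe/DCLM-7B-Chat", device="cuda"):
        self.model = AutoModelForCausalLM.from_pretrained(model_id).to(device)
        self.tokenizer = AutoTokenizer.from_pretrained(model_id)
        self.device = device
        self.history = []

    def message(self, text):
        # Append the user turn, generate a reply, and keep both in history.
        self.history.append({"role": "user", "content": text})
        inputs = self.tokenizer.apply_chat_template(
            self.history, return_tensors="pt", return_dict=True
        ).to(self.device)
        output = self.model.generate(**inputs, max_new_tokens=256)
        reply = self.tokenizer.decode(
            output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        self.history.append({"role": "assistant", "content": reply})
        return reply

    def instruct(self, text):
        # One-off instruction: generate a reply without touching the
        # ongoing conversation history.
        saved, self.history = self.history, []
        try:
            return self.message(text)
        finally:
            self.history = saved

    def reset(self):
        self.history = []
```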

## Chat template

This model uses the following chat template and does not support a separate
system prompt:
```
<|endoftext|>[INST] <user-message> [/INST][ASST] <llm-response> [/ASST]<|endoftext|>
```

The included tokenizer will correctly format messages, so you should not have
to manually format the input text.

Instead, use the tokenizer's `apply_chat_template()` method on a list of
messages.
Each message should be a dict with two keys:
- "role": Either "user" or "assistant".
- "content": The message to include.

For example:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mathewhe/DCLM-7B-Chat")

messages = [
    {"role": "user", "content": "Solve for x: 3x=4"},
    {"role": "assistant", "content": "3x=4\n(3x)/3=(4)/3\nx=4/3"},
    {"role": "user", "content": "Please explain your work."},
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
```
outputs
```
<|endoftext|>[INST] Solve for x: 3x=4 [/INST][ASST] 3x=4
(3x)/3=(4)/3
x=4/3 [/ASST]<|endoftext|><|endoftext|>[INST] Please explain your work. [/INST]
```

See the example code in the included `chat_class.py` module for more details.