Input: {system_prompt}\n{context}: {text}
Label: {response}

**Example**:

```py
system_prompt = """# StableLM Tuned (Alpha version)
- StableLM is a helpful and chatty open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
"""

context = "It's not right to think black people deserve to be hit"
text = "You're right, it isn't funny. Finding enjoyment in other people's pains isn't funny."
response = "I am glad that you agree. Joking about abusing black people can quickly get you marked as a racist."

prompt = f"{system_prompt}\n{context}: {text}"
label = f"{response}"
```

Make sure to tokenize the inputs using the original tokenizer before passing them to the model. Use the official model's template for system prompt and user prompt format.
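As a sketch of what "the official model's template" can look like: StableLM-Tuned-Alpha wraps the system and user turns in special tokens. The `<|SYSTEM|>`/`<|USER|>`/`<|ASSISTANT|>` markers below are taken from the stabilityai/stablelm-tuned-alpha model card, so confirm them against the model you are actually using; the resulting string would then be tokenized with that model's own tokenizer.

```python
# Minimal sketch: wrapping the example above in StableLM-Tuned-style special
# tokens. The <|SYSTEM|>/<|USER|>/<|ASSISTANT|> markers are an assumption
# based on the stablelm-tuned-alpha model card; verify them for your model.
system_prompt = (
    "# StableLM Tuned (Alpha version)\n"
    "- StableLM is a helpful and chatty open-source AI language model "
    "developed by StabilityAI.\n"
)
context = "It's not right to think black people deserve to be hit"
text = ("You're right, it isn't funny. Finding enjoyment in other people's "
        "pains isn't funny.")

# Assemble the templated prompt; the assistant marker is left open so the
# model generates the response after it.
prompt = f"<|SYSTEM|>{system_prompt}<|USER|>{context}: {text}<|ASSISTANT|>"
print(prompt)
```

The string would then be passed through the original tokenizer (e.g. `AutoTokenizer.from_pretrained(...)` from the `transformers` library) rather than fed to the model as raw text.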
### Performance