Houxing committed
Commit e939100
1 Parent(s): 436ce8d

add chat template

Files changed (2)
  1. README.md +17 -4
  2. tokenizer_config.json +2 -1
README.md CHANGED
@@ -45,12 +45,25 @@ ReflectionCoder is a novel approach that effectively leverages reflection sequen
 Following chat templates of most models, we use two special tokens to wrap the message of user and assistant, *i.e.*, ``<|user|>``, ``<|assistant|>``, and ``<|endofmessage|>``. Furthermore, we use two special tokens to wrap the content of different blocks, *i.e.*, ``<|text|>`` and ``<|endofblock|>``. You can use the following template to prompt our ReflectionCoder.
 
 ```python
-<|user|><|text|>
-Your Instruction
-<|endofblock|><|endofmessage|><|assistant|>
+import torch
+from transformers import pipeline
+
+chat = [
+    {"role": "user", "content": "<Your code instruction here>"}
+]
+
+generator = pipeline(
+    model="SenseLLM/ReflectionCoder-CL-34B",
+    task="text-generation",
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+)
+
+result = generator(chat, max_length=128, num_return_sequences=1)
+
+print(result)
 ```
 
-#### Inference Code
 Please refer to our [GitHub Repo](https://github.com/SenseLLM/ReflectionCoder) for more technical details.
 
 ## Citation
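
The new README snippet drives generation through the `transformers` pipeline, which picks up the chat template added below in `tokenizer_config.json`. As a minimal sketch of what that template renders, one can apply it without generating; this assumes a recent `transformers` release with `apply_chat_template`, and the instruction string is a made-up placeholder:

```python
# Minimal sketch: render the new chat template without running generation.
# Assumes a recent transformers release with apply_chat_template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SenseLLM/ReflectionCoder-CL-34B")

chat = [
    {"role": "user", "content": "Write a function that checks if a number is prime."}
]

# tokenize=False returns the rendered prompt string; add_generation_prompt=True
# appends the trailing <|assistant|> token via the template's final branch.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
print(prompt)
# Per the template in this commit, the output should have the shape:
# <|user|><|text|>...<|endofblock|><|endofmessage|><|assistant|>
```
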
tokenizer_config.json CHANGED
@@ -142,5 +142,6 @@
   "suffix_token": "▁<SUF>",
   "tokenizer_class": "CodeLlamaTokenizer",
   "unk_token": "<unk>",
-  "use_default_system_prompt": false
+  "use_default_system_prompt": false,
+  "chat_template": "{% for message in messages %}{% if message['role'] == 'user' %}{{ '<|user|>' }}{% elif message['role'] == 'system' %}{{ '<|system|>' }}{% elif message['role'] == 'assistant' %}{{ '<|assistant|>' }}{% endif %}{% if message['content'] is string %}{{ '<|text|>' + message['content'] + '<|endofblock|>' }}{% elif message['content'] is sequence %}{% for block in message['content'] %}{% if block['type'] == 'text' %}{{ '<|text|>' }}{% elif block['type'] == 'code' %}{{ '<|code|>' }}{% elif block['type'] == 'execution' %}{{ '<|execution|>' }}{% endif %}{{ block['content'] + '<|endofblock|>' }}{% endfor %}{% endif %}{{ '<|endofmessage|>' }}{% endfor %}{% if add_generation_prompt %}{{ '<|assistant|>' }}{% endif %}"
 }
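
Beyond plain strings, the Jinja template above has a second branch for message `content` given as a list of typed blocks (`text`, `code`, `execution`). A minimal sketch of that branch, assuming `apply_chat_template` hands the message dicts to the template unmodified; the block contents here are hypothetical illustrations, not model output:

```python
# Minimal sketch of the template's structured branch: content as typed blocks.
# Block contents are hypothetical illustrations, not model output.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SenseLLM/ReflectionCoder-CL-34B")

chat = [
    {"role": "user", "content": "Sum the integers from 1 to 10."},
    {
        "role": "assistant",
        "content": [
            {"type": "text", "content": "Use the closed form n * (n + 1) / 2."},
            {"type": "code", "content": "print(10 * 11 // 2)"},
            {"type": "execution", "content": "55"},
        ],
    },
]

# Each block renders as <|text|>/<|code|>/<|execution|> ... <|endofblock|>,
# and every message is closed with <|endofmessage|>.
print(tokenizer.apply_chat_template(chat, tokenize=False))
```
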