Xenova (HF staff) committed
Commit feba175
Parent: f71788f

Only print generated text (#4)


- Only print generated text (c1ae4ba1dda09943e51fe57d719fc86164f558c4)

Files changed (1)
  1. README.md +2 -2
README.md CHANGED

@@ -69,7 +69,7 @@ inputs = tokenizer.apply_chat_template(
 ).to("cuda")
 
 outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
-print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
+print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
 ```
 
 ### AutoAWQ
@@ -109,7 +109,7 @@ inputs = tokenizer.apply_chat_template(
 ).to("cuda")
 
 outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
-print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
+print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
 ```
 
 The AutoAWQ script has been adapted from [`AutoAWQ/examples/generate.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py).
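The slicing added in the new lines works because `model.generate` returns each sequence as the prompt token IDs followed by the newly generated ones, so dropping the first `inputs['input_ids'].shape[1]` columns leaves only the generated portion, and `[0]` selects the single decoded string from the batch. A minimal sketch of the indexing, using plain Python lists in place of the tensors `generate` actually returns (the token IDs below are made up for illustration):

```python
# Hypothetical token IDs; lists stand in for the (batch, seq_len) tensors.
input_ids = [[101, 7592, 2088, 102]]                          # 4 prompt tokens
outputs = [[101, 7592, 2088, 102, 2023, 2003, 1037, 3231]]    # prompt + 4 new tokens

# Equivalent of outputs[:, inputs['input_ids'].shape[1]:] on a tensor:
prompt_len = len(input_ids[0])
generated = [row[prompt_len:] for row in outputs]

print(generated[0])  # only the newly generated token IDs, not the echoed prompt
```

Without the slice, `batch_decode` would echo the chat-templated prompt back along with the model's reply, which is what this commit removes.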