ilu000 committed
Commit 1bb5c8a
1 Parent(s): 0f283d8

Update README.md

Files changed (1):
  1. README.md +10 -36

README.md CHANGED
@@ -33,42 +33,7 @@ pip install torch==2.0.0
 pip install einops==0.6.1
 ```
 
-```python
-import torch
-from transformers import pipeline
-
-generate_text = pipeline(
-    model="h2oai/h2ogpt-gm-oasst1-multilang-2048-falcon-7b",
-    torch_dtype=torch.bfloat16,
-    trust_remote_code=True,
-    use_fast=False,
-    device_map={"": "cuda:0"},
-)
-
-res = generate_text(
-    "Why is drinking water so healthy?",
-    min_new_tokens=2,
-    max_new_tokens=1024,
-    do_sample=False,
-    num_beams=1,
-    temperature=float(0.3),
-    repetition_penalty=float(1.2),
-    renormalize_logits=True
-)
-print(res[0]["generated_text"])
-```
-
-You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
-
-```python
-print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
-```
-
-```bash
-<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
-```
-
-Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
-
+Download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:
 
 
 ```python
@@ -103,6 +68,15 @@ res = generate_text(
 print(res[0]["generated_text"])
 ```
 
+You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
+
+```python
+print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
+```
+
+```bash
+<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
+```
 
 You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
 
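The `<|prompt|>…<|endoftext|><|answer|>` string added in this diff shows the prompt template the pipeline's preprocessing step produces. As a minimal sketch of that formatting (the `build_prompt` helper below is hypothetical and not part of h2oai_pipeline.py; the real pipeline applies the template internally):

```python
# Hypothetical helper mirroring the prompt template shown in the README diff;
# the actual formatting happens inside the model's pipeline preprocessing.
def build_prompt(user_text: str) -> str:
    # Wrap the user input in the model's prompt/answer special tokens.
    return f"<|prompt|>{user_text}<|endoftext|><|answer|>"

print(build_prompt("Why is drinking water so healthy?"))
# → <|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```

This matches the sample output shown in the `bash` block above, which is what `generate_text.preprocess(...)["prompt_text"]` returns before tokenization.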