RealFalconsAI committed on
Commit c2a4319
1 Parent(s): 874dece

Update README.md

Files changed (1): README.md +36 -5
README.md CHANGED
@@ -78,16 +78,47 @@ The model is intended for categorizing the arc of conversation texts. It can be

To use this model for inference, you need to load the fine-tuned model and tokenizer. Here is an example of how to do this using the `transformers` library:

+ Running on CPU
```python
+ from transformers import T5Tokenizer, T5ForConditionalGeneration
+
+ tokenizer = T5Tokenizer.from_pretrained('Falconsai/arc_of_conversation')
+ model = T5ForConditionalGeneration.from_pretrained('Falconsai/arc_of_conversation')
+
+ input_text = "Your conversation Here"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+ ```
+
+ Running on GPU
+ ```python
+ # pip install accelerate
+ from transformers import T5Tokenizer, T5ForConditionalGeneration
+
+ tokenizer = T5Tokenizer.from_pretrained('Falconsai/arc_of_conversation')
+ model = T5ForConditionalGeneration.from_pretrained('Falconsai/arc_of_conversation', device_map="auto")
+
+ input_text = "Your conversation Here"
+ input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
+
+ outputs = model.generate(input_ids)
+ print(tokenizer.decode(outputs[0]))
+
+ ```
+
+
+ Running Pipeline
+ ```python
+ # Use a pipeline as a high-level helper
from transformers import pipeline

- # Generate the Arc of the conversation
- convo2 = """ Your conversation text Here"""
+ pipe = pipeline("summarization", model="Falconsai/arc_of_conversation")

- summarizer = pipeline("summarization", model='Falconsai/arc_of_conversation', tokenizer='Falconsai/arc_of_conversation')
- result = summarizer(convo2, max_length=2048, min_length=1024, do_sample=False)
```

+
## Training

The training process involves the following steps:
@@ -125,7 +156,7 @@ If you use this model in your research, please cite it as follows:
```
@misc{conversation_arc_predictor,
  author = {Michael Stattelman},
- title = {Conversation Arc Predictor},
+ title = {Arc of the Conversation Generator},
  year = {2024},
  publisher = {Falcons.ai},
}
```
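
A note on the added CPU and GPU snippets: `model.generate(input_ids)` is called without any length arguments, so generation stops at the library's default limit and can truncate a long conversation arc. Below is a minimal sketch of one way to allow longer outputs; the `max_new_tokens` value is an assumption for illustration, not part of this commit:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('Falconsai/arc_of_conversation')
model = T5ForConditionalGeneration.from_pretrained('Falconsai/arc_of_conversation')

input_text = "Your conversation Here"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

# max_new_tokens is an assumed budget; tune it to the arc length you expect.
outputs = model.generate(input_ids, max_new_tokens=256)
# skip_special_tokens drops markers such as <pad> and </s> from the decoded text.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```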
 
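
The added "Running Pipeline" snippet constructs `pipe` but never invokes it. Here is a minimal usage sketch, reusing the length settings from the example this commit removes (max_length=2048, min_length=1024); treat those bounds as starting points rather than requirements:

```python
from transformers import pipeline

# Use a pipeline as a high-level helper
pipe = pipeline("summarization", model="Falconsai/arc_of_conversation")

convo = """ Your conversation text Here"""
# Length bounds mirror the removed example; do_sample=False keeps output deterministic.
result = pipe(convo, max_length=2048, min_length=1024, do_sample=False)
print(result[0]["summary_text"])
```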