Teja-Gollapudi committed
Commit ac15f12
Parent(s): 4ad9cc1

Update README.md
README.md CHANGED
@@ -20,26 +20,29 @@ pipeline_tag: text2text-generation
 
 
 # Intended Use
+The model is intended for <b>research purposes only.</b>
+
+While the base Flan-T5 model is open-sourced under Apache 2.0, the licensing is restricted by the Alpaca dataset created using text-DaVinci-3 which prevents its usage in commercial settings.
 
 
 # How to Use
 
-
+
 from transformers import pipeline
 
 
 
-
+
 
 # Training Details
 
 The model was trained on 3xV100 GPUs
 
-Hyperparameters:
-learning_rate = 5e-5
-batch_size = 128
-epochs = 3
+* Hyperparameters:
+* learning_rate = 5e-5
+* batch_size = 128
+* epochs = 3
 
 
 ```
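The "How to Use" code block survives in this diff only as far as the `pipeline` import. A minimal sketch of how such a `text2text-generation` pipeline is typically invoked; since the card's own repository id is not shown in this diff, `google/flan-t5-small` is used purely as a stand-in checkpoint with the same architecture:

```python
from transformers import pipeline

# Stand-in checkpoint: this diff does not show the card's own repo id,
# so a small public Flan-T5 checkpoint is substituted for illustration.
generator = pipeline("text2text-generation", model="google/flan-t5-small")

out = generator("Translate English to German: How old are you?", max_length=64)
print(out[0]["generated_text"])
```

The pipeline returns a list of dicts, each with a `generated_text` key; for an instruction-tuned variant the input would be an instruction prompt rather than a translation request.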
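The hyperparameters listed under "Training Details" imply a rough optimizer-step budget. A quick sanity-check calculation, assuming the ~52,002-example Alpaca dataset (the dataset size is not stated in this diff) and that `batch_size = 128` is the effective global batch:

```python
# Rough step-count arithmetic from the card's listed hyperparameters.
dataset_size = 52_002   # assumed Alpaca dataset size; not stated in the diff
batch_size = 128        # from the card's hyperparameters
epochs = 3

# Ceiling division: the final, partial batch of each epoch still counts as a step.
steps_per_epoch = -(-dataset_size // batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 407 steps per epoch, 1221 total
```

If 128 were instead the per-device batch across the 3 V100s, the effective batch and step count would differ accordingly.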