Update ReadMe
README.md
CHANGED
@@ -15,16 +15,26 @@ widget:
 
 ## Team Members
 - FirstName LastName ([hf_user](https://huggingface.co/hf_user))
-
+- [Mehrdad Farahani](huggingface.co/m3hrdadfi)
+- [Saied Alimoradi](https://discuss.huggingface.co/u/saied)
+- [M. Reza Zerehpoosh](huggingface.co/ironcladgeek)
+- [Hooman Sedghamiz](https://discuss.huggingface.co/u/hooman650)
+- [Mazeyar Moeini Feizabadi](https://discuss.huggingface.co/u/mazy1998)
 
 ## Dataset
-
-... SOON
+We used [Oscar](https://huggingface.co/datasets/oscar) dataset, which is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus.
 
 ## How To Use
-
-
-
+You can use this model directly with a pipeline for text generation.
+
+```python
+from transformers import pipeline, AutoTokenizer, GPT2LMHeadModel
+tokenizer = AutoTokenizer.from_pretrained('flax-community/gpt2-medium-persian')
+model = GPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-persian')
+generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100})
+generated_text = generator('در یک اتفاق شگفت انگیز، پژوهشگران')
+```
+For using Tensorflow import TFGPT2LMHeadModel instead of GPT2LMHeadModel.
 ## Demo
 
 ... SOON
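As an illustration of the TensorFlow note in the added "How To Use" section, here is a minimal sketch of the TF variant. It assumes the `flax-community/gpt2-medium-persian` checkpoint can be loaded into TensorFlow (if only Flax/PyTorch weights are published, `from_pt=True` can convert from the PyTorch weights); the only change from the committed snippet is swapping `GPT2LMHeadModel` for `TFGPT2LMHeadModel`.

```python
# Sketch of the TensorFlow variant mentioned in the README diff above.
# Assumes TF-loadable weights exist for this checkpoint; otherwise pass
# from_pt=True to convert from the PyTorch weights.
from transformers import pipeline, AutoTokenizer, TFGPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained('flax-community/gpt2-medium-persian')
model = TFGPT2LMHeadModel.from_pretrained('flax-community/gpt2-medium-persian')

generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
generated_text = generator('در یک اتفاق شگفت انگیز، پژوهشگران', max_length=100)
```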