# SEGA-large model

SEGA: SkEtch-based Generative Augmentation

SEGA is a general text augmentation model that can be used for data augmentation in a wide range of NLP tasks, including sentiment analysis, topic classification, NER, and QA. SEGA uses an encoder-decoder structure (based on the BART architecture) and is pre-trained on the C4-realnewslike corpus.

- Paper: [this paper](to_be_added)
- GitHub: [this repository](to_be_added)

## Model description

## Model variations

| Model                  | #params | Language |
|------------------------|---------|----------|
| `sega-large`           | xM      | English  |
| `sega-base`            | xM      | English  |
| `sega-small`           | xM      | English  |
| `sega-large-chinese`   | xM      | Chinese  |
| `sega-base-chinese`    | xM      | Chinese  |
| `sega-small-chinese`   | xM      | Chinese  |

## Intended uses & limitations

### How to use
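
Since the checkpoint links above are still placeholders, the following is only a minimal usage sketch with 🤗 Transformers. It assumes the model is released on the Hugging Face Hub and follows the standard BART-style seq2seq interface; the Hub ID (`sega-large`) and the sketch format (keywords joined by the tokenizer's mask token) are illustrative assumptions, not details confirmed by this card.

```python
# Hedged usage sketch: the Hub ID and sketch format are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "sega-large"  # placeholder: substitute the actual Hub ID once public

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# A "sketch": a handful of keywords/fragments joined by the mask token.
# The encoder-decoder model expands it into fluent text, which can then be
# used as an augmented training example.
sketch = (
    f"{tokenizer.mask_token} machine learning {tokenizer.mask_token} "
    f"data augmentation {tokenizer.mask_token}"
)

inputs = tokenizer(sketch, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=64,
    do_sample=True,        # sampling yields diverse augmentations
    top_p=0.95,
    num_return_sequences=3,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

Sampling several sequences per sketch is one natural way to turn a single labelled example into multiple augmented ones.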

### Limitations and bias

## Training data

## Training procedure

### Preprocessing

### Pretraining

## Evaluation results

### BibTeX entry and citation info