Upload folder using huggingface_hub
This view is limited to 50 files because it contains too many changes. See the raw diff for the complete change set.

- __pycache__/config.cpython-39.pyc +0 -0
- __pycache__/emo_gen.cpython-39.pyc +0 -0
- __pycache__/infer.cpython-39.pyc +0 -0
- __pycache__/models.cpython-39.pyc +0 -0
- __pycache__/presets.cpython-39.pyc +0 -0
- bert/deberta-v2-large-japanese-char-wwm/.gitattributes +34 -0
- bert/deberta-v2-large-japanese-char-wwm/README.md +89 -0
- bert/deberta-v2-large-japanese-char-wwm/config.json +37 -0
- bert/deberta-v2-large-japanese-char-wwm/pytorch_model.bin +3 -0
- bert/deberta-v2-large-japanese-char-wwm/special_tokens_map.json +7 -0
- bert/deberta-v2-large-japanese-char-wwm/tokenizer_config.json +19 -0
- bert/deberta-v2-large-japanese-char-wwm/vocab.txt +0 -0
- config.py +6 -0
- config.yml +17 -3
- config_bk.yml +160 -0
- emo_gen.py +169 -0
- emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/.gitattributes +28 -0
- emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/LICENSE +437 -0
- emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/README.md +127 -0
- emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/config.json +122 -0
- emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/preprocessor_config.json +9 -0
- emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/pytorch_model.bin +3 -0
- emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/vocab.json +1 -0
- infer.py +152 -13
- models.py +48 -6
- oldVersion/V101/__init__.py +2 -0
- oldVersion/V101/__pycache__/__init__.cpython-39.pyc +0 -0
- oldVersion/V101/__pycache__/models.cpython-39.pyc +0 -0
- oldVersion/V101/text/__pycache__/__init__.cpython-39.pyc +0 -0
- oldVersion/V101/text/__pycache__/chinese.cpython-39.pyc +0 -0
- oldVersion/V101/text/__pycache__/cleaner.cpython-39.pyc +0 -0
- oldVersion/V101/text/__pycache__/symbols.cpython-39.pyc +0 -0
- oldVersion/V101/text/__pycache__/tone_sandhi.cpython-39.pyc +0 -0
- oldVersion/V110/__pycache__/__init__.cpython-39.pyc +0 -0
- oldVersion/V110/__pycache__/models.cpython-39.pyc +0 -0
- oldVersion/V110/text/__pycache__/__init__.cpython-39.pyc +0 -0
- oldVersion/V110/text/__pycache__/chinese.cpython-39.pyc +0 -0
- oldVersion/V110/text/__pycache__/cleaner.cpython-39.pyc +0 -0
- oldVersion/V110/text/__pycache__/japanese.cpython-39.pyc +0 -0
- oldVersion/V110/text/__pycache__/symbols.cpython-39.pyc +0 -0
- oldVersion/V110/text/__pycache__/tone_sandhi.cpython-39.pyc +0 -0
- oldVersion/V111/__pycache__/__init__.cpython-39.pyc +0 -0
- oldVersion/V111/__pycache__/models.cpython-39.pyc +0 -0
- oldVersion/V111/text/__pycache__/__init__.cpython-39.pyc +0 -0
- oldVersion/V111/text/__pycache__/chinese.cpython-39.pyc +0 -0
- oldVersion/V111/text/__pycache__/cleaner.cpython-39.pyc +0 -0
- oldVersion/V111/text/__pycache__/japanese.cpython-39.pyc +0 -0
- oldVersion/V111/text/__pycache__/symbols.cpython-39.pyc +0 -0
- oldVersion/V111/text/__pycache__/tone_sandhi.cpython-39.pyc +0 -0
- oldVersion/V111/text/fix/__pycache__/__init__.cpython-39.pyc +0 -0
__pycache__/config.cpython-39.pyc
CHANGED
Binary files a/__pycache__/config.cpython-39.pyc and b/__pycache__/config.cpython-39.pyc differ

__pycache__/emo_gen.cpython-39.pyc
ADDED
Binary file (4.99 kB)

__pycache__/infer.cpython-39.pyc
CHANGED
Binary files a/__pycache__/infer.cpython-39.pyc and b/__pycache__/infer.cpython-39.pyc differ

__pycache__/models.cpython-39.pyc
CHANGED
Binary files a/__pycache__/models.cpython-39.pyc and b/__pycache__/models.cpython-39.pyc differ

__pycache__/presets.cpython-39.pyc
CHANGED
Binary files a/__pycache__/presets.cpython-39.pyc and b/__pycache__/presets.cpython-39.pyc differ

bert/deberta-v2-large-japanese-char-wwm/.gitattributes
ADDED
@@ -0,0 +1,34 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ckpt filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.mlmodel filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+*.safetensors filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text

bert/deberta-v2-large-japanese-char-wwm/README.md
ADDED
@@ -0,0 +1,89 @@
+---
+language: ja
+license: cc-by-sa-4.0
+library_name: transformers
+tags:
+- deberta
+- deberta-v2
+- fill-mask
+- character
+- wwm
+datasets:
+- wikipedia
+- cc100
+- oscar
+metrics:
+- accuracy
+mask_token: "[MASK]"
+widget:
+- text: "京都大学で自然言語処理を[MASK][MASK]する。"
+---
+
+# Model Card for Japanese character-level DeBERTa V2 large
+
+## Model description
+
+This is a Japanese DeBERTa V2 large model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
+This model is trained with character-level tokenization and whole word masking.
+
+## How to use
+
+You can use this model for masked language modeling as follows:
+
+```python
+from transformers import AutoTokenizer, AutoModelForMaskedLM
+tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-large-japanese-char-wwm')
+model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-large-japanese-char-wwm')
+
+sentence = '京都大学で自然言語処理を[MASK][MASK]する。'
+encoding = tokenizer(sentence, return_tensors='pt')
+...
+```
+
+You can also fine-tune this model on downstream tasks.
+
+## Tokenization
+
+There is no need to tokenize texts in advance, and you can give raw texts to the tokenizer.
+The texts are tokenized into character-level tokens by [sentencepiece](https://github.com/google/sentencepiece).
+
+## Training data
+
+We used the following corpora for pre-training:
+
+- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
+- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
+- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
+
+Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
+Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
+
+## Training procedure
+
+We first segmented texts in the corpora into words using [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) for whole word masking.
+Then, we built a sentencepiece model with 22,012 tokens including all characters that appear in the training corpus.
+
+We tokenized raw corpora into character-level subwords using the sentencepiece model and trained the Japanese DeBERTa model using the [transformers](https://github.com/huggingface/transformers) library.
+The training took 26 days using 16 NVIDIA A100-SXM4-40GB GPUs.
+
+The following hyperparameters were used during pre-training:
+
+- learning_rate: 1e-4
+- per_device_train_batch_size: 26
+- distributed_type: multi-GPU
+- num_devices: 16
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 3,328
+- max_seq_length: 512
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
+- lr_scheduler_type: linear schedule with warmup (lr = 0 at 300k steps)
+- training_steps: 260,000
+- warmup_steps: 10,000
+
+The accuracy of the trained model on the masked language modeling task was 0.795.
+The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
+
+## Acknowledgments
+
+This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models".
+For training models, we used the mdx: a platform for the data-driven future.

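A minimal sketch of the character-level tokenization described in the model card above, assuming the hub ID ku-nlp/deberta-v2-large-japanese-char-wwm resolves to the same files bundled under bert/deberta-v2-large-japanese-char-wwm; the output in the comment is an expectation, not a quoted result:

```python
from transformers import AutoTokenizer

# Character-level tokenizer from the model card above; raw text goes in directly,
# so no Juman++ pre-segmentation is needed at inference time.
tokenizer = AutoTokenizer.from_pretrained("ku-nlp/deberta-v2-large-japanese-char-wwm")

tokens = tokenizer.tokenize("自然言語処理")
print(tokens)  # expected: one token per character, e.g. ['自', '然', '言', '語', '処', '理']
```
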
bert/deberta-v2-large-japanese-char-wwm/config.json
ADDED
@@ -0,0 +1,37 @@
+{
+  "architectures": [
+    "DebertaV2ForMaskedLM"
+  ],
+  "attention_head_size": 64,
+  "attention_probs_dropout_prob": 0.1,
+  "conv_act": "gelu",
+  "conv_kernel_size": 3,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 1024,
+  "initializer_range": 0.02,
+  "intermediate_size": 4096,
+  "layer_norm_eps": 1e-07,
+  "max_position_embeddings": 512,
+  "max_relative_positions": -1,
+  "model_type": "deberta-v2",
+  "norm_rel_ebd": "layer_norm",
+  "num_attention_heads": 16,
+  "num_hidden_layers": 24,
+  "pad_token_id": 0,
+  "pooler_dropout": 0,
+  "pooler_hidden_act": "gelu",
+  "pooler_hidden_size": 1024,
+  "pos_att_type": [
+    "p2c",
+    "c2p"
+  ],
+  "position_biased_input": false,
+  "position_buckets": 256,
+  "relative_attention": true,
+  "share_att_key": true,
+  "torch_dtype": "float16",
+  "transformers_version": "4.25.1",
+  "type_vocab_size": 0,
+  "vocab_size": 22012
+}

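For orientation, a minimal sketch that reads back a few of the fields above through the transformers config API; the hub ID is assumed to mirror the local copy of this file:

```python
from transformers import AutoConfig

# Load the architecture description shown above and print the headline parameters.
cfg = AutoConfig.from_pretrained("ku-nlp/deberta-v2-large-japanese-char-wwm")
print(cfg.model_type, cfg.hidden_size, cfg.num_hidden_layers, cfg.vocab_size)
# deberta-v2 1024 24 22012 (per the config.json above)
```
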
bert/deberta-v2-large-japanese-char-wwm/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf0dab8ad87bd7c22e85ec71e04f2240804fda6d33196157d6b5923af6ea1201
+size 1318456639

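The three lines above are a Git LFS pointer: the actual weight file is identified by its SHA-256 digest (oid) and its size in bytes. A minimal sketch of checking a locally downloaded copy against that digest; the local path is an assumption about where the Space stores the file:

```python
import hashlib

# Hash the downloaded weight file in 1 MiB chunks and compare with the pointer's oid.
path = "bert/deberta-v2-large-japanese-char-wwm/pytorch_model.bin"  # assumed local path
sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
print(sha256.hexdigest())  # should equal the oid sha256 recorded in the pointer above
```
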
bert/deberta-v2-large-japanese-char-wwm/special_tokens_map.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "cls_token": "[CLS]",
+  "mask_token": "[MASK]",
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "unk_token": "[UNK]"
+}

bert/deberta-v2-large-japanese-char-wwm/tokenizer_config.json
ADDED
@@ -0,0 +1,19 @@
+{
+  "cls_token": "[CLS]",
+  "do_lower_case": false,
+  "do_subword_tokenize": true,
+  "do_word_tokenize": true,
+  "jumanpp_kwargs": null,
+  "mask_token": "[MASK]",
+  "mecab_kwargs": null,
+  "model_max_length": 1000000000000000019884624838656,
+  "never_split": null,
+  "pad_token": "[PAD]",
+  "sep_token": "[SEP]",
+  "special_tokens_map_file": null,
+  "subword_tokenizer_type": "character",
+  "sudachi_kwargs": null,
+  "tokenizer_class": "BertJapaneseTokenizer",
+  "unk_token": "[UNK]",
+  "word_tokenizer_type": "basic"
+}

bert/deberta-v2-large-japanese-char-wwm/vocab.txt
ADDED
The diff for this file is too large to render.

config.py
CHANGED
@@ -120,11 +120,17 @@ class Train_ms_config:
         env: Dict[str, any],
         base: Dict[str, any],
         model: str,
+        num_workers: int,
+        spec_cache: bool,
+        keep_ckpts: int,
     ):
         self.env = env  # environment variables to load
         self.base = base  # base (pretrained) model settings
         self.model = model  # directory for saving trained models; relative to dataset_path, not the project root
         self.config_path = config_path  # path to the config file
+        self.num_workers = num_workers  # number of workers
+        self.spec_cache = spec_cache  # whether to enable the spectrogram cache
+        self.keep_ckpts = keep_ckpts  # number of checkpoints to keep
 
     @classmethod
     def from_dict(cls, dataset_path: str, data: Dict[str, any]):

config.yml
CHANGED
@@ -56,19 +56,27 @@ bert_gen:
   # Use multi-GPU inference
   use_multi_device: false
 
+# emo_gen settings
+# Note: a space is required after ":"
+emo_gen:
+  # Path to the training dataset's config file
+  config_path: "Data/TalkFlower_CNzh/config.json"
+  # Number of parallel processes
+  num_processes: 2
+  # Device to use: "cuda" for GPU inference, "cpu" for CPU inference
+  device: "cuda"
 
 # train: training configuration
 # Note: a space is required after ":"
 train_ms:
-  # Environment variables to load; for multi-GPU training, set RANK manually in the environment
-  # A value here is used only when no environment variable of the same name exists, i.e. manually set environment variables take precedence and override this file
   env:
     MASTER_ADDR: "localhost"
     MASTER_PORT: 10086
     WORLD_SIZE: 1
+    LOCAL_RANK: 0
     RANK: 0
     # Environment variables with any name may be added here
-    THE_ENV_VAR_YOU_NEED_TO_USE: "1234567"
+    # THE_ENV_VAR_YOU_NEED_TO_USE: "1234567"
   # Base (pretrained) model settings
   base:
     use_base_model: true
@@ -78,6 +86,12 @@ train_ms:
   model: "models"
   # Path to the config file
   config_path: "config.json"
+  # Number of workers used for training; going above the CPU core count is not recommended
+  num_workers: 16
+  # Turning this off saves close to 50% of disk space, but may slow training and raise CPU usage.
+  spec_cache: True
+  # Number of checkpoints to keep; weights beyond this count are deleted to save space.
+  keep_ckpts: 8
 
 
 # webui: WebUI configuration

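A minimal sketch of how the keys added above could be read; the real loading path in this repo goes through Train_ms_config.from_dict in config.py (whose body is not part of this diff), so the direct YAML access and the fallback defaults here are illustrative assumptions:

```python
import yaml  # PyYAML

# Load the project-level configuration and pull out the training section.
with open("config.yml", "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

train_ms = cfg["train_ms"]
num_workers = train_ms.get("num_workers", 16)  # DataLoader workers
spec_cache = train_ms.get("spec_cache", True)  # cache spectrograms on disk
keep_ckpts = train_ms.get("keep_ckpts", 8)     # checkpoints to keep
print(num_workers, spec_cache, keep_ckpts)
```
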
config_bk.yml
ADDED
@@ -0,0 +1,160 @@
+# Global configuration
+# To use several config files at the same time (for example, two GPUs each training a different dataset), specify the config file via an environment variable; the default is ./config.yml
+
+# A common path prefix is provided so that data is stored in one place rather than scattered around
+# Each dataset and its models live under this path; all paths below are relative to dataset_path
+# Leave this empty to make the paths relative to the project root instead
+dataset_path: "Data/TalkFlower_CNzh"
+
+# Model mirror; defaults to huggingface, set openi_token when using the openi mirror
+mirror: ""
+openi_token: ""  # openi token
+
+# resample: audio resampling configuration
+# Note: a space is required after ":"
+resample:
+  # Target sampling rate
+  sampling_rate: 44100
+  # Input directory; every .wav file under it will be resampled
+  # Relative to dataset_path
+  in_dir: "audios/raw"  # relative to the project root this is /datasetPath/in_dir
+  # Output directory for resampled audio
+  out_dir: "audios/wavs"
+
+
+# preprocess_text: dataset preprocessing configuration
+# Note: a space is required after ":"
+preprocess_text:
+  # Path to the raw transcription file, one line per utterance: {wav_path}|{speaker_name}|{language}|{text}
+  transcription_path: "filelists/TalkFlower_CNzh.list"
+  # Path for the cleaned text; optional, defaults to the directory of the raw file
+  cleaned_path: ""
+  # Training list path
+  train_path: "filelists/train.list"
+  # Validation list path
+  val_path: "filelists/val.list"
+  # Config file path
+  config_path: "Data/TalkFlower_CNzh/config.json"
+  # Number of validation items per speaker
+  val_per_spk: 5
+  # Maximum size of the validation set; the excess is truncated and moved to the training set
+  max_val_total: 12
+  # Whether to clean the data
+  clean: true
+
+
+# bert_gen settings
+# Note: a space is required after ":"
+bert_gen:
+  # Path to the training dataset's config file
+  config_path: "Data/TalkFlower_CNzh/config.json"
+  # Number of parallel processes
+  num_processes: 8
+  # Device to use: "cuda" for GPU inference, "cpu" for CPU inference
+  # This option also sets the default device for get_bert_feature
+  device: "cuda"
+  # Use multi-GPU inference
+  use_multi_device: false
+
+
+# train: training configuration
+# Note: a space is required after ":"
+train_ms:
+  # Environment variables to load; for multi-GPU training, set RANK manually in the environment
+  # A value here is used only when no environment variable of the same name exists, i.e. manually set environment variables take precedence and override this file
+  env:
+    MASTER_ADDR: "localhost"
+    MASTER_PORT: 10086
+    WORLD_SIZE: 1
+    RANK: 0
+    # Environment variables with any name may be added here
+    THE_ENV_VAR_YOU_NEED_TO_USE: "1234567"
+  # Base (pretrained) model settings
+  base:
+    use_base_model: true
+    repo_id: "Stardust_minus/Bert-VITS2"
+    model_image: "Bert-VITS2中日底模"  # model name shown on the openi website
+  # Directory for saving trained models; unlike older versions that used logs/model_name, everything now lives under Data/<your dataset>/models
+  model: "models"
+  # Config file path
+  config_path: "config.json"
+
+
+# webui: WebUI configuration
+# Note: a space is required after ":"
+webui:
+  # Inference device
+  device: "cpu"
+  # Model path
+  model: "../../models/G_48000.pth"
+  # Config file path
+  config_path: "config.json"
+  # Port number
+  port: 7860
+  # Whether to deploy publicly and expose the service to the public network
+  share: false
+  # Whether to enable debug mode
+  debug: false
+  # Language identification library, one of langid, fastlid
+  language_identification_library: "langid"
+
+
+# server: API server configuration
+# Note: a space is required after ":"
+# Note: all paths in this section are relative to the project root
+server:
+  # Port number
+  port: 5000
+  # Default device for models (this option is not actually implemented yet)
+  device: "cuda"
+  # Configuration of every model to load
+  # Note: every model must have valid model and config paths; empty paths cause load errors
+  models:
+    - # Model path
+      model: "models/G_48000.pth"
+      # Path to the model's config.json
+      config: "TalkFlower_CNzh/config.json"
+      # Device for this model; overrides the default if set
+      device: "cuda"
+      # Default language of the model
+      language: "ZH"
+      # Per-speaker default parameters
+      # Not every speaker needs to be listed; unlisted speakers use the defaults
+      # Can be left empty for now; per-speaker configuration is not implemented yet
+      speakers:
+        - speaker: "科比"
+          sdp_ratio: 0.2
+          noise_scale: 0.6
+          noise_scale_w: 0.8
+          length_scale: 1
+        - speaker: "五条悟"
+          sdp_ratio: 0.3
+          noise_scale: 0.7
+          noise_scale_w: 0.8
+          length_scale: 0.5
+        - speaker: "安倍晋三"
+          sdp_ratio: 0.2
+          noise_scale: 0.6
+          noise_scale_w: 0.8
+          length_scale: 1.2
+    - # Model path
+      model: ""
+      # Path to the model's config.json
+      config: ""
+      # Device for this model; overrides the default if set
+      device: "cpu"
+      # Default language of the model
+      language: "JP"
+      # Per-speaker default parameters
+      # Not every speaker needs to be listed; unlisted speakers use the defaults
+      speakers: [ ]  # may also be left empty
+
+
+# Baidu Translate open platform API configuration
+# API docs: https://api.fanyi.baidu.com/doc/21
+# Do not share your app id and key publicly on GitHub or similar sites
+translate:
+  # Your APPID
+  "app_key": ""
+  # Your secret key
+  "secret_key": ""

emo_gen.py
ADDED
@@ -0,0 +1,169 @@
+import torch
+import torch.nn as nn
+from torch.utils.data import Dataset
+from torch.utils.data import DataLoader
+from transformers import Wav2Vec2Processor
+from transformers.models.wav2vec2.modeling_wav2vec2 import (
+    Wav2Vec2Model,
+    Wav2Vec2PreTrainedModel,
+)
+import librosa
+import numpy as np
+import argparse
+from config import config
+import utils
+import os
+from tqdm import tqdm
+
+
+class RegressionHead(nn.Module):
+    r"""Classification head."""
+
+    def __init__(self, config):
+        super().__init__()
+
+        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
+        self.dropout = nn.Dropout(config.final_dropout)
+        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)
+
+    def forward(self, features, **kwargs):
+        x = features
+        x = self.dropout(x)
+        x = self.dense(x)
+        x = torch.tanh(x)
+        x = self.dropout(x)
+        x = self.out_proj(x)
+
+        return x
+
+
+class EmotionModel(Wav2Vec2PreTrainedModel):
+    r"""Speech emotion classifier."""
+
+    def __init__(self, config):
+        super().__init__(config)
+
+        self.config = config
+        self.wav2vec2 = Wav2Vec2Model(config)
+        self.classifier = RegressionHead(config)
+        self.init_weights()
+
+    def forward(
+        self,
+        input_values,
+    ):
+        outputs = self.wav2vec2(input_values)
+        hidden_states = outputs[0]
+        hidden_states = torch.mean(hidden_states, dim=1)
+        logits = self.classifier(hidden_states)
+
+        return hidden_states, logits
+
+
+class AudioDataset(Dataset):
+    def __init__(self, list_of_wav_files, sr, processor):
+        self.list_of_wav_files = list_of_wav_files
+        self.processor = processor
+        self.sr = sr
+
+    def __len__(self):
+        return len(self.list_of_wav_files)
+
+    def __getitem__(self, idx):
+        wav_file = self.list_of_wav_files[idx]
+        audio_data, _ = librosa.load(wav_file, sr=self.sr)
+        processed_data = self.processor(audio_data, sampling_rate=self.sr)[
+            "input_values"
+        ][0]
+        return torch.from_numpy(processed_data)
+
+
+model_name = "./emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim"
+processor = Wav2Vec2Processor.from_pretrained(model_name)
+model = EmotionModel.from_pretrained(model_name)
+
+
+def process_func(
+    x: np.ndarray,
+    sampling_rate: int,
+    model: EmotionModel,
+    processor: Wav2Vec2Processor,
+    device: str,
+    embeddings: bool = False,
+) -> np.ndarray:
+    r"""Predict emotions or extract embeddings from raw audio signal."""
+    model = model.to(device)
+    y = processor(x, sampling_rate=sampling_rate)
+    y = y["input_values"][0]
+    y = torch.from_numpy(y).unsqueeze(0).to(device)
+
+    # run through model
+    with torch.no_grad():
+        y = model(y)[0 if embeddings else 1]
+
+    # convert to numpy
+    y = y.detach().cpu().numpy()
+
+    return y
+
+
+def get_emo(path):
+    wav, sr = librosa.load(path, 16000)
+    device = config.bert_gen_config.device
+    return process_func(
+        np.expand_dims(wav, 0).astype(np.float64),
+        sr,
+        model,
+        processor,
+        device,
+        embeddings=True,
+    ).squeeze(0)
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        "-c", "--config", type=str, default=config.bert_gen_config.config_path
+    )
+    parser.add_argument(
+        "--num_processes", type=int, default=config.bert_gen_config.num_processes
+    )
+    args, _ = parser.parse_known_args()
+    config_path = args.config
+    hps = utils.get_hparams_from_file(config_path)
+
+    device = config.bert_gen_config.device
+
+    model_name = "./emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim"
+    processor = (
+        Wav2Vec2Processor.from_pretrained(model_name)
+        if processor is None
+        else processor
+    )
+    model = (
+        EmotionModel.from_pretrained(model_name).to(device)
+        if model is None
+        else model.to(device)
+    )
+
+    lines = []
+    with open(hps.data.training_files, encoding="utf-8") as f:
+        lines.extend(f.readlines())
+
+    with open(hps.data.validation_files, encoding="utf-8") as f:
+        lines.extend(f.readlines())
+
+    wavnames = [line.split("|")[0] for line in lines]
+    dataset = AudioDataset(wavnames, 16000, processor)
+    data_loader = DataLoader(dataset, batch_size=1, shuffle=False, num_workers=16)
+
+    with torch.no_grad():
+        for i, data in tqdm(enumerate(data_loader), total=len(data_loader)):
+            wavname = wavnames[i]
+            emo_path = wavname.replace(".wav", ".emo.npy")
+            if os.path.exists(emo_path):
+                continue
+            emb = model(data.to(device))[0].detach().cpu().numpy()
+            np.save(emo_path, emb)
+
+    print("Emo vec 生成完毕!")

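A minimal usage sketch for the module above, assuming it is imported from the project root (importing emo_gen loads the wav2vec2 checkpoint from ./emotional/... at import time) and that the wav path shown is a placeholder under the dataset's audios/wavs directory:

```python
import numpy as np
from emo_gen import get_emo

wav_path = "Data/TalkFlower_CNzh/audios/wavs/example.wav"  # placeholder path

# Pooled wav2vec2 hidden states for one utterance; expected shape (1024,) for this checkpoint.
emo = get_emo(wav_path)
np.save(wav_path.replace(".wav", ".emo.npy"), emo)  # same sidecar naming as the batch loop above
```
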
emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/.gitattributes
ADDED
@@ -0,0 +1,28 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bin.* filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zstandard filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text

emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/LICENSE
ADDED
@@ -0,0 +1,437 @@
+Attribution-NonCommercial-ShareAlike 4.0 International
The remaining 436 added lines are the standard text of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License (available at creativecommons.org); the diff for this file is too large to render here.

emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/README.md
ADDED
@@ -0,0 +1,127 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
---
language: en
datasets:
- msp-podcast
inference: true
tags:
- speech
- audio
- wav2vec2
- audio-classification
- emotion-recognition
license: cc-by-nc-sa-4.0
pipeline_tag: audio-classification
---

# Model for Dimensional Speech Emotion Recognition based on Wav2vec 2.0

The model expects a raw audio signal as input and outputs predictions for arousal, dominance and valence in a range of approximately 0...1. In addition, it also provides the pooled states of the last transformer layer. The model was created by fine-tuning [Wav2Vec2-Large-Robust](https://huggingface.co/facebook/wav2vec2-large-robust) on [MSP-Podcast](https://ecs.utdallas.edu/research/researchlabs/msp-lab/MSP-Podcast.html) (v1.7). The model was pruned from 24 to 12 transformer layers before fine-tuning. An [ONNX](https://onnx.ai/) export of the model is available from [doi:10.5281/zenodo.6221127](https://zenodo.org/record/6221127). Further details are given in the associated [paper](https://arxiv.org/abs/2203.07378) and [tutorial](https://github.com/audeering/w2v2-how-to).

# Usage

```python
import numpy as np
import torch
import torch.nn as nn
from transformers import Wav2Vec2Processor
from transformers.models.wav2vec2.modeling_wav2vec2 import (
    Wav2Vec2Model,
    Wav2Vec2PreTrainedModel,
)


class RegressionHead(nn.Module):
    r"""Classification head."""

    def __init__(self, config):

        super().__init__()

        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
        self.dropout = nn.Dropout(config.final_dropout)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, features, **kwargs):

        x = features
        x = self.dropout(x)
        x = self.dense(x)
        x = torch.tanh(x)
        x = self.dropout(x)
        x = self.out_proj(x)

        return x


class EmotionModel(Wav2Vec2PreTrainedModel):
    r"""Speech emotion classifier."""

    def __init__(self, config):

        super().__init__(config)

        self.config = config
        self.wav2vec2 = Wav2Vec2Model(config)
        self.classifier = RegressionHead(config)
        self.init_weights()

    def forward(
        self,
        input_values,
    ):

        outputs = self.wav2vec2(input_values)
        hidden_states = outputs[0]
        hidden_states = torch.mean(hidden_states, dim=1)
        logits = self.classifier(hidden_states)

        return hidden_states, logits


# load model from hub
device = 'cpu'
model_name = 'audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim'
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = EmotionModel.from_pretrained(model_name)

# dummy signal
sampling_rate = 16000
signal = np.zeros((1, sampling_rate), dtype=np.float32)


def process_func(
    x: np.ndarray,
    sampling_rate: int,
    embeddings: bool = False,
) -> np.ndarray:
    r"""Predict emotions or extract embeddings from raw audio signal."""

    # run through processor to normalize signal
    # always returns a batch, so we just get the first entry
    # then we put it on the device
    y = processor(x, sampling_rate=sampling_rate)
    y = y['input_values'][0]
    y = y.reshape(1, -1)
    y = torch.from_numpy(y).to(device)

    # run through model
    with torch.no_grad():
        y = model(y)[0 if embeddings else 1]

    # convert to numpy
    y = y.detach().cpu().numpy()

    return y


print(process_func(signal, sampling_rate))
#  Arousal    dominance  valence
# [[0.5460754  0.6062266  0.40431657]]

print(process_func(signal, sampling_rate, embeddings=True))
# Pooled hidden states of last transformer layer
# [[-0.00752167  0.0065819  -0.00746342 ...  0.00663632  0.00848748
#    0.00599211]]
```
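A quick illustration (not part of the upstream model card) of feeding a real recording through the `process_func` defined above; it assumes `librosa` is installed and `some_speech.wav` is a placeholder path, and it resamples to the 16 kHz rate fixed in `preprocessor_config.json` below:

```python
import librosa
import numpy as np

wav_path = "some_speech.wav"  # hypothetical input file

# librosa resamples to 16 kHz and returns a float32 mono waveform.
signal, sr = librosa.load(wav_path, sr=16000, mono=True)
signal = signal[np.newaxis, :]  # (1, num_samples), same layout as the dummy signal

ads = process_func(signal, sr)                   # [[arousal, dominance, valence]]
emb = process_func(signal, sr, embeddings=True)  # pooled 1024-dim hidden state
```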
emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/config.json
ADDED
@@ -0,0 +1,122 @@
{
  "_name_or_path": "torch",
  "activation_dropout": 0.1,
  "adapter_kernel_size": 3,
  "adapter_stride": 2,
  "add_adapter": false,
  "apply_spec_augment": true,
  "architectures": [
    "Wav2Vec2ForSpeechClassification"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "classifier_proj_size": 256,
  "codevector_dim": 768,
  "contrastive_logits_temperature": 0.1,
  "conv_bias": true,
  "conv_dim": [
    512,
    512,
    512,
    512,
    512,
    512,
    512
  ],
  "conv_kernel": [
    10,
    3,
    3,
    3,
    3,
    2,
    2
  ],
  "conv_stride": [
    5,
    2,
    2,
    2,
    2,
    2,
    2
  ],
  "ctc_loss_reduction": "sum",
  "ctc_zero_infinity": false,
  "diversity_loss_weight": 0.1,
  "do_stable_layer_norm": true,
  "eos_token_id": 2,
  "feat_extract_activation": "gelu",
  "feat_extract_dropout": 0.0,
  "feat_extract_norm": "layer",
  "feat_proj_dropout": 0.1,
  "feat_quantizer_dropout": 0.0,
  "final_dropout": 0.1,
  "finetuning_task": "wav2vec2_reg",
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout": 0.1,
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "id2label": {
    "0": "arousal",
    "1": "dominance",
    "2": "valence"
  },
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "label2id": {
    "arousal": 0,
    "dominance": 1,
    "valence": 2
  },
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.1,
  "mask_feature_length": 10,
  "mask_feature_min_masks": 0,
  "mask_feature_prob": 0.0,
  "mask_time_length": 10,
  "mask_time_min_masks": 2,
  "mask_time_prob": 0.05,
  "model_type": "wav2vec2",
  "num_adapter_layers": 3,
  "num_attention_heads": 16,
  "num_codevector_groups": 2,
  "num_codevectors_per_group": 320,
  "num_conv_pos_embedding_groups": 16,
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 12,
  "num_negatives": 100,
  "output_hidden_size": 1024,
  "pad_token_id": 0,
  "pooling_mode": "mean",
  "problem_type": "regression",
  "proj_codevector_dim": 768,
  "tdnn_dilation": [
    1,
    2,
    3,
    1,
    1
  ],
  "tdnn_dim": [
    512,
    512,
    512,
    512,
    1500
  ],
  "tdnn_kernel": [
    5,
    3,
    3,
    1,
    1
  ],
  "torch_dtype": "float32",
  "transformers_version": "4.17.0.dev0",
  "use_weighted_layer_sum": false,
  "vocab_size": null,
  "xvector_output_dim": 512
}
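As a side note (not part of the uploaded file), the `id2label` block above is what fixes the meaning of the three regression outputs; a minimal check, assuming the files sit under the local path used in this Space:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim"  # local folder in this repo
)
print(config.id2label)           # {0: 'arousal', 1: 'dominance', 2: 'valence'}
print(config.num_hidden_layers)  # 12 – the pruned depth mentioned in the model card
```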
emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/preprocessor_config.json
ADDED
@@ -0,0 +1,9 @@
{
  "do_normalize": true,
  "feature_extractor_type": "Wav2Vec2FeatureExtractor",
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": true,
  "sampling_rate": 16000
}
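For reference (again not part of the upload), these preprocessing settings are what the Wav2Vec2 feature extractor picks up when instantiated from this folder; a small sanity-check sketch, assuming the same local path:

```python
from transformers import Wav2Vec2FeatureExtractor

fe = Wav2Vec2FeatureExtractor.from_pretrained(
    "emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim"
)
print(fe.sampling_rate)  # 16000 – inputs must be resampled to this rate
print(fe.do_normalize)   # True – waveform is zero-mean / unit-variance normalized
```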
emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:176d9d1ce29a8bddbab44068b9c1c194c51624c7f1812905e01355da58b18816
size 661436013
emotional/wav2vec2-large-robust-12-ft-emotion-msp-dim/vocab.json
ADDED
@@ -0,0 +1 @@
{}
infer.py
CHANGED
@@ -6,16 +6,19 @@
|
|
6 |
特殊版本说明:
|
7 |
1.1.1-fix: 1.1.1版本训练的模型,但是在推理时使用dev的日语修复
|
8 |
1.1.1-dev: dev开发
|
9 |
-
2.
|
10 |
"""
|
11 |
import torch
|
12 |
import commons
|
13 |
from text import cleaned_text_to_sequence, get_bert
|
|
|
14 |
from text.cleaner import clean_text
|
15 |
import utils
|
16 |
|
17 |
from models import SynthesizerTrn
|
18 |
from text.symbols import symbols
|
|
|
|
|
19 |
from oldVersion.V111.models import SynthesizerTrn as V111SynthesizerTrn
|
20 |
from oldVersion.V111.text import symbols as V111symbols
|
21 |
from oldVersion.V110.models import SynthesizerTrn as V110SynthesizerTrn
|
@@ -23,13 +26,16 @@ from oldVersion.V110.text import symbols as V110symbols
|
|
23 |
from oldVersion.V101.models import SynthesizerTrn as V101SynthesizerTrn
|
24 |
from oldVersion.V101.text import symbols as V101symbols
|
25 |
|
26 |
-
from oldVersion import V111, V110, V101
|
27 |
|
28 |
# 当前版本信息
|
29 |
latest_version = "2.0"
|
30 |
|
31 |
# 版本兼容
|
32 |
SynthesizerTrnMap = {
|
|
|
|
|
|
|
33 |
"1.1.1-fix": V111SynthesizerTrn,
|
34 |
"1.1.1": V111SynthesizerTrn,
|
35 |
"1.1": V110SynthesizerTrn,
|
@@ -40,6 +46,9 @@ SynthesizerTrnMap = {
|
|
40 |
}
|
41 |
|
42 |
symbolsMap = {
|
|
|
|
|
|
|
43 |
"1.1.1-fix": V111symbols,
|
44 |
"1.1.1": V111symbols,
|
45 |
"1.1": V110symbols,
|
@@ -73,7 +82,7 @@ def get_net_g(model_path: str, version: str, device: str, hps):
|
|
73 |
return net_g
|
74 |
|
75 |
|
76 |
-
def get_text(text, language_str, hps, device):
|
77 |
# 在此处实现当前版本的get_text
|
78 |
norm_text, phone, tone, word2ph = clean_text(text, language_str)
|
79 |
phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
|
@@ -85,25 +94,31 @@ def get_text(text, language_str, hps, device):
|
|
85 |
for i in range(len(word2ph)):
|
86 |
word2ph[i] = word2ph[i] * 2
|
87 |
word2ph[0] += 1
|
88 |
-
|
89 |
del word2ph
|
90 |
-
assert
|
91 |
|
92 |
if language_str == "ZH":
|
93 |
-
bert =
|
94 |
ja_bert = torch.zeros(1024, len(phone))
|
95 |
en_bert = torch.zeros(1024, len(phone))
|
96 |
elif language_str == "JP":
|
97 |
bert = torch.zeros(1024, len(phone))
|
98 |
-
ja_bert =
|
99 |
en_bert = torch.zeros(1024, len(phone))
|
100 |
elif language_str == "EN":
|
101 |
bert = torch.zeros(1024, len(phone))
|
102 |
ja_bert = torch.zeros(1024, len(phone))
|
103 |
-
en_bert =
|
104 |
else:
|
105 |
raise ValueError("language_str should be ZH, JP or EN")
|
106 |
|
|
|
|
|
|
|
|
|
|
|
|
|
107 |
assert bert.shape[-1] == len(
|
108 |
phone
|
109 |
), f"Bert seq len {bert.shape[-1]} != {len(phone)}"
|
@@ -111,7 +126,7 @@ def get_text(text, language_str, hps, device):
|
|
111 |
phone = torch.LongTensor(phone)
|
112 |
tone = torch.LongTensor(tone)
|
113 |
language = torch.LongTensor(language)
|
114 |
-
return bert, ja_bert, en_bert, phone, tone, language
|
115 |
|
116 |
|
117 |
def infer(
|
@@ -125,9 +140,16 @@ def infer(
|
|
125 |
hps,
|
126 |
net_g,
|
127 |
device,
|
|
|
|
|
|
|
|
|
128 |
):
|
129 |
-
#
|
130 |
inferMap_V2 = {
|
|
|
|
|
|
|
131 |
"1.1.1-fix": V111.infer_fix,
|
132 |
"1.1.1": V111.infer,
|
133 |
"1.1": V110.infer,
|
@@ -169,9 +191,122 @@ def infer(
|
|
169 |
device,
|
170 |
)
|
171 |
# 在此处实现当前版本的推理
|
172 |
-
bert, ja_bert, en_bert, phones, tones, lang_ids = get_text(
|
173 |
-
text, language, hps, device
|
174 |
)
|
175 |
with torch.no_grad():
|
176 |
x_tst = phones.to(device).unsqueeze(0)
|
177 |
tones = tones.to(device).unsqueeze(0)
|
@@ -179,6 +314,7 @@ def infer(
|
|
179 |
bert = bert.to(device).unsqueeze(0)
|
180 |
ja_bert = ja_bert.to(device).unsqueeze(0)
|
181 |
en_bert = en_bert.to(device).unsqueeze(0)
|
|
|
182 |
x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device)
|
183 |
del phones
|
184 |
speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device)
|
@@ -192,6 +328,7 @@ def infer(
|
|
192 |
bert,
|
193 |
ja_bert,
|
194 |
en_bert,
|
|
|
195 |
sdp_ratio=sdp_ratio,
|
196 |
noise_scale=noise_scale,
|
197 |
noise_scale_w=noise_scale_w,
|
@@ -201,5 +338,7 @@ def infer(
|
|
201 |
.float()
|
202 |
.numpy()
|
203 |
)
|
204 |
-
del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers, ja_bert, en_bert
|
|
|
|
|
205 |
return audio
|
|
|
6 |
特殊版本说明:
|
7 |
1.1.1-fix: 1.1.1版本训练的模型,但是在推理时使用dev的日语修复
|
8 |
1.1.1-dev: dev开发
|
9 |
+
2.1:当前版本
|
10 |
"""
|
11 |
import torch
|
12 |
import commons
|
13 |
from text import cleaned_text_to_sequence, get_bert
|
14 |
+
from emo_gen import get_emo
|
15 |
from text.cleaner import clean_text
|
16 |
import utils
|
17 |
|
18 |
from models import SynthesizerTrn
|
19 |
from text.symbols import symbols
|
20 |
+
from oldVersion.V200.models import SynthesizerTrn as V200SynthesizerTrn
|
21 |
+
from oldVersion.V200.text import symbols as V200symbols
|
22 |
from oldVersion.V111.models import SynthesizerTrn as V111SynthesizerTrn
|
23 |
from oldVersion.V111.text import symbols as V111symbols
|
24 |
from oldVersion.V110.models import SynthesizerTrn as V110SynthesizerTrn
|
|
|
26 |
from oldVersion.V101.models import SynthesizerTrn as V101SynthesizerTrn
|
27 |
from oldVersion.V101.text import symbols as V101symbols
|
28 |
|
29 |
+
from oldVersion import V111, V110, V101, V200
|
30 |
|
31 |
# 当前版本信息
|
32 |
latest_version = "2.0"
|
33 |
|
34 |
# 版本兼容
|
35 |
SynthesizerTrnMap = {
|
36 |
+
"2.0.2-fix": V200SynthesizerTrn,
|
37 |
+
"2.0.1": V200SynthesizerTrn,
|
38 |
+
"2.0": V200SynthesizerTrn,
|
39 |
"1.1.1-fix": V111SynthesizerTrn,
|
40 |
"1.1.1": V111SynthesizerTrn,
|
41 |
"1.1": V110SynthesizerTrn,
|
|
|
46 |
}
|
47 |
|
48 |
symbolsMap = {
|
49 |
+
"2.0.2-fix": V200symbols,
|
50 |
+
"2.0.1": V200symbols,
|
51 |
+
"2.0": V200symbols,
|
52 |
"1.1.1-fix": V111symbols,
|
53 |
"1.1.1": V111symbols,
|
54 |
"1.1": V110symbols,
|
|
|
82 |
return net_g
|
83 |
|
84 |
|
85 |
+
def get_text(text, reference_audio, emotion, language_str, hps, device):
|
86 |
# 在此处实现当前版本的get_text
|
87 |
norm_text, phone, tone, word2ph = clean_text(text, language_str)
|
88 |
phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str)
|
|
|
94 |
for i in range(len(word2ph)):
|
95 |
word2ph[i] = word2ph[i] * 2
|
96 |
word2ph[0] += 1
|
97 |
+
bert_ori = get_bert(norm_text, word2ph, language_str, device)
|
98 |
del word2ph
|
99 |
+
assert bert_ori.shape[-1] == len(phone), phone
|
100 |
|
101 |
if language_str == "ZH":
|
102 |
+
bert = bert_ori
|
103 |
ja_bert = torch.zeros(1024, len(phone))
|
104 |
en_bert = torch.zeros(1024, len(phone))
|
105 |
elif language_str == "JP":
|
106 |
bert = torch.zeros(1024, len(phone))
|
107 |
+
ja_bert = bert_ori
|
108 |
en_bert = torch.zeros(1024, len(phone))
|
109 |
elif language_str == "EN":
|
110 |
bert = torch.zeros(1024, len(phone))
|
111 |
ja_bert = torch.zeros(1024, len(phone))
|
112 |
+
en_bert = bert_ori
|
113 |
else:
|
114 |
raise ValueError("language_str should be ZH, JP or EN")
|
115 |
|
116 |
+
emo = (
|
117 |
+
torch.from_numpy(get_emo(reference_audio))
|
118 |
+
if reference_audio
|
119 |
+
else torch.Tensor([emotion])
|
120 |
+
)
|
121 |
+
|
122 |
assert bert.shape[-1] == len(
|
123 |
phone
|
124 |
), f"Bert seq len {bert.shape[-1]} != {len(phone)}"
|
|
|
126 |
phone = torch.LongTensor(phone)
|
127 |
tone = torch.LongTensor(tone)
|
128 |
language = torch.LongTensor(language)
|
129 |
+
return bert, ja_bert, en_bert, emo, phone, tone, language
|
130 |
|
131 |
|
132 |
def infer(
|
|
|
140 |
hps,
|
141 |
net_g,
|
142 |
device,
|
143 |
+
reference_audio=None,
|
144 |
+
emotion=None,
|
145 |
+
skip_start=False,
|
146 |
+
skip_end=False,
|
147 |
):
|
148 |
+
# 支持中日英三语版本
|
149 |
inferMap_V2 = {
|
150 |
+
"2.0.2-fix": V200.infer,
|
151 |
+
"2.0.1": V200.infer,
|
152 |
+
"2.0": V200.infer,
|
153 |
"1.1.1-fix": V111.infer_fix,
|
154 |
"1.1.1": V111.infer,
|
155 |
"1.1": V110.infer,
|
|
|
191 |
device,
|
192 |
)
|
193 |
# 在此处实现当前版本的推理
|
194 |
+
bert, ja_bert, en_bert, emo, phones, tones, lang_ids = get_text(
|
195 |
+
text, reference_audio, emotion, language, hps, device
|
196 |
)
|
197 |
+
if skip_start:
|
198 |
+
phones = phones[1:]
|
199 |
+
tones = tones[1:]
|
200 |
+
lang_ids = lang_ids[1:]
|
201 |
+
bert = bert[:, 1:]
|
202 |
+
ja_bert = ja_bert[:, 1:]
|
203 |
+
en_bert = en_bert[:, 1:]
|
204 |
+
if skip_end:
|
205 |
+
phones = phones[:-1]
|
206 |
+
tones = tones[:-1]
|
207 |
+
lang_ids = lang_ids[:-1]
|
208 |
+
bert = bert[:, :-1]
|
209 |
+
ja_bert = ja_bert[:, :-1]
|
210 |
+
en_bert = en_bert[:, :-1]
|
211 |
+
with torch.no_grad():
|
212 |
+
x_tst = phones.to(device).unsqueeze(0)
|
213 |
+
tones = tones.to(device).unsqueeze(0)
|
214 |
+
lang_ids = lang_ids.to(device).unsqueeze(0)
|
215 |
+
bert = bert.to(device).unsqueeze(0)
|
216 |
+
ja_bert = ja_bert.to(device).unsqueeze(0)
|
217 |
+
en_bert = en_bert.to(device).unsqueeze(0)
|
218 |
+
x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device)
|
219 |
+
emo = emo.to(device).unsqueeze(0)
|
220 |
+
del phones
|
221 |
+
speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device)
|
222 |
+
audio = (
|
223 |
+
net_g.infer(
|
224 |
+
x_tst,
|
225 |
+
x_tst_lengths,
|
226 |
+
speakers,
|
227 |
+
tones,
|
228 |
+
lang_ids,
|
229 |
+
bert,
|
230 |
+
ja_bert,
|
231 |
+
en_bert,
|
232 |
+
emo,
|
233 |
+
sdp_ratio=sdp_ratio,
|
234 |
+
noise_scale=noise_scale,
|
235 |
+
noise_scale_w=noise_scale_w,
|
236 |
+
length_scale=length_scale,
|
237 |
+
)[0][0, 0]
|
238 |
+
.data.cpu()
|
239 |
+
.float()
|
240 |
+
.numpy()
|
241 |
+
)
|
242 |
+
del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers, ja_bert, en_bert, emo
|
243 |
+
if torch.cuda.is_available():
|
244 |
+
torch.cuda.empty_cache()
|
245 |
+
return audio
|
246 |
+
|
247 |
+
|
248 |
+
def infer_multilang(
|
249 |
+
text,
|
250 |
+
sdp_ratio,
|
251 |
+
noise_scale,
|
252 |
+
noise_scale_w,
|
253 |
+
length_scale,
|
254 |
+
sid,
|
255 |
+
language,
|
256 |
+
hps,
|
257 |
+
net_g,
|
258 |
+
device,
|
259 |
+
reference_audio=None,
|
260 |
+
emotion=None,
|
261 |
+
skip_start=False,
|
262 |
+
skip_end=False,
|
263 |
+
):
|
264 |
+
bert, ja_bert, en_bert, emo, phones, tones, lang_ids = [], [], [], [], [], [], []
|
265 |
+
# bert, ja_bert, en_bert, phones, tones, lang_ids = get_text(
|
266 |
+
# text, language, hps, device
|
267 |
+
# )
|
268 |
+
for idx, (txt, lang) in enumerate(zip(text, language)):
|
269 |
+
skip_start = (idx != 0) or (skip_start and idx == 0)
|
270 |
+
skip_end = (idx != len(text) - 1) or (skip_end and idx == len(text) - 1)
|
271 |
+
(
|
272 |
+
temp_bert,
|
273 |
+
temp_ja_bert,
|
274 |
+
temp_en_bert,
|
275 |
+
temp_emo,
|
276 |
+
temp_phones,
|
277 |
+
temp_tones,
|
278 |
+
temp_lang_ids,
|
279 |
+
) = get_text(txt, ref, emotion, language, hps, device)
|
280 |
+
if skip_start:
|
281 |
+
temp_bert = temp_bert[:, 1:]
|
282 |
+
temp_ja_bert = temp_ja_bert[:, 1:]
|
283 |
+
temp_en_bert = temp_en_bert[:, 1:]
|
284 |
+
temp_emo = temp_emo[:, 1:]
|
285 |
+
temp_phones = temp_phones[1:]
|
286 |
+
temp_tones = temp_tones[1:]
|
287 |
+
temp_lang_ids = temp_lang_ids[1:]
|
288 |
+
if skip_end:
|
289 |
+
temp_bert = temp_bert[:, :-1]
|
290 |
+
temp_ja_bert = temp_ja_bert[:, :-1]
|
291 |
+
temp_en_bert = temp_en_bert[:, :-1]
|
292 |
+
temp_emo = temp_emo[:, :-1]
|
293 |
+
temp_phones = temp_phones[:-1]
|
294 |
+
temp_tones = temp_tones[:-1]
|
295 |
+
temp_lang_ids = temp_lang_ids[:-1]
|
296 |
+
bert.append(temp_bert)
|
297 |
+
ja_bert.append(temp_ja_bert)
|
298 |
+
en_bert.append(temp_en_bert)
|
299 |
+
emo.append(temp_emo)
|
300 |
+
phones.append(temp_phones)
|
301 |
+
tones.append(temp_tones)
|
302 |
+
lang_ids.append(temp_lang_ids)
|
303 |
+
bert = torch.concatenate(bert, dim=1)
|
304 |
+
ja_bert = torch.concatenate(ja_bert, dim=1)
|
305 |
+
en_bert = torch.concatenate(en_bert, dim=1)
|
306 |
+
emo = torch.concatenate(emo, dim=1)
|
307 |
+
phones = torch.concatenate(phones, dim=0)
|
308 |
+
tones = torch.concatenate(tones, dim=0)
|
309 |
+
lang_ids = torch.concatenate(lang_ids, dim=0)
|
310 |
with torch.no_grad():
|
311 |
x_tst = phones.to(device).unsqueeze(0)
|
312 |
tones = tones.to(device).unsqueeze(0)
|
|
|
314 |
bert = bert.to(device).unsqueeze(0)
|
315 |
ja_bert = ja_bert.to(device).unsqueeze(0)
|
316 |
en_bert = en_bert.to(device).unsqueeze(0)
|
317 |
+
emo = emo.to(device).unsqueeze(0)
|
318 |
x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device)
|
319 |
del phones
|
320 |
speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device)
|
|
|
328 |
bert,
|
329 |
ja_bert,
|
330 |
en_bert,
|
331 |
+
emo,
|
332 |
sdp_ratio=sdp_ratio,
|
333 |
noise_scale=noise_scale,
|
334 |
noise_scale_w=noise_scale_w,
|
|
|
338 |
.float()
|
339 |
.numpy()
|
340 |
)
|
341 |
+
del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers, ja_bert, en_bert, emo
|
342 |
+
if torch.cuda.is_available():
|
343 |
+
torch.cuda.empty_cache()
|
344 |
return audio
|
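To make the new interface above concrete: `infer` now accepts either a `reference_audio` (routed through `emo_gen.get_emo` to a wav2vec2 emotion embedding) or a scalar `emotion` codebook index, plus `skip_start`/`skip_end` flags that trim segment boundaries. A hedged caller sketch; the config/checkpoint paths and the speaker name are placeholders, and loading follows the `get_net_g` helper defined in this file:

```python
import utils
from infer import get_net_g, infer, latest_version

# Hypothetical paths – substitute the real config and checkpoint of your model.
hps = utils.get_hparams_from_file("Data/mymodel/config.json")
net_g = get_net_g("Data/mymodel/models/G_10000.pth", latest_version, "cuda", hps)

audio = infer(
    "こんにちは、テストです。",
    sdp_ratio=0.2,
    noise_scale=0.6,
    noise_scale_w=0.8,
    length_scale=1.0,
    sid="MySpeaker",      # must be a key of hps.data.spk2id
    language="JP",
    hps=hps,
    net_g=net_g,
    device="cuda",
    emotion=3,            # integer id into the 10-entry emotion codebook
    # reference_audio="reference.wav",  # alternative: derive the emotion from audio
)
```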
models.py
CHANGED
@@ -10,6 +10,7 @@ import monotonic_align
|
|
10 |
|
11 |
from torch.nn import Conv1d, ConvTranspose1d, Conv2d
|
12 |
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
|
|
|
13 |
|
14 |
from commons import init_weights, get_padding
|
15 |
from text import symbols, num_tones, num_languages
|
@@ -321,6 +322,7 @@ class TextEncoder(nn.Module):
|
|
321 |
n_layers,
|
322 |
kernel_size,
|
323 |
p_dropout,
|
|
|
324 |
gin_channels=0,
|
325 |
):
|
326 |
super().__init__()
|
@@ -342,6 +344,18 @@ class TextEncoder(nn.Module):
|
|
342 |
self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)
|
343 |
self.ja_bert_proj = nn.Conv1d(1024, hidden_channels, 1)
|
344 |
self.en_bert_proj = nn.Conv1d(1024, hidden_channels, 1)
|
345 |
|
346 |
self.encoder = attentions.Encoder(
|
347 |
hidden_channels,
|
@@ -354,10 +368,33 @@ class TextEncoder(nn.Module):
|
|
354 |
)
|
355 |
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
|
356 |
|
357 |
-
def forward(
|
|
|
|
|
|
|
358 |
bert_emb = self.bert_proj(bert).transpose(1, 2)
|
359 |
ja_bert_emb = self.ja_bert_proj(ja_bert).transpose(1, 2)
|
360 |
en_bert_emb = self.en_bert_proj(en_bert).transpose(1, 2)
|
361 |
x = (
|
362 |
self.emb(x)
|
363 |
+ self.tone_emb(tone)
|
@@ -365,6 +402,7 @@ class TextEncoder(nn.Module):
|
|
365 |
+ bert_emb
|
366 |
+ ja_bert_emb
|
367 |
+ en_bert_emb
|
|
|
368 |
) * math.sqrt(
|
369 |
self.hidden_channels
|
370 |
) # [b, t, h]
|
@@ -377,7 +415,7 @@ class TextEncoder(nn.Module):
|
|
377 |
stats = self.proj(x) * x_mask
|
378 |
|
379 |
m, logs = torch.split(stats, self.out_channels, dim=1)
|
380 |
-
return x, m, logs, x_mask
|
381 |
|
382 |
|
383 |
class ResidualCouplingBlock(nn.Module):
|
@@ -810,6 +848,7 @@ class SynthesizerTrn(nn.Module):
|
|
810 |
n_layers,
|
811 |
kernel_size,
|
812 |
p_dropout,
|
|
|
813 |
gin_channels=self.enc_gin_channels,
|
814 |
)
|
815 |
self.dec = Generator(
|
@@ -877,13 +916,14 @@ class SynthesizerTrn(nn.Module):
|
|
877 |
bert,
|
878 |
ja_bert,
|
879 |
en_bert,
|
|
|
880 |
):
|
881 |
if self.n_speakers > 0:
|
882 |
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
883 |
else:
|
884 |
g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1)
|
885 |
-
x, m_p, logs_p, x_mask = self.enc_p(
|
886 |
-
x, x_lengths, tone, language, bert, ja_bert, en_bert, g=g
|
887 |
)
|
888 |
z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
|
889 |
z_p = self.flow(z, y_mask, g=g)
|
@@ -949,6 +989,7 @@ class SynthesizerTrn(nn.Module):
|
|
949 |
y_mask,
|
950 |
(z, z_p, m_p, logs_p, m_q, logs_q),
|
951 |
(x, logw, logw_),
|
|
|
952 |
)
|
953 |
|
954 |
def infer(
|
@@ -961,6 +1002,7 @@ class SynthesizerTrn(nn.Module):
|
|
961 |
bert,
|
962 |
ja_bert,
|
963 |
en_bert,
|
|
|
964 |
noise_scale=0.667,
|
965 |
length_scale=1,
|
966 |
noise_scale_w=0.8,
|
@@ -974,8 +1016,8 @@ class SynthesizerTrn(nn.Module):
|
|
974 |
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
975 |
else:
|
976 |
g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1)
|
977 |
-
x, m_p, logs_p, x_mask = self.enc_p(
|
978 |
-
x, x_lengths, tone, language, bert, ja_bert, en_bert, g=g
|
979 |
)
|
980 |
logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (
|
981 |
sdp_ratio
|
|
|
10 |
|
11 |
from torch.nn import Conv1d, ConvTranspose1d, Conv2d
|
12 |
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
|
13 |
+
from vector_quantize_pytorch import VectorQuantize
|
14 |
|
15 |
from commons import init_weights, get_padding
|
16 |
from text import symbols, num_tones, num_languages
|
|
|
322 |
n_layers,
|
323 |
kernel_size,
|
324 |
p_dropout,
|
325 |
+
n_speakers,
|
326 |
gin_channels=0,
|
327 |
):
|
328 |
super().__init__()
|
|
|
344 |
self.bert_proj = nn.Conv1d(1024, hidden_channels, 1)
|
345 |
self.ja_bert_proj = nn.Conv1d(1024, hidden_channels, 1)
|
346 |
self.en_bert_proj = nn.Conv1d(1024, hidden_channels, 1)
|
347 |
+
self.emo_proj = nn.Linear(1024, 1024)
|
348 |
+
self.emo_quantizer = [
|
349 |
+
VectorQuantize(
|
350 |
+
dim=1024,
|
351 |
+
codebook_size=10,
|
352 |
+
decay=0.8,
|
353 |
+
commitment_weight=1.0,
|
354 |
+
learnable_codebook=True,
|
355 |
+
ema_update=False,
|
356 |
+
)
|
357 |
+
] * n_speakers
|
358 |
+
self.emo_q_proj = nn.Linear(1024, hidden_channels)
|
359 |
|
360 |
self.encoder = attentions.Encoder(
|
361 |
hidden_channels,
|
|
|
368 |
)
|
369 |
self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
|
370 |
|
371 |
+
def forward(
|
372 |
+
self, x, x_lengths, tone, language, bert, ja_bert, en_bert, emo, sid, g=None
|
373 |
+
):
|
374 |
+
sid = sid.cpu()
|
375 |
bert_emb = self.bert_proj(bert).transpose(1, 2)
|
376 |
ja_bert_emb = self.ja_bert_proj(ja_bert).transpose(1, 2)
|
377 |
en_bert_emb = self.en_bert_proj(en_bert).transpose(1, 2)
|
378 |
+
if emo.size(-1) == 1024:
|
379 |
+
emo_emb = self.emo_proj(emo.unsqueeze(1))
|
380 |
+
emo_commit_loss = torch.zeros(1)
|
381 |
+
emo_emb_ = []
|
382 |
+
for i in range(emo_emb.size(0)):
|
383 |
+
temp_emo_emb, _, temp_emo_commit_loss = self.emo_quantizer[sid[i]](
|
384 |
+
emo_emb[i].unsqueeze(0).cpu()
|
385 |
+
)
|
386 |
+
emo_commit_loss += temp_emo_commit_loss
|
387 |
+
emo_emb_.append(temp_emo_emb)
|
388 |
+
emo_emb = torch.cat(emo_emb_, dim=0).to(emo_emb.device)
|
389 |
+
emo_commit_loss = emo_commit_loss.to(emo_emb.device)
|
390 |
+
else:
|
391 |
+
emo_emb = (
|
392 |
+
self.emo_quantizer[sid[0]]
|
393 |
+
.get_output_from_indices(emo.to(torch.int).cpu())
|
394 |
+
.unsqueeze(0)
|
395 |
+
.to(emo.device)
|
396 |
+
)
|
397 |
+
emo_commit_loss = torch.zeros(1)
|
398 |
x = (
|
399 |
self.emb(x)
|
400 |
+ self.tone_emb(tone)
|
|
|
402 |
+ bert_emb
|
403 |
+ ja_bert_emb
|
404 |
+ en_bert_emb
|
405 |
+
+ self.emo_q_proj(emo_emb)
|
406 |
) * math.sqrt(
|
407 |
self.hidden_channels
|
408 |
) # [b, t, h]
|
|
|
415 |
stats = self.proj(x) * x_mask
|
416 |
|
417 |
m, logs = torch.split(stats, self.out_channels, dim=1)
|
418 |
+
return x, m, logs, x_mask, emo_commit_loss
|
419 |
|
420 |
|
421 |
class ResidualCouplingBlock(nn.Module):
|
|
|
848 |
n_layers,
|
849 |
kernel_size,
|
850 |
p_dropout,
|
851 |
+
self.n_speakers,
|
852 |
gin_channels=self.enc_gin_channels,
|
853 |
)
|
854 |
self.dec = Generator(
|
|
|
916 |
bert,
|
917 |
ja_bert,
|
918 |
en_bert,
|
919 |
+
emo=None,
|
920 |
):
|
921 |
if self.n_speakers > 0:
|
922 |
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
923 |
else:
|
924 |
g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1)
|
925 |
+
x, m_p, logs_p, x_mask, loss_commit = self.enc_p(
|
926 |
+
x, x_lengths, tone, language, bert, ja_bert, en_bert, emo, sid, g=g
|
927 |
)
|
928 |
z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
|
929 |
z_p = self.flow(z, y_mask, g=g)
|
|
|
989 |
y_mask,
|
990 |
(z, z_p, m_p, logs_p, m_q, logs_q),
|
991 |
(x, logw, logw_),
|
992 |
+
loss_commit,
|
993 |
)
|
994 |
|
995 |
def infer(
|
|
|
1002 |
bert,
|
1003 |
ja_bert,
|
1004 |
en_bert,
|
1005 |
+
emo=None,
|
1006 |
noise_scale=0.667,
|
1007 |
length_scale=1,
|
1008 |
noise_scale_w=0.8,
|
|
|
1016 |
g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
|
1017 |
else:
|
1018 |
g = self.ref_enc(y.transpose(1, 2)).unsqueeze(-1)
|
1019 |
+
x, m_p, logs_p, x_mask, _ = self.enc_p(
|
1020 |
+
x, x_lengths, tone, language, bert, ja_bert, en_bert, emo, sid, g=g
|
1021 |
)
|
1022 |
logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (
|
1023 |
sdp_ratio
|
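For orientation (not part of the diff itself): the per-speaker `emo_quantizer` added above is a stock `vector_quantize_pytorch.VectorQuantize` layer. A stand-alone sketch with the same hyperparameters, showing the `(quantized, indices, commit_loss)` return used in the training branch and the index-to-code lookup used at inference:

```python
import torch
from vector_quantize_pytorch import VectorQuantize

# Same settings as TextEncoder.emo_quantizer above.
vq = VectorQuantize(
    dim=1024,
    codebook_size=10,      # ten discrete emotion codes
    decay=0.8,
    commitment_weight=1.0,
    learnable_codebook=True,
    ema_update=False,
)

emo_emb = torch.randn(1, 1, 1024)  # (batch, seq, dim): one pooled wav2vec2 embedding
quantized, indices, commit_loss = vq(emo_emb)
print(quantized.shape, indices.shape, float(commit_loss))

# At inference an integer id is mapped straight back to its code vector,
# which is what the non-1024-dim branch does via get_output_from_indices.
code = vq.get_output_from_indices(indices)
print(code.shape)
```

Note that `[VectorQuantize(...)] * n_speakers` builds a plain Python list repeating one module instance, so it is not registered as a submodule the way an `nn.ModuleList` would be; presumably that is why the forward pass above shuttles the embeddings to CPU around the quantizer.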
oldVersion/V101/__init__.py
CHANGED
@@ -70,4 +70,6 @@ def infer(
|
|
70 |
.numpy()
|
71 |
)
|
72 |
del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers
|
|
|
|
|
73 |
return audio
|
|
|
70 |
.numpy()
|
71 |
)
|
72 |
del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers
|
73 |
+
if torch.cuda.is_available():
|
74 |
+
torch.cuda.empty_cache()
|
75 |
return audio
|
oldVersion/V101/__pycache__/__init__.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V101/__pycache__/__init__.cpython-39.pyc and b/oldVersion/V101/__pycache__/__init__.cpython-39.pyc differ
|
|
oldVersion/V101/__pycache__/models.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V101/__pycache__/models.cpython-39.pyc and b/oldVersion/V101/__pycache__/models.cpython-39.pyc differ
|
|
oldVersion/V101/text/__pycache__/__init__.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V101/text/__pycache__/__init__.cpython-39.pyc and b/oldVersion/V101/text/__pycache__/__init__.cpython-39.pyc differ
|
|
oldVersion/V101/text/__pycache__/chinese.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V101/text/__pycache__/chinese.cpython-39.pyc and b/oldVersion/V101/text/__pycache__/chinese.cpython-39.pyc differ
|
|
oldVersion/V101/text/__pycache__/cleaner.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V101/text/__pycache__/cleaner.cpython-39.pyc and b/oldVersion/V101/text/__pycache__/cleaner.cpython-39.pyc differ
|
|
oldVersion/V101/text/__pycache__/symbols.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V101/text/__pycache__/symbols.cpython-39.pyc and b/oldVersion/V101/text/__pycache__/symbols.cpython-39.pyc differ
|
|
oldVersion/V101/text/__pycache__/tone_sandhi.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V101/text/__pycache__/tone_sandhi.cpython-39.pyc and b/oldVersion/V101/text/__pycache__/tone_sandhi.cpython-39.pyc differ
|
|
oldVersion/V110/__pycache__/__init__.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V110/__pycache__/__init__.cpython-39.pyc and b/oldVersion/V110/__pycache__/__init__.cpython-39.pyc differ
|
|
oldVersion/V110/__pycache__/models.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V110/__pycache__/models.cpython-39.pyc and b/oldVersion/V110/__pycache__/models.cpython-39.pyc differ
|
|
oldVersion/V110/text/__pycache__/__init__.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V110/text/__pycache__/__init__.cpython-39.pyc and b/oldVersion/V110/text/__pycache__/__init__.cpython-39.pyc differ
|
|
oldVersion/V110/text/__pycache__/chinese.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V110/text/__pycache__/chinese.cpython-39.pyc and b/oldVersion/V110/text/__pycache__/chinese.cpython-39.pyc differ
|
|
oldVersion/V110/text/__pycache__/cleaner.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V110/text/__pycache__/cleaner.cpython-39.pyc and b/oldVersion/V110/text/__pycache__/cleaner.cpython-39.pyc differ
|
|
oldVersion/V110/text/__pycache__/japanese.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V110/text/__pycache__/japanese.cpython-39.pyc and b/oldVersion/V110/text/__pycache__/japanese.cpython-39.pyc differ
|
|
oldVersion/V110/text/__pycache__/symbols.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V110/text/__pycache__/symbols.cpython-39.pyc and b/oldVersion/V110/text/__pycache__/symbols.cpython-39.pyc differ
|
|
oldVersion/V110/text/__pycache__/tone_sandhi.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V110/text/__pycache__/tone_sandhi.cpython-39.pyc and b/oldVersion/V110/text/__pycache__/tone_sandhi.cpython-39.pyc differ
|
|
oldVersion/V111/__pycache__/__init__.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/__pycache__/__init__.cpython-39.pyc and b/oldVersion/V111/__pycache__/__init__.cpython-39.pyc differ
|
|
oldVersion/V111/__pycache__/models.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/__pycache__/models.cpython-39.pyc and b/oldVersion/V111/__pycache__/models.cpython-39.pyc differ
|
|
oldVersion/V111/text/__pycache__/__init__.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/text/__pycache__/__init__.cpython-39.pyc and b/oldVersion/V111/text/__pycache__/__init__.cpython-39.pyc differ
|
|
oldVersion/V111/text/__pycache__/chinese.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/text/__pycache__/chinese.cpython-39.pyc and b/oldVersion/V111/text/__pycache__/chinese.cpython-39.pyc differ
|
|
oldVersion/V111/text/__pycache__/cleaner.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/text/__pycache__/cleaner.cpython-39.pyc and b/oldVersion/V111/text/__pycache__/cleaner.cpython-39.pyc differ
|
|
oldVersion/V111/text/__pycache__/japanese.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/text/__pycache__/japanese.cpython-39.pyc and b/oldVersion/V111/text/__pycache__/japanese.cpython-39.pyc differ
|
|
oldVersion/V111/text/__pycache__/symbols.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/text/__pycache__/symbols.cpython-39.pyc and b/oldVersion/V111/text/__pycache__/symbols.cpython-39.pyc differ
|
|
oldVersion/V111/text/__pycache__/tone_sandhi.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/text/__pycache__/tone_sandhi.cpython-39.pyc and b/oldVersion/V111/text/__pycache__/tone_sandhi.cpython-39.pyc differ
|
|
oldVersion/V111/text/fix/__pycache__/__init__.cpython-39.pyc
CHANGED
Binary files a/oldVersion/V111/text/fix/__pycache__/__init__.cpython-39.pyc and b/oldVersion/V111/text/fix/__pycache__/__init__.cpython-39.pyc differ
|
|