mrfakename committed
Commit 2669b3f
Parent: 9a3e0d5

Sync from GitHub repo


This Space is synced from the GitHub repo: https://github.com/SWivid/F5-TTS. Please submit contributions there.

Files changed (1)
  1. README_REPO.md +2 -2
README_REPO.md CHANGED
@@ -130,7 +130,7 @@ Currently support 30s for a single generation, which is the **TOTAL** length of
 
 ### CLI Inference
 
-Either you can specify everything in `inference-cli.toml` or override with flags. Leave `--ref_text ""` will have ASR model transcribe the reference audio automatically (use extra GPU memory). If encounter network error, consider use local ckpt, just set `ckpt_path` in `inference-cli.py`
+Either you can specify everything in `inference-cli.toml` or override with flags. Leave `--ref_text ""` will have ASR model transcribe the reference audio automatically (use extra GPU memory). If encounter network error, consider use local ckpt, just set `ckpt_file` in `inference-cli.py`
 
 for change model use `--ckpt_file` to specify the model you want to load,
 for change vocab.txt use `--vocab_file` to provide your vocab.txt file.
@@ -158,7 +158,7 @@ Currently supported features:
 - Podcast Generation
 - Multiple Speech-Type Generation
 
-You can launch a Gradio app (web interface) to launch a GUI for inference (will load ckpt from Huggingface, you may set `ckpt_path` to local file in `gradio_app.py`). Currently load ASR model, F5-TTS and E2 TTS all in once, thus use more GPU memory than `inference-cli`.
+You can launch a Gradio app (web interface) to launch a GUI for inference (will load ckpt from Huggingface, you may also use local file in `gradio_app.py`). Currently load ASR model, F5-TTS and E2 TTS all in once, thus use more GPU memory than `inference-cli`.
 
 ```bash
 python gradio_app.py
 
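
For context on the flags referenced in the first hunk, a minimal `inference-cli.py` invocation might look like the sketch below. Only `--ref_text`, `--ckpt_file`, and `--vocab_file` come from the text above; both file paths are placeholders, not actual repo paths.

```bash
# Minimal sketch using only the flags named in the diff above.
# The checkpoint and vocab paths are placeholders.
python inference-cli.py \
  --ckpt_file /path/to/local_checkpoint.pt \
  --vocab_file /path/to/vocab.txt \
  --ref_text ""  # empty string lets the ASR model transcribe the reference audio
```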