{
"16-mixed is recommended for 10+ series GPU": "10+ ์๋ฆฌ์ฆ GPU์๋ 16-mixed๋ฅผ ๊ถ์ฅํฉ๋๋ค.",
"5 to 10 seconds of reference audio, useful for specifying speaker.": "ํ์๋ฅผ ํน์ ํ๋ ๋ฐ ์ ์๋ฏธํ 5~10์ด์ ๊ธธ์ด์ ์ฐธ์กฐ ์ค๋์ค ๋ฐ์ดํฐ.",
"A text-to-speech model based on VQ-GAN and Llama developed by [Fish Audio](https://fish.audio).": "[Fish Audio](https://fish.audio)์์ ๊ฐ๋ฐํ VQ-GAN ๋ฐ Llama ๊ธฐ๋ฐ์ ํ
์คํธ ์์ฑ ๋ณํ ๋ชจ๋ธ.",
"Accumulate Gradient Batches": "๊ทธ๋ผ๋์ธํธ ๋ฐฐ์น ๋์ ",
"Add to Processing Area": "์ฒ๋ฆฌ ์์ญ์ ์ถ๊ฐ",
"Added path successfully!": "๊ฒฝ๋ก๊ฐ ์ฑ๊ณต์ ์ผ๋ก ์ถ๊ฐ๋์์ต๋๋ค!",
"Advanced Config": "๊ณ ๊ธ ์ค์ ",
"Base LLAMA Model": "๊ธฐ๋ณธ LLAMA ๋ชจ๋ธ",
"Batch Inference": "๋ฐฐ์น ์ถ๋ก ",
"Batch Size": "๋ฐฐ์น ํฌ๊ธฐ",
"Changing with the Model Path": "๋ชจ๋ธ ๊ฒฝ๋ก์ ๋ฐ๋ผ ๋ณ๊ฒฝ ์ค",
"Chinese": "์ค๊ตญ์ด",
"Compile Model": "๋ชจ๋ธ ์ปดํ์ผ",
"Compile the model can significantly reduce the inference time, but will increase cold start time": "๋ชจ๋ธ์ ์ปดํ์ผํ๋ฉด ์ถ๋ก ์๊ฐ์ด ํฌ๊ฒ ์ค์ด๋ค์ง๋ง, ์ด๊ธฐ ์์ ์๊ฐ์ด ๊ธธ์ด์ง๋๋ค.",
"Copy": "๋ณต์ฌ",
"Data Preprocessing": "๋ฐ์ดํฐ ์ ์ฒ๋ฆฌ",
"Data Preprocessing Path": "๋ฐ์ดํฐ ์ ์ฒ๋ฆฌ ๊ฒฝ๋ก",
"Data Source": "๋ฐ์ดํฐ ์์ค",
"Decoder Model Config": "๋์ฝ๋ ๋ชจ๋ธ ์ค์ ",
"Decoder Model Path": "๋์ฝ๋ ๋ชจ๋ธ ๊ฒฝ๋ก",
"Disabled": "๋นํ์ฑํ ๋จ",
"Enable Reference Audio": "์ฐธ๊ณ ์์ฑ ํ์ฑํ",
"English": "์์ด",
"Error Message": "์ค๋ฅ ๋ฉ์์ง",
"File Preprocessing": "ํ์ผ ์ ์ฒ๋ฆฌ",
"Generate": "์์ฑ",
"Generated Audio": "์์ฑ๋ ์ค๋์ค",
"If there is no corresponding text for the audio, apply ASR for assistance, support .txt or .lab format": "์ค๋์ค์ ๋์ํ๋ ํ
์คํธ๊ฐ ์์ ๊ฒฝ์ฐ, ASR์ ์ ์ฉํด ์ง์ํ๋ฉฐ, .txt ๋๋ .lab ํ์์ ์ง์ํฉ๋๋ค.",
"Infer interface is closed": "์ถ๋ก ์ธํฐํ์ด์ค๊ฐ ๋ซํ์ต๋๋ค.",
"Inference Configuration": "์ถ๋ก ์ค์ ",
"Inference Server Configuration": "์ถ๋ก ์๋ฒ ์ค์ ",
"Inference Server Error": "์ถ๋ก ์๋ฒ ์ค๋ฅ",
"Inferring interface is launched at {}": "์ถ๋ก ์ธํฐํ์ด์ค๊ฐ {}์์ ์์๋์์ต๋๋ค.",
"Initial Learning Rate": "์ด๊ธฐ ํ์ต๋ฅ ",
"Input Audio & Source Path for Transcription": "์ ์ฌํ ์
๋ ฅ ์ค๋์ค ๋ฐ ์์ค ๊ฒฝ๋ก",
"Input Text": "์
๋ ฅ ํ
์คํธ",
"Invalid path: {}": "์ ํจํ์ง ์์ ๊ฒฝ๋ก: {}",
"It is recommended to use CUDA, if you have low configuration, use CPU": "CUDA ์ฌ์ฉ์ ๊ถ์ฅํ๋ฉฐ, ๋ฎ์ ์ฌ์์ผ ๊ฒฝ์ฐ CPU๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์ ๊ถ์ฅํฉ๋๋ค.",
"Iterative Prompt Length, 0 means off": "๋ฐ๋ณต ํ๋กฌํํธ ๊ธธ์ด. (0:๋นํ์ฑํ)",
"Japanese": "์ผ๋ณธ์ด",
"LLAMA Configuration": "LLAMA ์ค์ ",
"LLAMA Model Config": "LLAMA ๋ชจ๋ธ ์ค์ ",
"LLAMA Model Path": "LLAMA ๋ชจ๋ธ ๊ฒฝ๋ก",
"Labeling Device": "๋ผ๋ฒจ๋ง ์ฅ์น",
"LoRA Model to be merged": "๋ณํฉํ LoRA ๋ชจ๋ธ",
"Maximum Audio Duration": "์ต๋ ์ค๋์ค ๊ธธ์ด",
"Maximum Length per Sample": "์ํ๋น ์ต๋ ๊ธธ์ด",
"Maximum Training Steps": "์ต๋ ํ์ต ๋จ๊ณ",
"Maximum tokens per batch, 0 means no limit": "๋ฐฐ์น๋น ์ต๋ ํ ํฐ ์(0:์ ํ ์์)",
"Merge": "๋ณํฉ",
"Merge LoRA": "LoRA ๋ณํฉ",
"Merge successfully": "์ฑ๊ณต์ ์ผ๋ก ๋ณํฉ ๋์์ต๋๋ค.",
"Minimum Audio Duration": "์ต์ ์ค๋์ค ๊ธธ์ด",
"Model Output Path": "๋ชจ๋ธ ์ถ๋ ฅ ๊ฒฝ๋ก",
"Model Size": "๋ชจ๋ธ ํฌ๊ธฐ",
"Move": "์ด๋",
"Move files successfully": "ํ์ผ์ด ์ฑ๊ณต์ ์ผ๋ก ์ด๋๋์์ต๋๋ค.",
"No audio generated, please check the input text.": "์์ฑ๋ ์ค๋์ค๊ฐ ์์ต๋๋ค. ์
๋ ฅ๋ ํ
์คํธ๋ฅผ ํ์ธํ์ธ์.",
"No selected options": "์ต์
์ด ์ ํ๋์ง ์์์ต๋๋ค.",
"Number of Workers": "์์
์ ์",
"Open Inference Server": "์ถ๋ก ์๋ฒ ์ด๊ธฐ",
"Open Labeler WebUI": "๋ผ๋ฒจ๋ฌ WebUI ์ด๊ธฐ",
"Open Tensorboard": "Tensorboard ์ด๊ธฐ",
"Opened labeler in browser": "๋ธ๋ผ์ฐ์ ์์ ๋ผ๋ฒจ๋ฌ๊ฐ ์ด๋ ธ์ต๋๋ค.",
"Optional Label Language": "์ ํ์ ๋ผ๋ฒจ ์ธ์ด",
"Optional online ver": "์จ๋ผ์ธ ๋ฒ์ ์ ํ",
"Output Path": "์ถ๋ ฅ ๊ฒฝ๋ก",
"Path error, please check the model file exists in the corresponding path": "๊ฒฝ๋ก ์ค๋ฅ, ํด๋น ๊ฒฝ๋ก์ ๋ชจ๋ธ ํ์ผ์ด ์๋์ง ํ์ธํ์ญ์์ค.",
"Precision": "์ ๋ฐ๋",
"Probability of applying Speaker Condition": "ํ์ ์กฐ๊ฑด ์ ์ฉ ํ๋ฅ ",
"Put your text here.": "์ฌ๊ธฐ์ ํ
์คํธ๋ฅผ ์
๋ ฅํ์ธ์.",
"Reference Audio": "์ฐธ๊ณ ์ค๋์ค",
"Reference Text": "์ฐธ๊ณ ํ
์คํธ",
"Related code and weights are released under CC BY-NC-SA 4.0 License.": "๊ด๋ จ ์ฝ๋ ๋ฐ ๊ฐ์ค์น๋ CC BY-NC-SA 4.0 ๋ผ์ด์ ์ค ํ์ ๋ฐฐํฌ๋ฉ๋๋ค.",
"Remove Selected Data": "์ ํํ ๋ฐ์ดํฐ ์ ๊ฑฐ",
"Removed path successfully!": "๊ฒฝ๋ก๊ฐ ์ฑ๊ณต์ ์ผ๋ก ์ ๊ฑฐ๋์์ต๋๋ค!",
"Repetition Penalty": "๋ฐ๋ณต ํจ๋ํฐ",
"Save model every n steps": "n ๋จ๊ณ๋ง๋ค ๋ชจ๋ธ ์ ์ฅ",
"Select LLAMA ckpt": "LLAMA ckpt ์ ํ",
"Select VITS ckpt": "VITS ckpt ์ ํ",
"Select VQGAN ckpt": "VQGAN ckpt ์ ํ",
"Select source file processing method": "์์ค ํ์ผ ์ฒ๋ฆฌ ๋ฐฉ๋ฒ ์ ํ",
"Select the model to be trained (Depending on the Tab page you are on)": "ํ์ตํ ๋ชจ๋ธ ์ ํ(ํญ ํ์ด์ง์ ๋ฐ๋ผ ๋ค๋ฆ)",
"Selected: {}": "์ ํ๋จ: {}",
"Speaker": "ํ์",
"Speaker is identified by the folder name": "ํ์๋ ํด๋ ์ด๋ฆ์ผ๋ก ์๋ณ๋ฉ๋๋ค",
"Start Training": "ํ์ต ์์",
"Streaming Audio": "์คํธ๋ฆฌ๋ฐ ์ค๋์ค",
"Streaming Generate": "์คํธ๋ฆฌ๋ฐ ์์ฑ",
"Tensorboard Host": "Tensorboard ํธ์คํธ",
"Tensorboard Log Path": "Tensorboard ๋ก๊ทธ ๊ฒฝ๋ก",
"Tensorboard Port": "Tensorboard ํฌํธ",
"Tensorboard interface is closed": "Tensorboard ์ธํฐํ์ด์ค๊ฐ ๋ซํ์ต๋๋ค",
"Tensorboard interface is launched at {}": "Tensorboard ์ธํฐํ์ด์ค๊ฐ {}์์ ์์๋์์ต๋๋ค.",
"Text is too long, please keep it under {} characters.": "ํ
์คํธ๊ฐ ๋๋ฌด ๊น๋๋ค. {}์ ์ดํ๋ก ์
๋ ฅํด์ฃผ์ธ์.",
"The path of the input folder on the left or the filelist. Whether checked or not, it will be used for subsequent training in this list.": "์ผ์ชฝ์ ์
๋ ฅ ํด๋ ๊ฒฝ๋ก ๋๋ ํ์ผ ๋ชฉ๋ก์ ๊ฒฝ๋ก. ์ฒดํฌ ์ฌ๋ถ์ ๊ด๊ณ์์ด ์ด ๋ชฉ๋ก์์ ํ์ ํ์ต์ ์ฌ์ฉ๋ฉ๋๋ค.",
"Training Configuration": "ํ์ต ์ค์ ",
"Training Error": "ํ์ต ์ค๋ฅ",
"Training stopped": "ํ์ต์ด ์ค์ง๋์์ต๋๋ค.",
"Type name of the speaker": "ํ์์ ์ด๋ฆ์ ์
๋ ฅํ์ธ์.",
"Type the path or select from the dropdown": "๊ฒฝ๋ก๋ฅผ ์
๋ ฅํ๊ฑฐ๋ ๋๋กญ๋ค์ด์์ ์ ํํ์ธ์.",
"Use LoRA": "LoRA ์ฌ์ฉ",
"Use LoRA can save GPU memory, but may reduce the quality of the model": "LoRA๋ฅผ ์ฌ์ฉํ๋ฉด GPU ๋ฉ๋ชจ๋ฆฌ๋ฅผ ์ ์ฝํ ์ ์์ง๋ง, ๋ชจ๋ธ์ ํ์ง์ด ์ ํ๋ ์ ์์ต๋๋ค.",
"Use filelist": "ํ์ผ ๋ชฉ๋ก ์ฌ์ฉ",
"Use large for 10G+ GPU, medium for 5G, small for 2G": "10G+ GPU ํ๊ฒฝ์์ large, 5G์์ medium, 2G์์ small์ ์ฌ์ฉํ ๊ฒ์ ๊ถ์ฅํฉ๋๋ค.",
"VITS Configuration": "VITS ์ค์ ",
"VQGAN Configuration": "VQGAN ์ค์ ",
"Validation Batch Size": "๊ฒ์ฆ ๋ฐฐ์น ํฌ๊ธฐ",
"View the status of the preprocessing folder (use the slider to control the depth of the tree)": "์ ์ฒ๋ฆฌ ํด๋์ ์ํ๋ฅผ ํ์ธํฉ๋๋ค(์ฌ๋ผ์ด๋๋ฅผ ์ฌ์ฉํ์ฌ ํธ๋ฆฌ์ ๊น์ด๋ฅผ ์กฐ์ ํฉ๋๋ค)",
"We are not responsible for any misuse of the model, please consider your local laws and regulations before using it.": "๋ชจ๋ธ์ ์ค์ฉ์ ๋ํด ์ฑ
์์ง์ง ์์ต๋๋ค. ์ฌ์ฉํ๊ธฐ ์ ์ ํ์ง ๋ฒ๋ฅ ๊ณผ ๊ท์ ์ ๊ณ ๋ คํ์๊ธธ ๋ฐ๋๋๋ค.",
"WebUI Host": "WebUI ํธ์คํธ",
"WebUI Port": "WebUI ํฌํธ",
"Whisper Model": "Whisper ๋ชจ๋ธ",
"You can find the source code [here](https://github.com/fishaudio/fish-speech) and models [here](https://huggingface.co/fishaudio/fish-speech-1).": "์์ค ์ฝ๋๋ [์ด๊ณณ](https://github.com/fishaudio/fish-speech)์์, ๋ชจ๋ธ์ [์ด๊ณณ](https://huggingface.co/fishaudio/fish-speech-1)์์ ํ์ธํ์ค ์ ์์ต๋๋ค.",
"bf16-true is recommended for 30+ series GPU, 16-mixed is recommended for 10+ series GPU": "30+ ์๋ฆฌ์ฆ GPU์๋ bf16-true๋ฅผ, 10+ ์๋ฆฌ์ฆ GPU์๋ 16-mixed๋ฅผ ๊ถ์ฅํฉ๋๋ค",
"latest": "์ต์ ",
"new": "์๋ก์ด",
"Realtime Transform Text": "์ค์๊ฐ ํ
์คํธ ๋ณํ",
"Normalization Result Preview (Currently Only Chinese)": "์ ๊ทํ ๊ฒฐ๊ณผ ๋ฏธ๋ฆฌ๋ณด๊ธฐ(ํ์ฌ ์ค๊ตญ์ด๋ง ์ง์)",
"Text Normalization": "ํ
์คํธ ์ ๊ทํ",
"Select Example Audio": "์์ ์ค๋์ค ์ ํ"
}