{ "16-mixed is recommended for 10+ series GPU": "16-mixed is recommended for 10+ series GPU", "5 to 10 seconds of reference audio, useful for specifying speaker.": "5 to 10 seconds of reference audio, useful for specifying speaker.", "A text-to-speech model based on VQ-GAN and Llama developed by [Fish Audio](https://fish.audio).": "A text-to-speech model based on VQ-GAN and Llama developed by [Fish Audio](https://fish.audio).", "Accumulate Gradient Batches": "Accumulate Gradient Batches", "Add to Processing Area": "Add to Processing Area", "Added path successfully!": "Added path successfully!", "Advanced Config": "Advanced Config", "Base LLAMA Model": "Base LLAMA Model", "Batch Inference": "Batch Inference", "Batch Size": "Batch Size", "Changing with the Model Path": "Changing with the Model Path", "Chinese": "Chinese", "Compile Model": "Compile Model", "Compile the model can significantly reduce the inference time, but will increase cold start time": "Compile the model can significantly reduce the inference time, but will increase cold start time", "Copy": "Copy", "Data Preprocessing": "Data Preprocessing", "Data Preprocessing Path": "Data Preprocessing Path", "Data Source": "Data Source", "Decoder Model Config": "Decoder Model Config", "Decoder Model Path": "Decoder Model Path", "Disabled": "Disabled", "Enable Reference Audio": "Enable Reference Audio", "English": "English", "Error Message": "Error Message", "File Preprocessing": "File Preprocessing", "Generate": "Generate", "Generated Audio": "Generated Audio", "If there is no corresponding text for the audio, apply ASR for assistance, support .txt or .lab format": "If there is no corresponding text for the audio, apply ASR for assistance, support .txt or .lab format", "Infer interface is closed": "Infer interface is closed", "Inference Configuration": "Inference Configuration", "Inference Server Configuration": "Inference Server Configuration", "Inference Server Error": "Inference Server Error", "Inferring interface is launched at {}": "Inferring interface is launched at {}", "Initial Learning Rate": "Initial Learning Rate", "Input Audio & Source Path for Transcription": "Input Audio & Source Path for Transcription", "Input Text": "Input Text", "Invalid path: {}": "Invalid path: {}", "It is recommended to use CUDA, if you have low configuration, use CPU": "It is recommended to use CUDA, if you have low configuration, use CPU", "Iterative Prompt Length, 0 means off": "Iterative Prompt Length, 0 means off", "Japanese": "Japanese", "LLAMA Configuration": "LLAMA Configuration", "LLAMA Model Config": "LLAMA Model Config", "LLAMA Model Path": "LLAMA Model Path", "Labeling Device": "Labeling Device", "LoRA Model to be merged": "LoRA Model to be merged", "Maximum Audio Duration": "Maximum Audio Duration", "Maximum Length per Sample": "Maximum Length per Sample", "Maximum Training Steps": "Maximum Training Steps", "Maximum tokens per batch, 0 means no limit": "Maximum tokens per batch, 0 means no limit", "Merge": "Merge", "Merge LoRA": "Merge LoRA", "Merge successfully": "Merge successfully", "Minimum Audio Duration": "Minimum Audio Duration", "Model Output Path": "Model Output Path", "Model Size": "Model Size", "Move": "Move", "Move files successfully": "Move files successfully", "No audio generated, please check the input text.": "No audio generated, please check the input text.", "No selected options": "No selected options", "Number of Workers": "Number of Workers", "Open Inference Server": "Open Inference Server", "Open Labeler WebUI": 
"Open Labeler WebUI", "Open Tensorboard": "Open Tensorboard", "Opened labeler in browser": "Opened labeler in browser", "Optional Label Language": "Optional Label Language", "Optional online ver": "Optional online ver", "Output Path": "Output Path", "Path error, please check the model file exists in the corresponding path": "Path error, please check the model file exists in the corresponding path", "Precision": "Precision", "Probability of applying Speaker Condition": "Probability of applying Speaker Condition", "Put your text here.": "Put your text here.", "Reference Audio": "Reference Audio", "Reference Text": "Reference Text", "Related code and weights are released under CC BY-NC-SA 4.0 License.": "Related code and weights are released under CC BY-NC-SA 4.0 License.", "Remove Selected Data": "Remove Selected Data", "Removed path successfully!": "Removed path successfully!", "Repetition Penalty": "Repetition Penalty", "Save model every n steps": "Save model every n steps", "Select LLAMA ckpt": "Select LLAMA ckpt", "Select VITS ckpt": "Select VITS ckpt", "Select VQGAN ckpt": "Select VQGAN ckpt", "Select source file processing method": "Select source file processing method", "Select the model to be trained (Depending on the Tab page you are on)": "Select the model to be trained (Depending on the Tab page you are on)", "Selected: {}": "Selected: {}", "Speaker": "Speaker", "Speaker is identified by the folder name": "Speaker is identified by the folder name", "Start Training": "Start Training", "Streaming Audio": "Streaming Audio", "Streaming Generate": "Streaming Generate", "Tensorboard Host": "Tensorboard Host", "Tensorboard Log Path": "Tensorboard Log Path", "Tensorboard Port": "Tensorboard Port", "Tensorboard interface is closed": "Tensorboard interface is closed", "Tensorboard interface is launched at {}": "Tensorboard interface is launched at {}", "Text is too long, please keep it under {} characters.": "Text is too long, please keep it under {} characters.", "The path of the input folder on the left or the filelist. Whether checked or not, it will be used for subsequent training in this list.": "The path of the input folder on the left or the filelist. 
Whether checked or not, it will be used for subsequent training in this list.", "Training Configuration": "Training Configuration", "Training Error": "Training Error", "Training stopped": "Training stopped", "Type name of the speaker": "Type name of the speaker", "Type the path or select from the dropdown": "Type the path or select from the dropdown", "Use LoRA": "Use LoRA", "Use LoRA can save GPU memory, but may reduce the quality of the model": "Use LoRA can save GPU memory, but may reduce the quality of the model", "Use filelist": "Use filelist", "Use large for 10G+ GPU, medium for 5G, small for 2G": "Use large for 10G+ GPU, medium for 5G, small for 2G", "VITS Configuration": "VITS Configuration", "VQGAN Configuration": "VQGAN Configuration", "Validation Batch Size": "Validation Batch Size", "View the status of the preprocessing folder (use the slider to control the depth of the tree)": "View the status of the preprocessing folder (use the slider to control the depth of the tree)", "We are not responsible for any misuse of the model, please consider your local laws and regulations before using it.": "We are not responsible for any misuse of the model, please consider your local laws and regulations before using it.", "WebUI Host": "WebUI Host", "WebUI Port": "WebUI Port", "Whisper Model": "Whisper Model", "You can find the source code [here](https://github.com/fishaudio/fish-speech) and models [here](https://huggingface.co/fishaudio/fish-speech-1).": "You can find the source code [here](https://github.com/fishaudio/fish-speech) and models [here](https://huggingface.co/fishaudio/fish-speech-1).", "bf16-true is recommended for 30+ series GPU, 16-mixed is recommended for 10+ series GPU": "bf16-true is recommended for 30+ series GPU, 16-mixed is recommended for 10+ series GPU", "latest": "latest", "new": "new", "Realtime Transform Text": "Realtime Transform Text", "Normalization Result Preview (Currently Only Chinese)": "Normalization Result Preview (Currently Only Chinese)", "Text Normalization": "Text Normalization", "Select Example Audio": "Select Example Audio" }