Update README.md
README.md CHANGED

@@ -493,12 +493,7 @@ In particular, we caution against using Whisper models to transcribe recordings
 
 ## Training Data
 
-
-
-The large-v3 checkpoint is trained on 1 million hours of weakly labeled audio and 4 million hours of pseudo-labeled audio collected using Whisper large-v2.
-
-As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
-
+No information provided.
 
 ## Performance and Limitations
 