LLM-Powered Grapheme-to-Phoneme Conversion: Benchmark and Case Study
Abstract
Grapheme-to-phoneme (G2P) conversion is critical in speech processing, particularly for applications like speech synthesis. G2P systems must possess linguistic understanding and contextual awareness of languages with polyphone words and context-dependent phonemes. Large language models (LLMs) have recently demonstrated significant potential in various language tasks, suggesting that their phonetic knowledge could be leveraged for G2P. In this paper, we evaluate the performance of LLMs in G2P conversion and introduce prompting and post-processing methods that enhance LLM outputs without additional training or labeled data. We also present a benchmarking dataset designed to assess G2P performance on sentence-level phonetic challenges of the Persian language. Our results show that by applying the proposed methods, LLMs can outperform traditional G2P tools, even in an underrepresented language like Persian, highlighting the potential of developing LLM-aided G2P systems.
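The abstract describes prompting an LLM for phonemic transcriptions and then post-processing its output. The paper does not specify its prompts or filters here, so the following is a minimal hypothetical sketch of what such a pipeline could look like: a zero-shot prompt builder plus a post-processing step that keeps only tokens from an assumed (illustrative, incomplete) Persian phoneme inventory, discarding any conversational wrapper text the model might emit. All names and the inventory are assumptions for illustration, not the authors' method.

```python
import re

# Illustrative, incomplete phoneme inventory (an assumption, not the
# paper's actual symbol set).
PHONEME_INVENTORY = {
    "p", "b", "t", "d", "k", "g", "q", "m", "n", "s", "z",
    "f", "v", "x", "r", "l", "j", "h",
    "a", "e", "o", "i", "u", "A",
}

def build_g2p_prompt(sentence: str) -> str:
    """Build a zero-shot G2P prompt asking for phonemes only."""
    return (
        "Convert the following Persian sentence to its phonemic "
        "transcription. Output only space-separated phonemes, one word "
        "per line, and nothing else:\n" + sentence
    )

def post_process(raw_output: str) -> list[str]:
    """Keep only lines whose tokens all belong to the phoneme
    inventory, dropping chatty wrapper text around the answer."""
    words = []
    for line in raw_output.strip().splitlines():
        tokens = [t for t in re.split(r"\s+", line) if t in PHONEME_INVENTORY]
        if tokens:
            words.append(" ".join(tokens))
    return words
```

For example, if the model replies with a preamble such as `"Sure! Here it is:\ns a l A m"`, `post_process` drops the preamble line and returns `["s a l A m"]`. Restricting output to a fixed inventory is one simple way to enforce valid phonemes without additional training or labeled data, in the spirit of the post-processing the abstract mentions.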
Community
The following related papers were recommended by the Semantic Scholar API:
- EdaCSC: Two Easy Data Augmentation Methods for Chinese Spelling Correction (2024)
- Chain-of-Translation Prompting (CoTR): A Novel Prompting Technique for Low Resource Languages (2024)
- PRESENT: Zero-Shot Text-to-Prosody Control (2024)
- StyleSpeech: Parameter-efficient Fine Tuning for Pre-trained Controllable Text-to-Speech (2024)
- Towards interfacing large language models with ASR systems using confidence measures and prompting (2024)
Datasets citing this paper: 2