---
license: mit
datasets:
- uonlp/CulturaX
language:
- tr
pipeline_tag: text-generation
tags:
- Turkish
- turkish
- gpt2
---

# turkish-gpt2

This is a Turkish GPT-2 model. GPT-2 is designed for text-generation tasks: given a text snippet, it continues the text in a coherent and contextually relevant manner.

Because the training data is drawn from diverse sources, including websites, books, and other texts, the model can exhibit biases. Users should be aware of these biases and use the model responsibly.

## Example Usage

```python
from transformers import AutoTokenizer, GPT2LMHeadModel, pipeline

model = GPT2LMHeadModel.from_pretrained("ytu-ce-cosmos/turkish-gpt2")
tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-gpt2")

text_generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
r = text_generator("Teknolojinin gelişimi hayatımızı önemli ölçüde etkiledi. ", max_length=100)
print(r)
# [{'generated_text': 'Teknolojinin gelişimi hayatımızı önemli ölçüde etkiledi. "Dijitalleşme" ile birlikte hayatımızın belirli bir parçası daha rahata ermeye başladı.'}]
```

# Acknowledgments

- Research supported with Cloud TPUs from [Google's TensorFlow Research Cloud](https://sites.research.google/trc/about/) (TFRC). Thanks for providing access to the TFRC ❤️
- Thanks to the generous support from the Hugging Face team, it is possible to download models from their S3 storage 🤗

# Citation

Paper coming soon 😊

### Contact

COSMOS AI Research Group, Yildiz Technical University, Computer Engineering Department
https://cosmos.yildiz.edu.tr/
cosmos@yildiz.edu.tr
*Feel free to reach out to us with any questions about our models or about collaborating with us.* 👋
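
For finer control over generation than the pipeline example above provides, the model can also be driven through `model.generate` directly. The snippet below is a minimal sketch; the sampling settings (`do_sample=True`, `top_p=0.95`, `temperature=0.7`, `repetition_penalty=1.2`) are illustrative assumptions, not tuned recommendations for this model.

```python
from transformers import AutoTokenizer, GPT2LMHeadModel

tokenizer = AutoTokenizer.from_pretrained("ytu-ce-cosmos/turkish-gpt2")
model = GPT2LMHeadModel.from_pretrained("ytu-ce-cosmos/turkish-gpt2")

# Encode the Turkish prompt into input IDs.
inputs = tokenizer("Teknolojinin gelişimi hayatımızı önemli ölçüde etkiledi. ",
                   return_tensors="pt")

# Sample a continuation; the parameter values here are illustrative assumptions.
outputs = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,            # sample instead of greedy decoding
    top_p=0.95,                # nucleus sampling
    temperature=0.7,           # soften the distribution
    repetition_penalty=1.2,    # discourage loops
    pad_token_id=tokenizer.eos_token_id,
)

# The generated sequence includes the prompt, so decode the whole thing.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because `do_sample=True` is set, each run produces a different continuation; drop it (or set a fixed seed via `transformers.set_seed`) for reproducible output.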