---
license: apache-2.0
language:
- tr
metrics:
- accuracy
- f1
base_model:
- burakaytan/roberta-base-turkish-uncased
pipeline_tag: text-classification
---

# byunal/roberta-base-turkish-uncased-stance

![Model card](https://huggingface.co/front/assets/huggingface_logo.svg)

This repository contains a fine-tuned RoBERTa model for stance detection in Turkish. The base model for this fine-tuning is [burakaytan/roberta-base-turkish-uncased](https://huggingface.co/burakaytan/roberta-base-turkish-uncased). The model has been trained on a purpose-built Turkish stance detection dataset.

## Model Description

- **Model Name**: byunal/roberta-base-turkish-uncased-stance
- **Base Model**: [burakaytan/roberta-base-turkish-uncased](https://huggingface.co/burakaytan/roberta-base-turkish-uncased)
- **Task**: Stance Detection
- **Language**: Turkish

The model predicts the stance of a given text towards a specific target. The possible stance labels are:

- **Favor**: The text supports the target
- **Against**: The text opposes the target
- **Neutral**: The text does not express a clear stance on the target

## Installation

To install the required library, run:

```bash
pip install transformers
```

## Usage

Here’s a simple example of how to use the model for stance detection in Turkish:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Load the model and tokenizer
model_name = "byunal/roberta-base-turkish-uncased-stance"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Example text ("I am definitely against this issue.")
text = "Bu konu hakkında kesinlikle karşıyım."

# Tokenize input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

# Perform prediction
with torch.no_grad():
    outputs = model(**inputs)

# Get predicted stance
predictions = torch.argmax(outputs.logits, dim=-1)
stance_label = predictions.item()

# Display result
labels = ["Favor", "Against", "Neutral"]
print(f"The stance is: {labels[stance_label]}")
```

The label order used in this example is not documented explicitly in this card; a sketch for reading the mapping stored in the model config is given at the end of this card.

## Training

This model was fine-tuned on a specialized Turkish stance detection dataset covering a variety of text contexts and opinions. The dataset includes diverse examples from social media, news articles, and public comments, supporting robust stance detection in real-world applications. Training hyperparameters:

- Epochs: 10
- Batch Size: 32
- Learning Rate: 5e-5
- Optimizer: AdamW

## Evaluation

The model was evaluated with accuracy and macro F1-score on a validation dataset. The results confirm the model's effectiveness on Turkish stance detection:

- Accuracy: 79.0%
- Macro F1: 78.0%
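Accuracy and macro F1 are standard multi-class metrics. The sketch below shows how they are typically computed with scikit-learn for a three-class stance task; the gold and predicted label ids are made up for illustration and are not the actual validation data:

```python
from sklearn.metrics import accuracy_score, f1_score

# Made-up gold and predicted class ids for illustration only
# (0/1/2 standing in for the three stance classes).
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 1, 1, 0, 2]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))
```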
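The card does not include the training script or the dataset itself. As a rough orientation only, the sketch below shows how a comparable fine-tuning run could be set up with the Hugging Face Trainer using the hyperparameters listed above (10 epochs, batch size 32, learning rate 5e-5, AdamW); the tiny in-memory dataset and its label assignments are placeholders, not the actual stance corpus:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "burakaytan/roberta-base-turkish-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

# Placeholder data standing in for the (unreleased) stance dataset;
# the 0/1 labels here are illustrative, not the real annotation scheme.
raw = Dataset.from_dict({
    "text": ["Bu konuyu destekliyorum.", "Bu konuya karşıyım."],
    "label": [0, 1],
})
tokenized = raw.map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=64)
)

args = TrainingArguments(
    output_dir="roberta-turkish-stance",
    num_train_epochs=10,             # as listed above
    per_device_train_batch_size=32,  # as listed above
    learning_rate=5e-5,              # as listed above
    optim="adamw_torch",             # AdamW
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized)
trainer.train()
```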
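The `["Favor", "Against", "Neutral"]` label list hardcoded in the usage example assumes that class index 0 means Favor, 1 means Against, and 2 means Neutral. Whether the fine-tuned checkpoint stores these names itself depends on how it was saved; a minimal check reads the mapping from the config:

```python
from transformers import AutoConfig

# If custom label names were saved with the checkpoint, they appear here;
# otherwise this prints the generic defaults {0: "LABEL_0", 1: "LABEL_1", 2: "LABEL_2"}.
config = AutoConfig.from_pretrained("byunal/roberta-base-turkish-uncased-stance")
print(config.id2label)
```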