---
language:
- en
tags:
- autotrain
- text-classification
datasets:
- davanstrien/autotrain-data-dataset-mentions
widget:
- text: ' frases-bertimbau-v0.4 This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.'
- text: Model description BERTa is a transformer-based masked language model for the Catalan language. It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model and has been trained on a medium-size corpus collected from publicly available corpora and crawlers
- text: Model description More information needed
co2_eq_emissions:
  emissions: 0.008999666562870793
base_model: neuralmind/bert-base-portuguese-cased
---

# Model Trained Using AutoTrain

- Problem type: Binary Classification
- Model ID: 3390592983
- CO2 Emissions (in grams): 0.0090

## Validation Metrics

- Loss: 0.014
- Accuracy: 0.997
- Precision: 0.998
- Recall: 0.997
- AUC: 1.000
- F1: 0.998

## Usage

You can use cURL to access this model:

```bash
curl -X POST \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "I love AutoTrain"}' \
  https://api-inference.huggingface.co/models/davanstrien/autotrain-dataset-mentions-3390592983
```

Or the Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "davanstrien/autotrain-dataset-mentions-3390592983", use_auth_token=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "davanstrien/autotrain-dataset-mentions-3390592983", use_auth_token=True
)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
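The model call above returns raw logits of shape `(batch_size, 2)` for this binary classifier. A minimal sketch of converting those logits into a predicted label and confidence score — the example logit values and the label names below are placeholders for illustration; the real mapping lives in `model.config.id2label`:

```python
import torch

# Illustrative logits for one input, as returned in outputs.logits by a
# binary sequence classifier (shape: batch_size x num_labels).
logits = torch.tensor([[-2.0, 3.5]])

# Softmax turns logits into a probability distribution over the two classes.
probs = torch.softmax(logits, dim=-1)[0]
predicted_id = int(probs.argmax())

# Placeholder label names; check model.config.id2label for the real ones.
id2label = {0: "no_dataset_mention", 1: "dataset_mention"}
print(id2label[predicted_id], float(probs[predicted_id]))
```

The same `argmax` over `outputs.logits` works batch-wise if you tokenize several texts at once with `padding=True`.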