
Kelemia for CWE Classification

This model is a fine-tuned version of RoBERTa for classifying Common Weakness Enumeration (CWE) vulnerabilities.

Try the v0.2 now: Dunateo/roberta-cwe-classifier-kelemia-v0.2

Model description

  • Model type: RoBERTa
  • Language(s): English
  • License: MIT
  • Fine-tuned from model: roberta-base
  • Parameters: ~125M (F32)

Intended uses & limitations

This model is intended for classifying software vulnerabilities according to the CWE standard. It should be used as part of a broader security analysis process and not as a standalone solution for identifying vulnerabilities.

Training and evaluation data

The model was fine-tuned and evaluated on the Dunateo/VulnDesc_CWE_Mapping dataset.
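
The dataset can be pulled straight from the Hub with the datasets library. A minimal sketch (the split and column names are not documented here, so inspect the output before relying on them):

from datasets import load_dataset

# Load the vulnerability-description-to-CWE mapping dataset from the Hub
dataset = load_dataset("Dunateo/VulnDesc_CWE_Mapping")
print(dataset)  # inspect the available splits and columns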

Example Usage

Here's an example of how to use this model for inference:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "Dunateo/roberta-cwe-classifier-kelemia"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Prepare input text
text = "The application stores sensitive user data in plaintext."

# Tokenize and prepare input
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)

# Perform inference
with torch.no_grad():
    outputs = model(**inputs)

# Get prediction
probabilities = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(probabilities, dim=-1).item()

print(f"Predicted CWE class: {predicted_class}")
print(f"Confidence: {probabilities[predicted_class].item():.4f}")

Label Dictionary

This model uses the following mapping for CWE classes:

{
  "0": "CWE-79",
  "1": "CWE-89",
  ...
}
Fetch the full mapping from the model repository:

import json
from huggingface_hub import hf_hub_download

# Download the label mapping shipped with the model repository
label_dict_file = hf_hub_download(repo_id="Dunateo/roberta-cwe-classifier-kelemia", filename="label_dict.json")
with open(label_dict_file, 'r') as f:
    label_dict = json.load(f)

# Keys are stored as strings ("0", "1", ...); convert them to ints
id2label = {int(k): v for k, v in label_dict.items()}

print(f"Label: {id2label[predicted_class]}")

Now you can use id2label to map prediction indices to CWE classes.
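
To surface more than the single best class, you can rank the top few probabilities. A short sketch reusing probabilities and id2label from the examples above:

# Top-3 most likely CWE classes with their confidences
top_probs, top_ids = torch.topk(probabilities[0], k=3)
for prob, idx in zip(top_probs, top_ids):
    print(f"{id2label[idx.item()]}: {prob.item():.4f}")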

Training procedure

Training hyperparameters

  • Number of epochs: 3
  • Learning rate: scheduled from 1e-06 to 3.9e-05 (base rate: 5e-5)
  • Batch size: 8
  • Weight decay: 0.01
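
For reference, a hedged sketch of the equivalent transformers TrainingArguments; the scheduler type and output directory are assumptions, since the card only reports the observed learning-rate range:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./kelemia-cwe",   # assumed output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    weight_decay=0.01,
    learning_rate=5e-5,           # base rate; the observed schedule ranged from 1e-06 to 3.9e-05
    lr_scheduler_type="linear",   # assumption, not confirmed by the card
)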

Training results

  • Training Loss: 4.2019 (run-level average reported by the trainer; the final-epoch loss was 3.0617, see below)
  • Validation Loss: 2.8211 (final)
  • Training Time: 5893.25 seconds (approximately 1 hour 38 minutes)
  • Samples per Second: 1.059
  • Steps per Second: 0.066

Loss progression

Epoch | Training Loss | Validation Loss
1.0   | 4.8220        | 4.6394
2.0   | 3.6549        | 3.3551
3.0   | 3.0617        | 2.8211

Evaluation results

The model shows consistent improvement over the training period:

  • Initial Training Loss: 5.5987
  • Final Training Loss: 3.0617
  • Initial Validation Loss: 4.6394
  • Final Validation Loss: 2.8211

Performance analysis

  • Both training and validation loss decrease steadily, indicating good learning progress.
  • The final validation loss (2.82) is lower than the final training loss (3.06), which suggests the model generalizes well to unseen data.
  • There were two instances of gradient explosion (grad_norm of 603089.0625 and 68246.296875) early in training, but the model recovered and stabilized; a mitigation sketch follows below.
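
If you fine-tune this model further, gradient clipping guards against spikes like these. With the transformers Trainer this is the max_grad_norm argument (1.0 is the library default); a minimal sketch:

from transformers import TrainingArguments

# Clip gradient norms each step to prevent explosions like the ones observed in this run
training_args = TrainingArguments(output_dir="./kelemia-cwe", max_grad_norm=1.0)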

Ethical considerations

This model should be used responsibly as part of a comprehensive security strategy. It should not be relied upon as the sole method for identifying or classifying vulnerabilities. False positives and negatives are possible, and results should be verified by security professionals.

Additional information

For more details on the CWE standard, please visit the Common Weakness Enumeration site (https://cwe.mitre.org/).

My report on this: Fine-tuning blogpost.
