
Model description

A Support Vector Machine (SVM) trained to predict whether a skill span is a multiskill or not.

Intended uses & limitations

[More Information Needed]

Training Procedure

[More Information Needed]

Hyperparameters

| Hyperparameter | Value |
|---|---|
| memory | |
| steps | [('transformer', MultiSkillTransformer()), ('clf', SVC(C=1, class_weight='balanced', kernel='linear'))] |
| verbose | False |
| transformer | MultiSkillTransformer() |
| clf | SVC(C=1, class_weight='balanced', kernel='linear') |
| clf__C | 1 |
| clf__break_ties | False |
| clf__cache_size | 200 |
| clf__class_weight | balanced |
| clf__coef0 | 0.0 |
| clf__decision_function_shape | ovr |
| clf__degree | 3 |
| clf__gamma | scale |
| clf__kernel | linear |
| clf__max_iter | -1 |
| clf__probability | False |
| clf__random_state | |
| clf__shrinking | True |
| clf__tol | 0.001 |
| clf__verbose | False |
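
For reference, a pipeline with the configuration listed above can be reconstructed in scikit-learn as sketched below. Note that MultiSkillTransformer is a custom transformer defined in the project's own training code, not a scikit-learn class, so the import path shown is a placeholder assumption.

```python
# Sketch: a Pipeline matching the hyperparameters listed above.
# MultiSkillTransformer is a custom transformer from the project's own code;
# the import below is a placeholder assumption, not part of scikit-learn.
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

from multiskill_transformer import MultiSkillTransformer  # hypothetical module

pipeline = Pipeline(
    steps=[
        ("transformer", MultiSkillTransformer()),  # turns skill spans into features
        ("clf", SVC(C=1, class_weight="balanced", kernel="linear")),
    ]
)
```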

Model Plot

Pipeline(steps=[('transformer', MultiSkillTransformer()), ('clf', SVC(C=1, class_weight='balanced', kernel='linear'))])

Evaluation Results

See the Classification Report below: on an evaluation set of 221 spans the model reaches an accuracy of about 0.88, with F1-scores of 0.87 for SKILL and 0.89 for MULTISKILL.

How to Get Started with the Model

[More Information Needed]
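
Until this section is filled in, the following unofficial sketch shows one way to load and use the pipeline. It assumes the model was exported with skops under the file name model.skops (an assumption) and that the custom MultiSkillTransformer class is importable in your environment; the input spans are illustrative only.

```python
# Minimal sketch, not an official snippet: load the exported pipeline and
# classify skill spans as single skills vs multiskills.
# Assumes the pipeline was saved with skops as "model.skops" (file name assumed)
# and that the custom MultiSkillTransformer class is importable.
from skops.io import get_untrusted_types, load

unknown_types = get_untrusted_types(file="model.skops")
print(unknown_types)  # review these custom types before trusting them

model = load("model.skops", trusted=unknown_types)

spans = ["communication skills", "planning and organising events"]  # illustrative inputs
print(model.predict(spans))  # predicted labels correspond to SKILL / MULTISKILL
```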

Model Card Authors

This model card is written by the following authors:

[More Information Needed]

Model Card Contact

You can contact the model card authors through the following channels: [More Information Needed]

Citation

Below you can find information related to citation.

BibTeX:

[More Information Needed]

Classification Report

| index | precision | recall | f1-score | support |
|---|---|---|---|---|
| SKILL | 0.93617 | 0.807339 | 0.866995 | 109 |
| MULTISKILL | 0.834646 | 0.946429 | 0.887029 | 112 |
| accuracy | 0.877828 | 0.877828 | 0.877828 | 0.877828 |
| macro avg | 0.885408 | 0.876884 | 0.877012 | 221 |
| weighted avg | 0.884719 | 0.877828 | 0.877148 | 221 |
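
For context, a table in this format is what scikit-learn's classification_report produces. The snippet below only illustrates how such a report is generated: the labels are placeholders so the code runs, not the held-out data behind the numbers above, which is not distributed with this card.

```python
# Illustrative only: how a report in the format above is generated.
# y_true / y_pred here are placeholders so the snippet is runnable; they are
# NOT the evaluation data used for the figures reported in this model card.
from sklearn.metrics import classification_report

y_true = ["SKILL", "MULTISKILL", "SKILL", "MULTISKILL"]
y_pred = ["SKILL", "MULTISKILL", "MULTISKILL", "MULTISKILL"]

print(classification_report(y_true, y_pred, digits=6))
```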