vibhorag101 committed
Commit 4daac5e
1 Parent(s): 4b9e121

Update README.md

Files changed (1)
  1. README.md +11 -9
README.md CHANGED
@@ -11,6 +11,11 @@ metrics:
 model-index:
 - name: PHR_Suicide_Prediction_Roberta_Cleaned
   results: []
+datasets:
+- vibhorag101/suicide_prediction_dataset_phr
+language:
+- en
+library_name: transformers
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,8 +23,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 # PHR_Suicide_Prediction_Roberta_Cleaned
 
-This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
-It achieves the following results on the evaluation set:
+This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on a Suicide Prediction dataset sourced from Reddit.
+It achieves the following results on the evaluation/validation set:
 - Loss: 0.1543
 - Accuracy: {'accuracy': 0.9652972367116438}
 - Recall: {'recall': 0.966571403827834}
@@ -30,13 +35,10 @@ It achieves the following results on the evaluation set:
 
 More information needed
 
-## Intended uses & limitations
-
-More information needed
-
 ## Training and evaluation data
-
-More information needed
+The dataset is sourced from Reddit and is available on [Kaggle](https://www.kaggle.com/datasets/nikhileswarkomati/suicide-watch).
+The dataset contains text with binary labels for suicide or non-suicide.
+The evaluation set had ~23000 samples, while the training set had ~186k samples, i.e. 80:10:10 (train:test:val) split.
 
 ## Training procedure
 
@@ -96,4 +98,4 @@ The following hyperparameters were used during training:
 - Transformers 4.31.0
 - Pytorch 2.1.0+cu121
 - Datasets 2.14.5
-- Tokenizers 0.13.3
+- Tokenizers 0.13.3
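The updated card describes an 80:10:10 (train:test:val) split. The card does not say which tooling produced it, but a split with those proportions can be sketched on toy data using scikit-learn's `train_test_split` (the texts and labels below are placeholders, not the actual Reddit data):

```python
# Sketch of an 80:10:10 train/test/val split like the one described in
# the card, on placeholder data. First carve off 80% for training, then
# split the remaining 20% evenly into test and validation.
from sklearn.model_selection import train_test_split

texts = [f"sample {i}" for i in range(1000)]
labels = [i % 2 for i in range(1000)]  # placeholder binary labels

X_train, X_rest, y_train, y_rest = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42
)
X_test, X_val, y_test, y_val = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42
)

print(len(X_train), len(X_test), len(X_val))  # 800 100 100
```

Stratifying on the labels keeps the suicide/non-suicide ratio consistent across all three splits, which matters for a binary classifier evaluated with recall.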
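The Accuracy and Recall values in the card are printed as single-key dicts (e.g. `{'accuracy': 0.965...}`), the shape returned by Hugging Face `evaluate` metrics. A minimal sketch of producing dicts of that shape with scikit-learn on placeholder predictions (the card's actual evaluation code is not shown):

```python
# Sketch: metric dicts shaped like those in the card, computed from
# placeholder binary predictions rather than the card's real data.
from sklearn.metrics import accuracy_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]

accuracy = {"accuracy": accuracy_score(y_true, y_pred)}
recall = {"recall": recall_score(y_true, y_pred)}

print(accuracy, recall)  # {'accuracy': 0.75} {'recall': 0.75}
```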