---
annotations_creators:
  - found
language_creators:
  - found
language:
  - pl
license:
  - unknown
multilinguality:
  - monolingual
size_categories:
  - 10K<n<100K
source_datasets:
  - original
task_categories:
  - text-classification
task_ids:
  - intent-classification
pretty_name: Poleval 2019 cyberbullying
dataset_info:
  - config_name: task01
    features:
      - name: text
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': '0'
              '1': '1'
    splits:
      - name: train
        num_bytes: 1104322
        num_examples: 10041
      - name: test
        num_bytes: 109681
        num_examples: 1000
    download_size: 410001
    dataset_size: 1214003
  - config_name: task02
    features:
      - name: text
        dtype: string
      - name: label
        dtype:
          class_label:
            names:
              '0': '0'
              '1': '1'
              '2': '2'
    splits:
      - name: train
        num_bytes: 1104322
        num_examples: 10041
      - name: test
        num_bytes: 109681
        num_examples: 1000
    download_size: 410147
    dataset_size: 1214003
---

Dataset Card for Poleval 2019 cyberbullying

Table of Contents

Dataset Description

Dataset Summary

Task 6-1: Harmful vs non-harmful

In this task, the participants are to distinguish between normal/non-harmful tweets (class: 0) and tweets that contain any kind of harmful information (class: 1). This includes cyberbullying, hate speech, and related phenomena. The data for the task can be downloaded from the PolEval 2019 website (http://2019.poleval.pl).
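
A minimal loading sketch in Python, assuming the Hugging Face datasets library and that this card's dataset identifier on the Hub is poleval2019_cyberbullying (check the repository name if it differs); the config name task01 comes from the metadata above:

from datasets import load_dataset

# Load the binary (harmful vs non-harmful) configuration.
ds = load_dataset("poleval2019_cyberbullying", "task01")

print(ds)              # DatasetDict with "train" and "test" splits
print(ds["train"][0])  # e.g. {'text': '...', 'label': 0}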

Task 6-2: Type of harmfulness

In this task, the participants shall distinguish between three classes of tweets: 0 (non-harmful), 1 (cyberbullying), and 2 (hate-speech). There are various definitions of both cyberbullying and hate-speech, some of them even putting the two phenomena in the same group. The specific conditions on which we based our annotations for both cyberbullying and hate-speech, worked out during ten years of research, will be summarized in an introductory paper for the task. However, the main and definitive condition for distinguishing the two is whether the harmful action is addressed towards a private person(s) (cyberbullying) or towards a public person/entity/large group (hate-speech).
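
The label names for this configuration can be recovered with the same hedged setup as in the task 6-1 sketch (the dataset id is an assumption; the config name task02 comes from the metadata above):

from datasets import load_dataset

ds = load_dataset("poleval2019_cyberbullying", "task02")

label = ds["train"].features["label"]
# The class_label names are '0', '1', '2' per the metadata; they
# correspond to non-harmful, cyberbullying, and hate-speech.
print(label.names)
print(label.int2str(ds["train"][0]["label"]))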

Supported Tasks and Leaderboards

[More Information Needed]

Languages

Polish

Dataset Structure

Data Instances

[More Information Needed]

Data Fields

  • text: the provided tweet
  • label: for task 6-1, the label can be 0 (non-harmful) or 1 (harmful); for task 6-2, the label can be 0 (non-harmful), 1 (cyberbullying), or 2 (hate-speech). A sketch of accessing both fields follows this list.
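
A short sketch of what the two fields look like per instance, and of the label distribution, under the same dataset-id assumption as in the earlier examples:

from collections import Counter
from datasets import load_dataset

ds = load_dataset("poleval2019_cyberbullying", "task01")

# Each instance is a dict with exactly the two fields described above.
for example in ds["train"].select(range(3)):
    print(example["label"], "->", example["text"][:60])

# Label frequencies over the training split.
print(Counter(ds["train"]["label"]))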

Data Splits

Both configurations use the same train/test partition: a train split with 10,041 examples and a test split with 1,000 examples (per the metadata above).

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@proceedings{ogr:kob:19:poleval,
  editor    = {Maciej Ogrodniczuk and Łukasz Kobyliński},
  title     = {{Proceedings of the PolEval 2019 Workshop}},
  year      = {2019},
  address   = {Warsaw, Poland},
  publisher = {Institute of Computer Science, Polish Academy of Sciences},
  url       = {http://2019.poleval.pl/files/poleval2019.pdf},
  isbn      = {978-83-63159-28-3}
}

Contributions

Thanks to @czabo for adding this dataset.