Legacy version of HSD2019-SHARED Task - Hate Speech Detection on Social Networks

Organized by hsd2019-organizers

Phases

  • Public test: starts Jan. 1, 2021, midnight UTC
  • Private Test: starts Jan. 1, 2021, midnight UTC
  • Competition ends: never

LEGACY version

  • The original VLSP 2019 competition has closed.
  • This legacy version is for participants who wish to continue their experiments on our Public and Private round datasets.
  • General information:
    •     Timeline: never ending, no deadlines
    •     Participants: open to everyone
    •     Dataset: HSD Competition 2019 Public & Private round datasets
  • We invite you all to participate in HSD 2019 Legacy. We hope that you enjoyed HSD 2019 and find the open Legacy version helpful.

 

In this shared task, participants are challenged to build a multi-class classification model capable of classifying an item into one of three classes (HATE, OFFENSIVE, CLEAN). You will be using a dataset of posts and/or comments from Facebook. A good classification model will hopefully help online discussions become more productive and respectful.
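As a purely illustrative sketch (not the official baseline), the three-way labelling can be expressed as a trivial rule-based classifier. The keyword sets below are placeholders; a real system would learn such signals from the training data:

```python
# Hypothetical rule-based sketch of the three-way task labelling.
# The keyword sets are placeholders, not real lexicons.
LABELS = {0: "CLEAN", 1: "OFFENSIVE_BUT_NOT_HATE", 2: "HATE"}

HATE_WORDS = {"hateword"}            # placeholder tokens
OFFENSIVE_WORDS = {"offensiveword"}  # placeholder tokens

def classify(text: str) -> int:
    """Return a label id (0, 1, or 2) for a post or comment."""
    tokens = set(text.lower().split())
    if tokens & HATE_WORDS:
        return 2
    if tokens & OFFENSIVE_WORDS:
        return 1
    return 0

print(LABELS[classify("a friendly comment")])  # → CLEAN
```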

References

[1] Zhang, Z., Luo, L.: Hate speech detection: A solved problem? The challenging case of long tail on Twitter. CoRR abs/1803.03662 (2018), http://arxiv.org/abs/1803.03662.

 

How to cite us:

```

@article{DBLP:journals/corr/abs-2007-06493,
  author    = {Xuan{-}Son Vu and Thanh Vu and Mai{-}Vu Tran and Thanh Le{-}Cong and Huyen T. M. Nguyen},
  title     = {{HSD} Shared Task in {VLSP} Campaign 2019: Hate Speech Detection for Social Good},
  journal   = {CoRR},
  volume    = {abs/2007.06493},
  year      = {2020},
  url       = {https://arxiv.org/abs/2007.06493},
  eprint    = {2007.06493}
}

```

Result submission 

There are three classes: 0 = CLEAN, 1 = OFFENSIVE_BUT_NOT_HATE, 2 = HATE.

Participants must submit results in the same order as the test set, with the predicted class in the following format:

  • id1, label_id1

  • id2, label_id2

The results file must be named results.csv and compressed as results.zip before submitting. A sample submission file, "05.samples_submission_tbl5_id_2_label.csv", is provided in the dataset tab.
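The required file layout can be produced with the standard library alone. A sketch, assuming `predictions` already holds the model's (id, label_id) pairs in test-set order:

```python
import csv
import zipfile

# Placeholder predictions; in practice these come from your model,
# one (id, label_id) pair per test item, in test-set order.
predictions = [("id1", 0), ("id2", 2)]

# Write results.csv in the "id, label_id" format.
with open("results.csv", "w", newline="") as f:
    csv.writer(f).writerows(predictions)

# Compress it as results.zip for submission.
with zipfile.ZipFile("results.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("results.csv")
```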

Evaluation Metric

Submissions are evaluated using the macro-averaged F1 score. The final ranking of participants is based on the evaluation score on the private test data.
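For reference, macro F1 averages the per-class F1 scores with equal weight, so a rare class (e.g. HATE) counts as much as a dominant one (e.g. CLEAN). A plain-Python sketch of the metric; scikit-learn's `f1_score(..., average="macro")` computes the same quantity:

```python
def macro_f1(y_true, y_pred, labels=(0, 1, 2)):
    """Unweighted mean of per-class F1 scores over the given labels."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Per-class F1s here are 2/3, 2/3 and 1, so the macro average is 7/9.
print(macro_f1([0, 0, 1, 2], [0, 1, 1, 2]))  # ≈ 0.778
```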

Evaluation Method:

Participants are required to submit full prediction results as in the sample submission file (i.e., "05.samples_submission_tbl5_id_2_label.csv"). Evaluation results for each phase are computed internally; in other words, participants submit the same prediction file to each phase to obtain that phase's results.

General rules

  • All participants are required to sign VLSP's user agreement form here (download and sign the form HSD User Agreement): https://drive.google.com/drive/u/2/folders/1HSOWdiMFQy0bjd09TBYaI7lsf7iYZaq5
  • Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.

  • By submitting results to this competition, you consent to the public release of your scores at the Competition workshop and in the associated proceedings, at the task organizers' discretion. Scores may include but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

  • By joining the competition, you affirm and acknowledge that you will comply with applicable laws and regulations, that the software you develop in the course of the competition will not infringe any copyright, intellectual property, or patent of another party, and that you will not breach any applicable laws and regulations related to export control and data privacy and protection.

  • Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.

  • You grant the Competition Organizer the right to use your winning submissions, and the source code and data created for and used to generate them, for any purpose whatsoever and without further approval.

Eligibility

  • Each participant must create an AIHUB account to submit their solution for the competition. Only one account per user is allowed.

  • The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.

  • The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.

Team

  • Participants are allowed to form teams. A team may have up to five members. 

  • You may not participate in more than one team. Each team member must be a single individual operating a separate CodaLab account. 

  • Team mergers are allowed and can be performed by the team leader. Team merger requests will not be permitted after the "Team merger deadline".  

  • In order to merge, the combined team must have a total submission count less than or equal to the maximum allowed for a single team as of the merge date. The maximum allowed is the number of submissions per day per phase multiplied by the number of days the competition has been running. 

  • The organizers don’t provide any assistance regarding team mergers.

 

Submission

  • Maximum number of submissions in each phase:

    • Phase 1 - Public Test: 10 submissions / day / team
    • Phase 2 - Private Test: 5 submissions / day / team
  • Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.

Data

By downloading or accessing the data (texts and labels) provided by the Competition Organizer in any manner, you agree to the following terms:

  • You will not distribute the label data except for non-commercial, academic-research purposes.

  • You will not distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell the labels provided by the Competition Organizer to any third party for any purpose.

  • The labels must not be used for providing surveillance, analyses or research that isolates a group of individuals or any single individual for any unlawful or discriminatory purpose.

  • You accept full responsibility for your use of the labels and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the labels.



Leaderboard

  # | Username           | Score
  1 | SamsonPh           | 0.63
  2 | hsd2019-organizers | 0.18