O-COCOSDA and VLSP 2022 - Indic MSV Shared task - I-MSV

Organized by cocosda-msv-organizer


Our website

https://vlsp.org.vn/cocosda2022/i-msv

Important dates

  • August 12th, 2022: Challenge registration opens.

  • September 7th, 2022: Release of training and development data.

  • September 15th, 2022: Release of public test data.

  • October 10th, 2022: Release of private test data.

  • October 15th, 2022: Submission of scores.

  • October 20th, 2022: Announcement of the Top 3 teams to be presented at the conference.

  • November 15th, 2022: Technical report submission.

  • November 26th, 2022: Announcement of the ranking winners.

Description

The I-MSV challenge consists of two tracks: Track 1 (Constrained SV) and Track 2 (Unconstrained SV). The evaluation rules for both tracks are as follows:

Submissions outside the defined tasks will not be considered in this challenge.

Constrained SV: Participants are not allowed to use speech data other than the speech data released as a part of the constrained SV challenge for the development of the SV system.

Unconstrained SV: Participants are free to use any publicly available speech data in addition to the audio data released as a part of unconstrained SV.

Metric for evaluation: Equal Error Rate (EER) will be used as the metric for performance evaluation for the defined test scenarios.

Participating teams need to share their final SV systems, along with a write-up in O-COCOSDA format (https://vlsp.org.vn/cocosda2022/paper-submission), which should briefly describe:

  • The databases used, with appropriate citations.

  • The methods used to build the system.

  • A GitHub link with proper code structure and details.

Contact Us

Please feel free to contact us if you have any questions via [email protected].

Evaluation data

Enrolment data: The enrolment data consists of English-language utterances captured in multiple sessions using only a headphone as the sensor.

Public test data: Public test data will be provided for two conditions namely, matched and mismatched test conditions.

  • Matched test condition: The language and sensor used for test data collection are the same as for the enrolment data.

  • Mismatched test condition: The language and sensor used for test data collection differ from those of the enrolment data.

Private test data: In the private test data, the test utterances are collected using different languages and five sensors, including the sensor used for collecting the enrolment data. The duration of the test utterances varies from 10 to 60 seconds.

Evaluation metric

The performance of the models will be evaluated by the Equal Error Rate (EER) where the False Acceptance Rate (FAR) equals the False Rejection Rate (FRR).
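As a reference, the EER can be computed by sweeping a decision threshold over the scores and finding the point where FAR and FRR meet. This is a minimal pure-Python sketch; the function name and trial-list format are illustrative, not part of the task definition:

```python
def compute_eer(scores, labels):
    """Equal Error Rate from similarity scores.

    scores: similarity scores (higher means more likely the same speaker)
    labels: 1 for a target trial, 0 for an impostor trial
    """
    targets = [s for s, l in zip(scores, labels) if l == 1]
    impostors = [s for s, l in zip(scores, labels) if l == 0]
    best_gap, eer = float("inf"), 1.0
    # Sweep thresholds over the observed score values.
    for t in sorted(set(scores)):
        far = sum(s >= t for s in impostors) / len(impostors)  # false acceptance rate
        frr = sum(s < t for s in targets) / len(targets)       # false rejection rate
        # EER is taken where FAR and FRR are closest.
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

In practice, libraries such as scikit-learn's ROC utilities give the same result from the full FAR/FRR curves; the sweep above is only meant to make the definition concrete.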

Submission Guidelines

Multiple submissions are allowed, subject to the limits of each phase; the evaluation result is based on the submission with the lowest EER.

The submission file comprises a header, a set of test pairs, and the cosine similarity score output by the system for each pair. The order of the pairs in the submission file must follow the same order as the pair list. Each line must contain three fields separated by a COMMA character in the following format:

utterance_id<COMMA>speaker_id<COMMA>score<NEWLINE>

where

utterance_id - The test speech file
speaker_id - The claimed (enrolled) speaker identity
score - The similarity score, in the range of 0 to 1
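Since raw cosine similarity lies in [-1, 1], it must be mapped into the required 0-to-1 range. One common choice is (cos + 1) / 2, sketched below; the task only mandates the score range, not this particular mapping, and the function name is illustrative:

```python
import math

def cosine_score(emb_a, emb_b):
    """Cosine similarity between two speaker embeddings, mapped to [0, 1].

    The (cos + 1) / 2 mapping is an assumption, one common convention;
    any monotonic mapping into [0, 1] would satisfy the format.
    """
    dot = sum(a * b for a, b in zip(emb_a, emb_b))
    norm_a = math.sqrt(sum(a * a for a in emb_a))
    norm_b = math.sqrt(sum(b * b for b in emb_b))
    cos = dot / (norm_a * norm_b)
    return (cos + 1) / 2
```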

For example:

utterance_id,speaker_id,score

file1.wav,1001,0.81285
file1.wav,1002,0.01029
...

After creating the file, rename it to results.csv, zip it, and upload it via the submission menu.
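The steps above (write the header line, one comma-separated score per trial pair, then zip the result for upload) can be sketched as follows; the trial list here is hypothetical:

```python
import csv
import zipfile

# Hypothetical trials: (utterance_id, speaker_id, score), with scores in [0, 1].
trials = [
    ("file1.wav", "1001", 0.81285),
    ("file1.wav", "1002", 0.01029),
]

# Write the required header plus one comma-separated line per pair,
# keeping the same order as the provided pair list.
with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["utterance_id", "speaker_id", "score"])
    for utterance_id, speaker_id, score in trials:
        writer.writerow([utterance_id, speaker_id, f"{score:.5f}"])

# Zip results.csv for upload via the submission menu.
with zipfile.ZipFile("results.zip", "w") as zf:
    zf.write("results.csv")
```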

General rules

  • Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.

  • By submitting results to this competition, you consent to the public release of your scores at the Competition workshop and in the associated proceedings, at the task organizers' discretion. Scores may include but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

  • By joining the competition, you accept the terms and conditions of the Terms of Participation and the Data Use Agreement of the O-COCOSDA and VLSP 2022 - MSV Shared task, which have been sent to your email.

  • By joining the competition, you affirm and acknowledge that you agree to comply with applicable laws and regulations, that you will not infringe upon any copyrights, intellectual property, or patents of another party in the software you develop in the course of the competition, and that you will not breach any applicable laws and regulations related to export control and data privacy and protection.

  • Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.

  • Participants grant the Competition Organizer the right to use their winning submissions, and the source code and data created for and used to generate those submissions, for any purpose whatsoever and without further approval.

Eligibility

  • Each participant must create an AIHub account to submit their solution for the competition. Only one account per user is allowed.

  • The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.

  • The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.

Team

  • Participants are allowed to form teams. 

  • You may not participate in more than one team. Each team member must be a single individual operating a separate AIHub account. 

Submission

  • Maximum number of submissions in each phase:

    • Public Test: 20 submissions / day / team
    • Private Test: 3 submissions in total

  • Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.

Data

By downloading or by accessing the data provided by the Competition Organizer in any manner you agree to the following terms:

  • You will not distribute the data except for non-commercial, academic-research purposes.

  • You will not distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the data provided by the Competition Organizer to any third party for any purpose.

  • The data must not be used for providing surveillance, analysis, or research that isolates a group of individuals or any single individual for any unlawful or discriminatory purpose.

  • You accept full responsibility for your use of the data and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the data.

Phases

  • Public Test: starts Sept. 15, 2022, midnight UTC

  • Constrained SV: starts Oct. 15, 2022, 2 a.m. UTC

  • Unconstrained SV: starts Oct. 15, 2022, 2 a.m. UTC

  • Competition ends: Oct. 16, 2022, 5 a.m. UTC

Leaderboard

  #  Username    Score
  1  qtteam      0.000
  2  HelloWorld  0.223
  3  tenpm__     0.550