VLSP2021 - Vietnamese Speaker Verification

Organized by sv-vlsp2021-organizers

Competition phases (all times UTC):

  • Public Test: starts Oct. 3, 2021, 1 a.m.

  • SV-T1 Private Test: starts Nov. 6, 2021, 1 a.m.

  • SV-T2 Private Test: starts Nov. 6, 2021, 1 a.m.

  • Competition ends: Nov. 7, 2021, 9 a.m.

Our website

https://vlsp.org.vn/vlsp2021/eval/vsv

Important dates

  • Aug 5, 2021: Registration open

  • Aug 30, 2021: Registration closed

  • Sep 6, 2021: Start of dataset building

  • Sep 20, 2021: End of dataset building

  • Oct 1, 2021: Challenge started (via [aihub.vn](http://aihub.vn))

  • Nov 6, 2021: Private test set release for SV-T1 and SV-T2

  • Nov 7, 2021: Private test results announcement

  • Nov 8, 2021: Announce top 3 teams to submit technical reports

  • Nov 25, 2021: Deadline for top 3 teams to submit technical reports

  • Nov 26, 2021: Result announcement and presentation (workshop day)

Description

VLSP2021 Speaker Verification will feature two evaluation tasks. Teams can participate in one of the tasks or both.

Task-01 (SV-T1): Focusing on the development of SV models with limited data. For this task, the organizer will provide a training set with over 1000 speaker identities. Participants can only use this dataset for model development. Any use of additional data for model training is prohibited.

Task-02 (SV-T2): Focusing on testing the robustness of SV systems. For this task, participants can use the released training set and any other data.

Public pre-trained models may be used for system training and development in both tasks and must be specified and shared with other teams. Non-speech audio and data (e.g., noise samples, impulse responses, ...) may be used and should be noted in the technical report.

Final standings for both tasks will be decided based on private test results on Nov 7, 2021.

Contact Us

Please feel free to contact us if you have any questions via [email protected] or publicly at https://groups.google.com/g/svvlsp2021.

Evaluation data

Private evaluation sets will be made available for the two tasks SV-T1 and SV-T2. The SV-T1 test set is a combination of in-domain and out-of-domain speakers. The training and test speaker sets are mutually exclusive.

In evaluation sets, each record is a single line containing two fields separated by a comma, in the following format:

enrollment_wav<COMMA>test_wav<NEWLINE>

where

enrollment_wav - The enrollment utterance
test_wav - The test utterance

Example evaluation set:

file1.wav,file2.wav
file1.wav,file3.wav
file1.wav,file4.wav
...
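The trial list above can be read with a few lines of Python; this is only an illustrative sketch, not part of the released tooling:

```python
def parse_trials(lines):
    """Parse a comma-separated trial list into (enrollment_wav, test_wav) pairs."""
    pairs = []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        enrollment_wav, test_wav = line.split(",")
        pairs.append((enrollment_wav, test_wav))
    return pairs
```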

Evaluation metric

The performance of the models will be evaluated by the Equal Error Rate (EER) where the False Acceptance Rate (FAR) equals the False Rejection Rate (FRR).
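For reference, EER can be estimated by sweeping a decision threshold over the scores until FAR and FRR cross. A minimal pure-Python sketch follows; the label convention and the averaging at the crossing point are assumptions, and the official scorer may differ in detail:

```python
def compute_eer(scores, labels):
    """Estimate the Equal Error Rate (EER).

    scores -- similarity scores, higher means more likely the same speaker
    labels -- 1 for a target (same-speaker) pair, 0 for an impostor pair
    """
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    best_eer, best_gap = 1.0, float("inf")
    for t in sorted(set(scores)):
        far = sum(s >= t for s in neg) / len(neg)  # impostors accepted
        frr = sum(s < t for s in pos) / len(pos)   # targets rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```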

Submission Guidelines

Multiple submissions are allowed; the evaluation result is based on the submission with the lowest EER.

The submission file is composed of a header followed by the set of test pairs, each with the cosine similarity output by the system for the pair. The pairs must appear in the same order as in the pair list. Each line contains three fields separated by commas, in the following format:

enrollment_wav<COMMA>test_wav<COMMA>score<NEWLINE>

where

enrollment_wav - The enrollment utterance
test_wav - The test utterance
score - The cosine similarity

For example:

file1.wav,file2.wav,0.81285
file1.wav,file3.wav,0.01029
file1.wav,file4.wav,0.45792
...
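As an illustration, each pair's score can be computed as the cosine similarity between the speaker embeddings of the two files and formatted as above. The embedding model is the participant's own and is not shown here; the helper names are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def format_submission_line(enrollment_wav, test_wav, score):
    """One submission record: enrollment_wav,test_wav,score."""
    return f"{enrollment_wav},{test_wav},{score:.5f}"
```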

General rules

  • Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.

  • By submitting results to this competition, you consent to the public release of your scores at the Competition workshop and in the associated proceedings, at the task organizers' discretion. Scores may include but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

  • By joining the competition, you accept the terms and conditions of the Agreement form of VLSP 2021 - Vietnamese Speaker Verification, which has been sent to your email. Note that your participation rights will be revoked if you do not sign and return it before the deadline.
  • By joining the competition, you affirm that you will comply with applicable laws and regulations, that the software you develop in the course of the competition will not infringe any copyrights, intellectual property, or patents of another party, and that you will not breach any applicable laws and regulations related to export control and data privacy and protection.

  • Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.

  • Participants grant the Competition Organizer the right to use their winning submissions, along with the source code and data created for and used to generate those submissions, for any purpose whatsoever and without further approval.

Eligibility

  • Each participant must create a CodaLab account to submit their solution for the competition. Only one account per user is allowed.

  • The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.

  • The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.

Team

  • Participants are allowed to form teams. A team may have at most 5 participants.

  • You may not participate in more than one team. Each team member must be a single individual operating a separate CodaLab account. 

Submission

  • Maximum number of submissions in each phase:

    • Phase 1 - Public Test: 10 submissions / day / team
    • Phase 2 - SV-T1 Private Test: 3 submissions / day / team, total 6 submissions
    • Phase 3 - SV-T2 Private Test: 3 submissions / day / team, total 6 submissions
  • Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.

Data

By downloading or by accessing the data provided by the Competition Organizer in any manner you agree to the following terms:

  • You will not distribute the data except for non-commercial, academic-research purposes.

  • You will not distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the data provided by the Competition Organizer to any third party for any purpose.

  • The data must not be used for providing surveillance, analyses or research that isolates a group of individuals or any single individual for any unlawful or discriminatory purpose.

  • You accept full responsibility for your use of the data and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the data.

Leaderboard

  # Username       Score (EER, %)
  1 smartcall-its  1.220
  2 meoconxinhxan  3.010
  3 anbn14         3.300