VLSP 2023 - VSASV Shared task

Organized by cocosda-msv-organizer


Our website

https://vlsp.org.vn/vlsp2023/eval/vsasv

Important dates

  • Aug 14, 2023: Registration opens.

  • Sept 14, 2023: Registration closes.

  • Oct 3, 2023: Training dataset release.

  • Oct 20, 2023: Public test set release (maximum of 20 submissions per day).

  • Nov 17, 2023: Private test set release (maximum of 6 submissions).

  • Nov 17, 2023: Test result submission.

  • Nov 26, 2023: Technical report submission.

  • Dec 15-16, 2023: Result announcement - Workshop days.

Description

Speaker verification (SV) is the task of verifying whether an input utterance matches a claimed identity. The success of speaker verification systems depends heavily on large training datasets collected under real-world conditions. While widely spoken languages such as English or Chinese have abundant datasets, low-resource languages like Vietnamese remain limited. To advance Vietnamese speaker verification, the Vietnamese Spoofing-Aware Speaker Verification Challenge (VSASV) 2023 has been designed to study and compare SV techniques on Vietnam-Celeb, a large-scale dataset for Vietnamese speaker recognition.

This is the first spoofing-aware speaker verification challenge for Vietnamese. While the evaluation metric is the same as in standard speaker verification (Equal Error Rate), we introduce spoofed negative samples created by synthesizing speech from target speakers or by recording speech from different devices. In doing so, we encourage participants to develop SV systems that are jointly optimized for spoofing detection and speaker verification.

Basic Regulations:

Any use of external speaker data or pre-trained models is PROHIBITED, including models pre-trained on other tasks, e.g. speech recognition, text-to-speech, speech enhancement, voice activity detection, etc.

Participants may use non-speech data (noise samples, impulse responses, etc.) for augmentation, but must specify such data and share it with the other teams.

Participants can create spoofed samples only with the provided data.

Participants can use data augmentation techniques only with the provided data from the organizer.
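
For illustration only, the sketch below shows one common way to perform additive-noise augmentation using organizer-provided audio. The speech and noise arrays, the function name mix_at_snr, and the SNR handling are assumptions made for this example, not part of the official rules or any baseline.

    # Minimal additive-noise augmentation sketch (assumed example, not an
    # official baseline). `speech` and `noise` are mono numpy arrays at the
    # same sample rate, loaded from organizer-provided files.
    import numpy as np

    def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
        """Mix noise into speech at a target signal-to-noise ratio (in dB)."""
        # Tile or truncate the noise so it matches the speech length.
        if len(noise) < len(speech):
            noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
        noise = noise[: len(speech)]
        speech_power = np.mean(speech ** 2)
        noise_power = np.mean(noise ** 2) + 1e-12  # avoid division by zero
        # Scale the noise so that the resulting SNR equals snr_db.
        scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
        return speech + scale * noise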

The challenge has a public and a private test set. The final standings for the task will be decided based on the private test results. Teams may be required to provide source code so that the final results can be verified.

Contact Us

Please feel free to contact us if you have any questions via [email protected].

Evaluation data

This task has a public and a private test set. The final standings for the task will be decided based on the private test results.

Detailed information about each set:

  • Public test: the public test contains both bona fide (real speech) and spoofed negative samples. The negative pairs are chosen randomly.

  • Private test: the private test set contains both bona fide and spoofed negative samples. The bona fide negative pairs are chosen such that the speakers have the same gender and dialect.

Evaluation metric

The performance of the models will be evaluated by the Equal Error Rate (EER) where the False Acceptance Rate (FAR) equals the False Rejection Rate (FRR).
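
For reference, the sketch below shows one standard way to compute the EER from a list of trial scores and labels. The array names and labeling convention are illustrative assumptions; this is not the official scoring script.

    # Minimal EER computation sketch (assumed example, not the official scorer).
    # `scores` are similarity scores; `labels` are 1 for target (same-speaker,
    # bona fide) pairs and 0 for non-target or spoofed pairs.
    import numpy as np

    def compute_eer(scores: np.ndarray, labels: np.ndarray) -> float:
        """Return the EER, i.e. the operating point where FAR equals FRR."""
        thresholds = np.sort(np.unique(scores))
        far = np.array([np.mean(scores[labels == 0] >= t) for t in thresholds])
        frr = np.array([np.mean(scores[labels == 1] < t) for t in thresholds])
        # FAR decreases and FRR increases as the threshold rises; take the
        # point where the two curves are closest.
        idx = np.argmin(np.abs(far - frr))
        return float((far[idx] + frr[idx]) / 2)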

Submission Guidelines

Multiple submissions are allowed, subject to the per-phase limits listed under Submission below; the evaluation result is based on the submission with the lowest EER.

The submission file comprises a header, a set of testing pairs, and the cosine similarity output by the system for each pair. The order of the pairs in the submission file must follow the same order as the pair list. A single line must contain 3 fields separated by a tab character in the following format:

test_wav<TAB>enrollment_wav<TAB>score<NEWLINE>

where

test_wav - The test utterance
enrollment_wav - The enrollment utterance
score - The similarity score
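
As a concrete illustration of this format, here is a minimal sketch that scores trial pairs with cosine similarity and writes the tab-separated file. The embeddings dictionary, the pair list, and the exact header line are assumptions made for the example; consult the provided pair list for the authoritative format.

    # Minimal submission-writing sketch (assumed example). `pairs` is the list
    # of (test_wav, enrollment_wav) tuples in the order given by the pair list;
    # `embeddings` maps each wav name to a fixed-size numpy embedding vector.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def write_submission(pairs, embeddings, path="submission.tsv"):
        with open(path, "w", encoding="utf-8") as f:
            f.write("test_wav\tenrollment_wav\tscore\n")  # assumed header line
            for test_wav, enrollment_wav in pairs:  # keep the pair-list order
                score = cosine_similarity(embeddings[test_wav],
                                          embeddings[enrollment_wav])
                f.write(f"{test_wav}\t{enrollment_wav}\t{score:.6f}\n")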

General rules

  • Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.

  • By submitting results to this competition, you consent to the public release of your scores at the Competition workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value rests with the task organizers.

  • By joining the competition, you accept the terms and conditions of the Terms of Participation and Data Use Agreement of the VLSP 2023 - VSASV Shared task, which has been sent to your email.

  • By joining the competition, you affirm and acknowledge that you agree to comply with applicable laws and regulations, that you will not infringe upon any copyrights, intellectual property, or patents of another party with the software you develop in the course of the competition, and that you will not breach any applicable laws and regulations related to export control and data privacy and protection.

  • Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.

  • Participants grant the Competition Organizer the right to use their winning submissions, and the source code and data created for and used to generate those submissions, for any purpose whatsoever and without further approval.

Eligibility

  • Each participant must create an AIHub account to submit their solution for the competition. Only one account per user is allowed.

  • The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.

  • The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.

Team

  • Participants are allowed to form teams. 

  • You may not participate in more than one team. Each team member must be a single individual operating a separate AIHub account. 

Submission

  • Maximum number of submissions in each phase:

    • Public Test: 20 submissions / day / team
    • Private Test: 6 submissions in total

  • Submissions are void if they are in whole or in part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.

Data

By downloading or accessing the data provided by the Competition Organizer in any manner, you agree to the following terms:

  • You will not use the data except for non-commercial, academic-research purposes.

  • You will not distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the data provided by the Competition Organizer to any third party for any purpose.

  • The data must not be used for providing surveillance, analyses or research that isolates a group of individuals or any single individual for any unlawful or discriminatory purpose.

  • You accept full responsibility for your use of the data and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the data.

VSASV Public

Start: Oct. 19, 2023, 9 a.m. UTC

VSASV Private

Start: Nov. 17, 2023, 5 a.m. UTC

Competition Ends

Nov. 18, 2023, 5 a.m. UTC

Leaderboard

  #  Username          Score
  1  leminhtritue_tbq  2.60
  2  SpoofySV          2.86
  3  unknown           3.15