VLSP2023 ComOM shared task

Organized by catcd


Our website

https://vlsp.org.vn/vlsp2023/eval/comon

Important dates

  • Aug 15, 2023: Registration open 
  • Sep 15, 2023: Release training data  
  • Oct 15, 2023: Release public test & evaluation system 
  • Nov 01, 2023: Release private test & registration close
  • Nov 03, 2023: Submission close 
  • Nov 06, 2023: Private test result announcement 
  • Nov 26, 2023: Technical report submission
  • Dec 15-16, 2023: Workshop days

Note: All deadlines are 11:59 PM UTC+00:00 (~6:59 AM the next day in Indochina Time (ICT), UTC+07:00).

Task description

The rapid growth of online shopping and e-commerce platforms has led to an explosion of product reviews. These reviews often contain valuable information about users’ opinions on various aspects of the products, including comparisons between different devices. Understanding comparative opinions from product reviews is crucial for manufacturers and consumers alike. Manufacturers can gain insights into the strengths and weaknesses of their products compared to competitors, while consumers can make more informed purchasing decisions based on these comparative insights. To facilitate this process, we propose the “ComOM - Comparative Opinion Mining from Vietnamese Product Reviews” shared task.

The goal of this shared task is to develop natural language processing models that can extract comparative opinions from product reviews. Each review contains comparative sentences expressing opinions on different aspects, comparing them in various ways. Participants are required to develop models that extract the following information, referred to as a “quintuple,” from comparative sentences (a hypothetical example follows the list):

  1. Subject: The entity that is the subject of the comparison (e.g., a particular product model).
  2. Object: The entity being compared to the subject (e.g., another model or a general reference).
  3. Aspect: The word or phrase denoting the feature or attribute of the subject and object that is being compared (e.g., battery life, camera quality, performance).
  4. Predicate: The comparative word or phrase expressing the comparison (e.g., “better than,” “worse than,” “equal to”).
  5. Comparison Type Label: This label indicates the type of comparison made and can be one of the following categories: ranked comparison (e.g., “better”, “worse”), superlative comparison (e.g., “best”, “worst”), equal comparison (e.g., “same as,” “as good as”), and non-gradable comparison (e.g., “different from,” “unlike”).
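
For illustration, here is a minimal sketch of what a single extracted quintuple might look like, assuming a hypothetical review sentence and labels (not taken from the shared-task data; the label names follow the evaluation section below):

```python
# Hypothetical review sentence (invented for illustration, not from the official data):
#   "Pin của điện thoại A trâu hơn điện thoại B."
#   ("Phone A's battery lasts longer than phone B's.")
quintuple = {
    "subject":   "điện thoại A",  # entity that is the subject of the comparison
    "object":    "điện thoại B",  # entity being compared against
    "aspect":    "Pin",           # feature under comparison (battery)
    "predicate": "trâu hơn",      # comparative expression ("lasts longer")
    "label":     "COM+",          # positive ranked comparison (see the evaluation section)
}
```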

Registration and contact

To contact us, email [email protected] or [email protected].

Evaluation Methodology

To assess Comparative Element Extraction (CEE), we employ various evaluation metrics, including Precision, Recall, and F1 score for each element (subject, object, aspect, predicate, and comparison type label). Additionally, we calculate Micro- and Macro-averages of these scores.

In Tuple Evaluation (TE), the quintuple is considered as a whole, and we measure Precision, Recall, and F1 score over complete quintuples.
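
As a rough sketch of how these scores can be combined (the counts below are invented and the official scorer may differ in detail), Precision, Recall, and F1 are computed per element from true-positive/false-positive/false-negative counts and then micro- and macro-averaged:

```python
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall, and F1 from TP/FP/FN counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Invented per-element (TP, FP, FN) counts for subject, object, aspect, predicate.
counts = {"S": (40, 10, 15), "O": (35, 12, 20), "A": (50, 20, 10), "P": (45, 8, 12)}

# Macro-average: mean of the per-element F1 scores.
macro_f1 = sum(prf1(*c)[2] for c in counts.values()) / len(counts)

# Micro-average: pool the counts across elements, then compute a single F1.
tp, fp, fn = (sum(c[i] for c in counts.values()) for i in range(3))
micro_f1 = prf1(tp, fp, fn)[2]
```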

Metrics Naming Convention

Each metric name follows a four-part format:

{Matching Strategy}-{Level of Evaluation}-{Indication}-{Metric}

1. Matching Strategy

There are three matching strategies (E, P, B):

  • E - Exact Match: The extracted element must match the ground truth exactly.
  • P - Proportional Match: The proportion of matched words in the extracted component with respect to the ground truth is considered.
  • B - Binary Match: At least one word in the extracted component must overlap with the ground truth.

All of these strategies will be utilized for the evaluation of Comparative Element Extraction (CEE), while only Exact Match and Binary Match will be applied to Tuple Evaluation (TE).
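
A minimal sketch of how a single extracted element could be scored under each strategy, assuming whitespace tokenization (the official scorer's tokenization and handling of edge cases may differ):

```python
def match_score(predicted: str, gold: str, strategy: str) -> float:
    """Score one extracted element against the ground truth.

    strategy: "E" (Exact), "P" (Proportional), or "B" (Binary).
    Whitespace tokenization is an assumption made for illustration.
    """
    pred_tokens, gold_tokens = predicted.split(), gold.split()
    overlap = len(set(pred_tokens) & set(gold_tokens))
    if strategy == "E":   # all-or-nothing: the spans must be identical
        return 1.0 if predicted == gold else 0.0
    if strategy == "P":   # proportion of ground-truth words that were matched
        return overlap / len(gold_tokens) if gold_tokens else 0.0
    if strategy == "B":   # any overlapping word counts as a full match
        return 1.0 if overlap else 0.0
    raise ValueError(f"unknown strategy: {strategy}")

# Example: predicted "pin trâu" vs. gold "pin trâu hơn"
# E -> 0.0, P -> 2/3, B -> 1.0
```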

2. Level of Evaluation

There are three levels of evaluation (CEE, T4, T5):

  • CEE - Comparative Element Extraction: Evaluates the extraction of the individual elements (subject, object, aspect, and predicate).
  • T4 - Tuple of four: Requires a match for all four elements.
  • T5 - Tuple of five: Requires a match for all four elements and the comparison type label.

3. Indication

Indicates which element or comparison type is being evaluated.

For CEE, there are six types of indications (S, O, A, P, Micro, Macro):

  • S, O, A, P: Indicating subject, object, aspect, and predicate, respectively.
  • Micro and Macro: Averaged score over these four types of elements.

For T4, no indication is used; there is only one type of T4.

For T5, there are eight types of comparison (EQL, DIF, COM, COM+, COM-, SUP, SUP+, SUP-) and two types of averages (Micro, Macro):

  • DIF: Different comparison
  • EQL: Equal comparison (no significant difference)
  • SUP+: Positive superlatives
  • SUP-: Negative superlatives
  • SUP: Superlatives that do not specify positivity or negativity
  • COM+: Positive comparison
  • COM-: Negative comparison
  • COM: Comparison that does not specify positivity or negativity
  • Micro and Macro: Averaged score over these eight types of comparisons.

4. Metric

Three metrics are used: Precision, Recall, and F1 score (P, R, F1).

Scoring System

In total, 120 metrics are evaluated, of which only 15 appear on the leaderboard. For a comprehensive view of all metrics, download the scoring details for your submission via "Download output from scoring step".

The final ranking score is determined by the metric E-T5-MACRO-F1, i.e., the Macro-averaged F1 score under Exact Match for the Tuple of five.
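
To make the naming convention and the total of 120 metrics concrete, the sketch below enumerates the metric names exactly as described above (a reconstruction from the convention, not the official scorer's code):

```python
from itertools import product

METRICS = ["P", "R", "F1"]
T5_INDICATIONS = ["EQL", "DIF", "COM", "COM+", "COM-", "SUP", "SUP+", "SUP-", "MICRO", "MACRO"]

names = []
# CEE: three matching strategies x six indications x three metrics = 54.
names += ["-".join(p) for p in product("EPB", ["CEE"], ["S", "O", "A", "P", "MICRO", "MACRO"], METRICS)]
# T4: Exact and Binary match only, no indication = 6.
names += ["-".join(p) for p in product("EB", ["T4"], METRICS)]
# T5: Exact and Binary match only, eight comparison types plus Micro/Macro = 60.
names += ["-".join(p) for p in product("EB", ["T5"], T5_INDICATIONS, METRICS)]

assert len(names) == 120           # 54 + 6 + 60
assert "E-T5-MACRO-F1" in names    # the leaderboard ranking metric
```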

General rules

  • Right to cancel, modify, or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.

  • By submitting results to this competition, you consent to the public release of your scores at the Competition workshop and in the associated proceedings, at the task organizers' discretion. Scores may include, but are not limited to, automatic and manual quantitative judgments, qualitative judgments, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value rests with the task organizers.

  • By joining the competition, you accept the terms and conditions of the Terms of Participation and Data Usage Agreement of the VLSP 2023 ComOM Shared Task, which has been sent to your email.

  • By joining the competition, you affirm and acknowledge that you will comply with applicable laws and regulations, that you will not infringe upon the copyrights, intellectual property, or patents of another party in the software you develop in the course of the competition, and that you will not breach any applicable laws and regulations related to export control and data privacy and protection.

  • Prizes are subject to the Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules as well as the compliance of the winning submissions with the submission requirements.

  • Participants grant the Competition Organizer the right to use their winning submissions, together with the source code and data created for and used to generate those submissions, for any purpose whatsoever and without further approval.

Eligibility

  • Each participant must create an AIHub account to submit their solution for the competition. Only one account per user is allowed.

  • The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.

  • The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.

Team

  • Participants are allowed to form teams. 

  • You may not participate in more than one team. Each team member must be a single individual operating a separate AIHub account.

  • Only one AIHub account per team is approved to submit results.

Phases

  • Public Test: starts Oct. 15, 2023, midnight UTC
  • Private Test: starts Nov. 1, 2023, midnight UTC
  • [Post Challenge] Private Test: starts Nov. 4, 2023, midnight UTC
  • Competition Ends: Nov. 3, 2023, 11:59 p.m. UTC

Leaderboard
  #  Username        Score
  1  pthutrang513    0.3384
  2  ThuyNT03        0.2578
  3  thindang        0.2373