Important Dates

* All deadlines are at 11:59 pm UTC-12 ("Anywhere on Earth")

Trial Data Ready: Jul 15 (Fri), 2022
Training Data Ready: Sep 30 (Fri), 2022
Evaluation Start: Jan 10 (Tue), 2023
Evaluation End: Jan 31 (Tue), 2023
System Description Paper Submission Due: Feb 1 (Wed), 2023
Notification to Authors: Mar 1 (Wed), 2023
Camera-ready Due: Apr 1 (Sat), 2023
Workshop: TBD

FAQs

1. When will the test data be available?

The evaluation phase will start in January 2023. The test data will be made available on Codalab before the evaluation phase begins.

2. How can I participate in the test phase?

A new Codalab submission site will be available before the evaluation phase. We will notify every participant from the practice phase with a link to the submission site for the test phase. Test predictions should be submitted to that site. A maximum of 6 submissions is allowed per track, and the best result will be used.
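
As a rough illustration, Codalab submissions are typically a zip archive containing a prediction file. This is a minimal sketch only: the file name (`en.pred.conll`) and the CoNLL-style two-column format are assumptions, not the official specification, so check the submission instructions on the Codalab site.

```python
import zipfile

# Hypothetical example: package a prediction file for Codalab.
# The file name and the token/label column format are assumptions;
# follow the official submission instructions for the real format.
predictions = [
    [("Barack", "B-PER"), ("Obama", "I-PER")],  # one sentence
    [("Paris", "B-LOC")],
]

with open("en.pred.conll", "w", encoding="utf-8") as f:
    for sentence in predictions:
        for token, label in sentence:
            f.write(f"{token} {label}\n")
        f.write("\n")  # blank line separates sentences

with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("en.pred.conll")
```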

3. How large will the test data be?

The test data for each language will have at least 150K instances, and some languages will have approximately 500K instances. As a result, generating predictions can take longer.
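
With test sets of roughly 150K–500K instances, running inference in fixed-size batches and collecting results incrementally keeps memory use bounded. This is a minimal sketch under that assumption; `predict_batch` is a hypothetical stand-in for your own model's inference call.

```python
# Minimal batched-inference sketch for a large test set.
# `predict_batch` is a hypothetical placeholder: replace it with
# your model's actual inference call.
def predict_batch(sentences):
    return [["O"] * len(s) for s in sentences]  # dummy labels

def predict_in_batches(sentences, batch_size=256):
    predictions = []
    # Process the test set one slice at a time instead of all at once.
    for i in range(0, len(sentences), batch_size):
        batch = sentences[i:i + batch_size]
        predictions.extend(predict_batch(batch))
    return predictions

test_sentences = [["Hello", "world"]] * 1000  # placeholder data
labels = predict_in_batches(test_sentences)
```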

4. How will the final ranking be determined? Will it consider per-domain F1?

Per-domain F1 will be shown in the output JSON file, but the final ranking will be determined by overall macro-F1. The evaluation result will always include detailed results for each coarse- and fine-grained class.
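
For reference, macro-F1 averages per-class F1 scores with equal weight, so rare classes count as much as frequent ones. Below is a minimal token-level sketch using scikit-learn with made-up labels; note that the official scorer may compute entity-level rather than token-level F1, so this only illustrates the macro averaging itself.

```python
from sklearn.metrics import f1_score

# Illustrative gold and predicted labels (flattened across tokens);
# the label set here is made up for the example.
gold = ["B-PER", "O", "B-LOC", "O", "B-PER"]
pred = ["B-PER", "O", "O",     "O", "B-PER"]

# Macro-F1: per-class F1 averaged with equal weight per class.
print(f1_score(gold, pred, average="macro"))
```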

5. Will I be able to see my results and the leaderboard during the test phase?

You will be able to see the results of your own submissions, but the leaderboard will not be visible until after the competition ends.

6. Can we use an ensemble-based approach?

Yes. You can use any modeling approach for the task, including ensembles.
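
For instance, one common ensembling scheme is a token-level majority vote over several models' predictions. A minimal sketch; the three prediction lists below are placeholders for your own models' outputs.

```python
from collections import Counter

# Token-level majority vote over predictions from several models.
# The three prediction lists are placeholders for real model outputs.
model_preds = [
    ["B-PER", "O", "B-LOC"],  # model 1
    ["B-PER", "O", "O"],      # model 2
    ["O",     "O", "B-LOC"],  # model 3
]

# For each token position, pick the most frequent label.
ensembled = [
    Counter(labels).most_common(1)[0][0]
    for labels in zip(*model_preds)
]
print(ensembled)  # ['B-PER', 'O', 'B-LOC']
```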

7. What are the domains of the test data?

The test data domains are similar to those of the development data, so examining the development set will give a good idea of the domains.