Recent Updates


Important Dates

* All deadlines are at 11:59 pm, UTC-12 (Anywhere on Earth)

Trial Data Ready: Jul 31 (Sat), 2021
Training Data Ready: Sep 3 (Fri), 2021
Test Data Ready: Dec 3 (Fri), 2021
Evaluation Start: Jan 10 (Mon), 2022
Evaluation End: Jan 31 (Mon), 2022
System Description Paper Submission Due: Feb 23 (Wed), 2022
Notification to Authors: Mar 31 (Thu), 2022
Camera-ready Due: TBD
Workshop: Summer, 2022

1. How to Participate

2. Training Data Format

Click here to download a small set of trial data in English.

We will follow the CoNLL format for the datasets. Here is an example data sample from the trial data.


In a data file, samples are separated by blank lines. Each sample is tokenized, and each line contains a single token with its label in the last (4th) column; the second and third columns (_) are ignored. Entities are labeled using the BIO scheme: a token tagged O is not part of any entity, B-X marks the first token of an entity of type X, and I-X marks a subsequent token inside a multi-token entity of type X. In the given example, the input text is:

the original ferrari daytona replica driven by don johnson in miami vice

The original page includes an image showing the annotated entities in this sentence.
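To make the BIO scheme concrete, here is a minimal sketch that parses one CoNLL-formatted sample and recovers the entity spans. The tags in `sample` are illustrative only and may not match the actual trial-data annotation:

```python
# Illustrative CoNLL-style sample; columns are token, _, _, BIO tag.
# NOTE: these tags are an assumption for demonstration, not the
# official annotation of the trial data.
sample = """\
the _ _ O
original _ _ O
ferrari _ _ B-PROD
daytona _ _ I-PROD
replica _ _ I-PROD
driven _ _ O
by _ _ O
don _ _ B-PER
johnson _ _ I-PER
in _ _ O
miami _ _ B-CW
vice _ _ I-CW
"""

def extract_entities(lines):
    """Return (entity_type, entity_text) spans from BIO-tagged lines."""
    entities, current = [], None
    for line in lines:
        token, _, _, tag = line.split()
        if tag.startswith("B-"):
            current = (tag[2:], [token])   # a new entity starts here
            entities.append(current)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)       # continue the open entity
        else:
            current = None                 # O (or inconsistent I-) ends it
    return [(etype, " ".join(toks)) for etype, toks in entities]

print(extract_entities(sample.strip().splitlines()))
# → [('PROD', 'ferrari daytona replica'), ('PER', 'don johnson'), ('CW', 'miami vice')]
```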

3. Label Space

In this task, we focus on the following six entity types:

  1. PER : Person
  2. LOC : Location
  3. GRP : Group
  4. CORP : Corporation
  5. PROD : Product
  6. CW : Creative Work

4. Evaluation

In this shared task, we provide train/dev/test data for 11 languages. Additionally, we provide dev and test sets for a code-mixed language (find relevant resources in Section 6). In summary, we provide 11 training files and 12 dev/test files. This CodaLab competition is in the practice phase, where you are allowed to submit prediction files for the dev sets. The evaluation framework is divided into three broad tracks.

  1. Multi-lingual (Track 1): In this track, the participants have to train a single multi-lingual NER model using the training data for all the languages. This model should be used to generate prediction files for each of the 11 languages’ evaluation (dev/test) sets and a code-mixed evaluation set. That means the model should be able to handle monolingual data from any of the languages as well as code-mixed cases.
    Predictions from mono-lingual models are not allowed in this track, so please do not submit them here.

  2. Mono-lingual (Tracks 2-12): In this track, the participants have to train a model that works for only one language. For each language, there will be one dev/test set that contains examples for that particular language. Participants have to train a mono-lingual model for the language of their interest and use it to create a prediction file for the evaluation set of that language.
    Predictions from multi-lingual models are not allowed in this track.

  3. Code-mixed (Track 13): This test data contains code-mixed samples, which may include tokens from any of the 11 languages in the shared task. This is an additional test set apart from the 11 mono-lingual test sets.

5. Submission Instructions

The evaluation script is based on conlleval.pl.
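As a rough intuition for what conlleval-style scoring does, here is a simplified sketch of entity-level F1: an entity counts as correct only when both its span and its type match the gold annotation exactly. This is an illustration, not the official evaluation script:

```python
# Simplified entity-level F1 in the spirit of conlleval.pl.
# An entity is correct only if its (start, end, type) matches exactly.
def spans(tags):
    """Turn a BIO tag sequence into a set of (start, end, type) spans."""
    out, start, etype = set(), None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (
                tag.startswith("I-") and tag[2:] != etype):
            if start is not None:
                out.add((start, i, etype))
            start, etype = (i, tag[2:]) if tag.startswith("B-") else (None, None)
        # a matching I- tag simply extends the open span
    return out

def f1(gold_tags, pred_tags):
    gold, pred = spans(gold_tags), spans(pred_tags)
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = ["B-PER", "I-PER", "O", "B-CW", "I-CW"]
pred = ["B-PER", "I-PER", "O", "B-CW", "O"]   # CW span truncated: not a match
print(round(f1(gold, pred), 2))  # → 0.5
```

Note how the truncated CW prediction scores zero credit for that entity, which is exactly why entity-level F1 is stricter than token-level accuracy.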

5.1. Format of prediction file

The prediction file should follow the CoNLL format but contain only tags. That means each line contains only the predicted tag of a token, and sentences are separated by a blank line. Make sure the tags in your prediction file are exactly aligned with the provided dev/test sets.
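A minimal sketch of writing such a file is below. `predictions` is a hypothetical list of per-sentence tag sequences from your model, and the file name is only an example; the sentences must appear in the same order and with the same token counts as in the dev/test set:

```python
# Hypothetical per-sentence model predictions (one tag per token).
predictions = [
    ["O", "B-PER", "I-PER"],
    ["B-LOC", "O"],
]

# Write one tag per line; separate sentences with a blank line.
with open("track1.pred", "w", encoding="utf-8") as f:
    for tags in predictions:
        f.write("\n".join(tags) + "\n\n")
```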

5.2. Prepare submission files

Follow the instructions below to submit your prediction files for a track:

6. Some Resources for Beginners in NLP

Communication