Summer Challenge on Automatic Evaluation of Handwritten Answersheets, under NCVPRIPG'24

Challenge Updates

  • Registrations are now closed.
  • Event schedule updated
  • Sample Dataset released
  • Additional Dataset released
  • Phase-1 Evaluation Dataset released
  • Phase-1 Leaderboard Updated
  • Top-5 qualifying teams announced

Challenge

Welcome to the Summer Challenge on Auto Evaluation, hosted as part of the National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG'24). Given a set of handwritten answersheets in a predetermined layout, the task is to automatically evaluate them and return the total marks scored by the student based on the correct answers.

AutoEval Task

The task of Auto Evaluation presents a unique challenge. To achieve high performance, the underlying models need to contend with inconsistencies in spelling, incomplete words, variations in student penmanship (cursive vs. print, letter size), and so on. Furthermore, differentiating intended answers from stray markings (scribbles, underlines) adds another layer of complexity.
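
As a concrete illustration of the spelling problem, one simple mitigation is to fuzzy-match the transcribed text against the expected answer vocabulary. Below is a minimal sketch (not part of any official baseline), assuming an OCR step has already produced a raw transcription and that answers come from a True/False vocabulary:

```python
from difflib import get_close_matches

# Hypothetical vocabulary for a True/False answer sheet.
VOCAB = ["true", "false"]

def normalize_answer(ocr_text: str) -> str | None:
    """Map a noisy transcription (e.g. 'Ture', 'flase') to a canonical
    'true'/'false' label, or None if nothing matches closely enough."""
    cleaned = "".join(ch for ch in ocr_text.lower() if ch.isalpha())
    matches = get_close_matches(cleaned, VOCAB, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(normalize_answer("Ture"))      # -> 'true'
print(normalize_answer("flase"))     # -> 'false'
print(normalize_answer("scribble"))  # -> None
```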

Performance Measure

The leaderboard for Phase-1 will be based on the following metric:

  • Mean Absolute Error (MAE)

The Mean Absolute Error is the average of the absolute differences between the actual values $y_i$ and the model predictions $\hat{y}_i$:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert$$
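
For reference, a minimal sketch of computing this metric, assuming the actual and predicted total marks are given as two equal-length lists:

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between actual and predicted marks."""
    assert len(y_true) == len(y_pred) and y_true, "need equal-length, non-empty lists"
    return sum(abs(a - p) for a, p in zip(y_true, y_pred)) / len(y_true)

# Toy example: actual vs. predicted total marks for four answer sheets.
print(mean_absolute_error([8, 5, 10, 7], [7, 5, 9, 10]))  # -> 1.25
```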

Input and Output specifications

  • Input
    • Handwritten answer sheets in a predetermined layout.
    • Model solution for each True/False question.
  • Output
    • Total marks obtained by the student based on correct answers.
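
Given per-question predictions and the model solution, the scoring step itself is straightforward. A minimal sketch (all names here are illustrative, not an official interface):

```python
def score_sheet(predicted, answer_key, marks_per_question=1):
    """Total marks for one sheet. Both arguments map question number to
    'true'/'false'; unreadable predictions (None) simply score zero."""
    return sum(
        marks_per_question
        for q, correct in answer_key.items()
        if predicted.get(q) == correct
    )

answer_key = {1: "true", 2: "false", 3: "true"}
predicted = {1: "true", 2: "true", 3: "true"}  # e.g. from OCR + normalization
print(score_sheet(predicted, answer_key))      # -> 2
```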

Code to get started

This folder is provided to get you started with the challenge. It contains Google Colab notebooks for reading the dataset images and evaluating performance, along with other necessary instructions.
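
For orientation, loading and binarizing the sheet images might look like the sketch below; the directory name is a placeholder, and the provided notebooks remain the authoritative starting point:

```python
import glob
import cv2  # pip install opencv-python

IMAGE_DIR = "sample_dataset"  # placeholder; point at the extracted dataset

for path in sorted(glob.glob(f"{IMAGE_DIR}/*.jpg")):
    img = cv2.imread(path)                        # BGR uint8 array
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Otsu binarization makes handwriting stand out for downstream OCR.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    print(path, img.shape)
```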

Sample Dataset

The sample dataset provides 10-12 images of handwritten answersheets in a specific layout. The objective is to automatically evaluate the given images and return the marks scored by the candidates.

Evaluation Phase

The challenge is evaluated in two phases:

  • Phase-1

    In Phase 1, teams will submit their final models built on the provided sample data. To evaluate these models, we'll release a set of ~600 handwritten answer-sheet images. Teams will test their models on this data and report the results in a specific CSV format (details will be provided separately; an illustrative sketch follows this list). Mean Absolute Error (MAE) will determine the best-performing models. Only the top 5 teams by MAE will progress to Phase 2.

  • Phase-2

    The selected teams must then create an API/application that automatically evaluates images of handwritten answersheets and returns the marks scored. Note that, for the evaluation of this stage, the ease of access and usability of the API will play a key role; a toy API sketch follows this list.
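
To make the two deliverables concrete, here are two minimal sketches. First, writing Phase-1 predictions to CSV; the column names image_id and predicted_marks are assumptions, so follow the official format once it is announced:

```python
import csv

# Hypothetical predictions: filename -> total marks.
predictions = {"sheet_001.jpg": 7, "sheet_002.jpg": 9}

with open("phase1_submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image_id", "predicted_marks"])  # assumed header
    for image_id, marks in sorted(predictions.items()):
        writer.writerow([image_id, marks])
```

Second, a toy Phase-2-style HTTP API using Flask; the endpoint, upload field name, and `evaluate_image` stub are all illustrative:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def evaluate_image(image_bytes: bytes) -> int:
    # Stub: replace with your OCR + scoring pipeline.
    return 0

@app.route("/evaluate", methods=["POST"])
def evaluate():
    # Expects a multipart upload with an 'image' file field (illustrative).
    image_bytes = request.files["image"].read()
    return jsonify({"total_marks": evaluate_image(image_bytes)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```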

Leaderboard

Phase-1 Leaderboard (displayed in alphabetical order)

  • AI Avengers
  • Cool Dudes
  • Neural Nexus
  • Phoenix
  • Solution Seekers

Events schedule

When: July 18, 2024; 12:00-13:30 IST

| Time slot (IST) | Activity | Speaker |
| --- | --- | --- |
| 12:00-12:10 | Welcome and Introduction | Dr. Anand Mishra |
| 12:10-12:30 | Invited Talk: "The Nuances of Assessment Grading: Experience from Smartail" | Mr. Swaminathan Ganesan, Co-Founder and CEO, Smartail Pvt. Ltd. |
| 12:30-12:50 | Invited Talk: "Handwritten Essay Scoring on Unconstrained Datasets" | Dr. Annapurna Sharma, Applied AI Research Lead, Zensar Technologies |
| 12:50-13:20 | Team Presentations | ~ |
| 13:20-13:30 | Winner Announcement and Closing Remarks | Dr. Anand Mishra |

Events Timeline

| Event | Deadline |
| --- | --- |
| Website for challenge up | April 21, 2024 |
| Registrations Open | April 21, 2024 |
| Registrations Close | May 1, 2024 |
| Release of Sample Data | May 5, 2024 |
| Last date for submitting the trained models and code | June 25, 2024 |
| Evaluation of Phase-1 | June 25, 2024 |
| Last date to submit a write-up about the submitted solution | June 25, 2024 |
| Leaderboard Updates (for Phase-1) | July 1, 2024 |
| Challenge Live Session - Phase-2 | July 18-20, 2024 (during the conference) |
| Closing of the competition | July 20, 2024 |

Registration

Registrations are now closed.

Challenge Rules

  • This challenge is open to all (students and professionals).
  • Participants may register individually or form a team.
  • Rules for team formation:
    • A team can have a maximum of 4 participants.
    • Team members can be from the same or different organizations/affiliations.
    • A participant can be part of only one team.
    • Only one member of the team needs to register for the challenge.
    • One team can have only one registration; multiple registrations can lead to disqualification.
    • There is no limit on the number of teams from the same organization/affiliation (however, a participant can be part of only one team).
  • Sample data download will be permitted only after the team has completed registration.
  • Attending the conference (NCVPRIPG'24) is highly encouraged; only attending teams will receive certificates and awards.
  • Only the top-performing teams in Phase-1 will be invited to participate in the final evaluation, which will be held during the conference. In case of ties, the organizing committee may rank teams based on the readability of the code and the ease of access of the API/application. The organizing committee's decision in this regard will be final.
  • All APIs used by participants must be open-source and free to use. Paid or private APIs are not permitted.

Awards and Recognition

  • Cash prizes (in INR)*:
    • Winner: 5K
    • First Runner-up: 3K
    • Second Runner-up: 2K
  • Free registration for the top-5 teams at NCVPRIPG'24
  • Opportunity for a summer internship at IIT Jodhpur
  • Paper-writing collaboration
  • Certificate for each participant

(*Attending NCVPRIPG'24 in person is mandatory to be eligible for the prizes and certificates.*)

Contact

For any queries, please contact Arvind (sharma.126@iitj.ac.in) or Uday (agarwaluday@iitj.ac.in).

Organizers:

Dr. Anand Mishra
mishra@iitj.ac.in
Assistant Professor, Department of CSE, IIT Jodhpur

Gyan Prabhat
prabhat.1@iitj.ac.in
Ph.D. Student

Abhirama Penamakuri
penamakuri.1@iitj.ac.in
Ph.D. Student

Devesh Sharma
sharma.98@iitj.ac.in
M.Tech Student

Uday Agarwal
agarwaluday@iitj.ac.in
Research Assistant

Shreyas Vaidya
vaidya.2@iitj.ac.in
Senior UG Student

Arvind Kumar Sharma
sharma.126@iitj.ac.in
Senior UG Student