Subjective Match on HackerEarth Assessments: Make Technical Screening Smarter
In tech or coding assessments, subjective questions are open-ended questions that require the candidate to provide a more detailed or nuanced response than a simple yes or no answer. These questions are often used to assess the candidate’s understanding of a particular concept, their ability to think critically, and their problem-solving skills.
Let’s be honest — subjective questions are an integral part of the technical screening process, but they are really hard to evaluate. There is no standardized format or set of guidelines for subjective questions in tech or coding assessments. This can make it difficult for recruiters to compare responses across different candidates and assessments.
Evaluating subjective questions requires a significant amount of effort. Recruiters need to carefully read and analyze each response, which can be time-intensive, especially when they have to evaluate a large number of candidates.
Delays in evaluation create a domino effect, pushing back every subsequent step and throwing the time-to-hire metric into a tizzy! Candidates don’t get timely updates about their interview status, which also hurts the candidate experience your recruiting team works hard to maintain.
The good news is, you can avoid this chaos, thanks to HackerEarth’s newly introduced Subjective Match feature.
Enter: Subjective Match, a smarter evaluation method for assessments
Subjective Match gives you three methods for evaluating subjective questions:
Method #1: AI evaluation
Our AI evaluation method (earlier known as the auto-evaluation method) uses ChatGPT and HackerEarth’s proprietary AI models to evaluate a candidate’s answers automatically. The prerequisite is that recruiting teams need to provide a base answer before sending the tests to candidates. HackerEarth’s AI will compare this base answer to the candidate’s submission and evaluate its accuracy.
You can also compare the expected answer with the candidate’s answer by enabling the View Difference option.
Here’s an example of how our AI evaluates the differences between the expected answer for a question and the candidate’s version.
The screengrab above highlights, in red, the sentences from the expected answer that the candidate did not include in their response.
This evaluation method is best suited for long, text-based answers; we recommend not using it for numerical strings.
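To make the comparison idea concrete, here is a minimal sketch of how missing sentences could be surfaced, written in plain Python. The sentence-splitting and word-overlap heuristic is purely an illustrative assumption; HackerEarth’s actual comparison relies on ChatGPT and its own proprietary models, whose internals are not public.

```python
import re

def missing_sentences(expected: str, candidate: str, threshold: float = 0.6) -> list[str]:
    """Return sentences from the expected answer whose words are mostly
    absent from the candidate's answer (the content a diff view would flag)."""
    candidate_words = set(re.findall(r"\w+", candidate.lower()))
    missing = []
    for sentence in re.split(r"(?<=[.!?])\s+", expected.strip()):
        words = set(re.findall(r"\w+", sentence.lower()))
        coverage = len(words & candidate_words) / len(words) if words else 1.0
        if coverage < threshold:
            missing.append(sentence)
    return missing

expected = ("A REST API is stateless. Each request carries all the context the "
            "server needs. Responses can be cached to improve performance.")
candidate = ("A REST API is stateless, so every request contains the context the "
             "server needs.")

for sentence in missing_sentences(expected, candidate):
    print("Missing:", sentence)  # flags the sentence about caching
```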
Method #2: Keyword evaluation
The keyword evaluation method lets admins define the specific keywords that should appear in the answer. If the candidate’s submission includes the exact keyword, they’ll be scored accordingly.
Things you need to know while using the keyword evaluation method:
- Each keyword can be at most 30 characters long.
- At least one keyword is required to run the evaluation.
- You can add up to 15 keywords.
- At least one keyword’s score must be equal to the question’s maximum score.
Here’s how the keyword score is allocated (see the sketch below):
- The keyword options are arranged in descending order of their scores.
- AI checks whether the keyword appears at least once in the candidate’s response.
- If the keyword is found, its associated score is awarded as the question’s score.
- If it is not found, the same steps are repeated for the next highest-scoring keyword that the admin has set up.
Note: The verification done here is case insensitive.
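Put together, the flow above looks roughly like the sketch below, assuming the admin’s keywords and scores are supplied as simple pairs. The function name and the plain substring check are only illustrative; as noted above, the actual keyword check uses AI and is case-insensitive.

```python
def keyword_score(answer: str, keyword_scores: dict[str, int]) -> int:
    """Score an answer against admin-defined keywords: check keywords from
    the highest score to the lowest and award the first match. The check is
    case-insensitive; no match means a score of zero."""
    normalized = answer.lower()
    # Organize the keyword options in descending order based on their scores.
    for keyword, score in sorted(keyword_scores.items(), key=lambda kv: kv[1], reverse=True):
        # Award the keyword's score as the question's score when it is found.
        if keyword.lower() in normalized:
            return score
    # No keyword was found, so the candidate scores zero.
    return 0
```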
This evaluation method is especially useful for data analytics (MS Excel) questions, mathematical or numerical problems, and fill-in-the-blank questions.
For example, when candidates work on a data set, they may arrive at different conclusions or outcomes, and you can assign a different score to each one.
In the image below, for instance, if the output is 14, the candidate gets a 100% score. If the output is 9, or another value close to it, the candidate gets an 80% score. For any other output besides the ones listed, the candidate gets a score of zero.
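Using the keyword_score sketch above as an illustration, that scenario plays out like this, with the scores expressed as percentages of the question’s maximum:

```python
# Keyword scores from the example, as a percentage of the question's maximum.
keywords = {"14": 100, "9": 80}

print(keyword_score("The final output of the formula is 14", keywords))  # 100
print(keyword_score("I got 9 after filtering the rows", keywords))       # 80
print(keyword_score("The result is 21", keywords))                       # 0
```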
Method #3: Manual evaluation
If you’d rather skip the AI and use your own judgment to evaluate candidate submissions, we’ve made that option available too! You can manually compare the candidate’s submission against the base answer you added while setting up the assessment.
Note: The base answer will also be present in the candidate’s report to make the comparison easier.
Witness a smoother evaluation experience with Subjective Match
For recruiters and hiring managers, our Subjective Match feature will change the way you evaluate candidate submissions.
Not only will it make the screening process seamless, but it will also cut the time and effort spent manually checking each submission. And if you have only tried the AI method so far, we recommend exploring the keyword evaluation method too, and seeing the difference for yourself.
Until next time, happy hiring!