Subjective Match on HackerEarth Assessments: Make Technical Screening Smarter


In tech or coding assessments, subjective questions are open-ended questions that require the candidate to provide a more detailed or nuanced response than a simple yes or no answer. These questions are often used to assess the candidate’s understanding of a particular concept, their ability to think critically, and their problem-solving skills.

Let’s be honest — subjective questions are an integral part of the technical screening process, but they are really hard to evaluate. There is no standardized format or set of guidelines for subjective questions in tech or coding assessments. This can make it difficult for recruiters to compare responses across different candidates and assessments.

Evaluating subjective questions requires a significant amount of effort. Recruiters need to carefully read and analyze each response, which can be time-intensive, especially when they have to evaluate a large number of candidates.

Delays in evaluation create a domino effect, pushing back every subsequent step and throwing your time-to-hire metric into a tizzy! Candidates don’t get timely updates about their interview status, which also hurts the candidate experience your recruiting team is trying to maintain.

The good news is that you can avoid this chaos, thanks to HackerEarth’s newly introduced Subjective Match feature.

Enter: Subjective Match, a smarter evaluation method for assessments

There are three methods you can use to evaluate subjective questions:

Method #1: AI evaluation

Our AI evaluation method (earlier known as the auto-evaluation method) uses ChatGPT and HackerEarth’s proprietary AI models to evaluate a candidate’s answers automatically. The prerequisite is that recruiting teams need to provide a base answer before sending the tests to candidates. HackerEarth’s AI will compare this base answer to the candidate’s submission and evaluate its accuracy.

You can also compare the expected answer with the one the candidate submitted, side by side, by simply enabling the View Difference option.

Here’s an example of how our AI evaluates the differences between the expected answer for a question and the candidate’s version.

This is how HackerEarth Assessment's Subjective Match feature uses AI evaluation

The above screengrab highlights in red the sentences from the expected answer that the candidate did not include in their response.
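To illustrate the idea, here is a minimal Python sketch of a View Difference-style comparison between an expected answer and a candidate’s submission, using the standard library’s difflib module. The sample sentences and the approach are illustrative assumptions, not HackerEarth’s actual implementation.

```python
# Illustrative sketch only: not HackerEarth's actual diff logic.
import difflib

expected_sentences = [
    "Normalization reduces data redundancy.",
    "It also improves data integrity.",
]
candidate_sentences = [
    "Normalization reduces data redundancy.",
]

# Sentences present in the expected answer but missing from the
# candidate's submission are prefixed with '-' (the ones a UI could
# highlight in red).
for line in difflib.ndiff(expected_sentences, candidate_sentences):
    print(line)
```

Running this prints the shared sentence unchanged and flags `It also improves data integrity.` with a `-` prefix, mirroring how missing content is surfaced in the screengrab above.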

This evaluation method is best suited for long, text-based answers; we recommend against using it for numerical strings.


Also read: 4 Ways HackerEarth Flags the Use of ChatGPT in Hiring Assessments


Method #2: Keyword evaluation

The keyword evaluation method lets admins define the specific keywords that should appear in the answer. If the candidate’s submission includes the exact keyword, they’ll be scored accordingly.

Things you need to know while using the keyword evaluation method:

  • Each keyword can be at most 30 characters long.
  • At least one keyword must be defined for the evaluation to run.
  • You can define a maximum of 15 keywords.
  • At least one keyword’s score must equal the question’s maximum score.

Here’s how the keyword score is allocated:

  • Keyword options are arranged in descending order of their scores.
  • AI verifies whether the highest-scoring keyword appears at least once in the candidate’s response.
  • If the keyword is found, its associated score is allocated as the question’s score.
  • If it is not found, the process repeats with the next highest-scoring keyword the admin has set up.

Note: Keyword matching is case-insensitive.
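Taken together, the steps above amount to a simple first-match scoring loop. The sketch below is a hypothetical Python rendering of that flow; the function name and the keyword data are assumptions for illustration, not HackerEarth’s implementation.

```python
# Hypothetical sketch of the keyword scoring flow described above.
def score_submission(answer, keyword_scores):
    """Return the score for a candidate's answer.

    keyword_scores maps each keyword to its score. Keywords are tried
    in descending score order; matching is case-insensitive, and the
    first keyword found decides the score.
    """
    text = answer.lower()
    # Arrange keyword options in descending order of their scores
    for keyword, score in sorted(keyword_scores.items(),
                                 key=lambda kv: kv[1], reverse=True):
        if keyword.lower() in text:   # present at least once?
            return score
    return 0                          # no keyword matched

# Per the rules above, one keyword carries the question's maximum score.
scores = {"regression": 10, "correlation": 8}
print(score_submission("I would fit a REGRESSION model.", scores))  # 10
print(score_submission("Check the correlation first.", scores))     # 8
print(score_submission("Not sure.", scores))                        # 0
```

Note how the early return implements the “repeat with the next high-scoring keyword” rule: once a higher-value keyword matches, lower-value ones are never checked.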

This evaluation method is especially useful for evaluating questions related to data analytics (MS Excel), numerical problems, or fill-in-the-blank questions.

You can use this for roles like data scientist, financial analyst, market analyst, and business analyst, where a question can have many valid outcomes and each outcome carries a different impact.

For example, when candidates work on a data set, they may arrive at different conclusions, and you can assign a different score to each conclusion.

In the image below, if the output is 14, the candidate gets a 100% score. If the output is 9, or something close to it, the candidate gets an 80% score. For any other output besides those listed, the candidate scores zero.

This is how HackerEarth Assessments applies keyword evaluation to the answers submitted by candidates

Method #3: Manual evaluation

If you’d rather skip the AI and use your own judgment to evaluate candidate submissions, that option is available too! You can manually compare the candidate’s submission against the base answer you added while setting up the assessment.

Note: The base answer will also be present in the candidate’s report to make the comparison easier.

Witness a smoother evaluation experience with Subjective Match

For recruiters and hiring managers, our Subjective Match feature will change the way you evaluate candidate submissions.

Not only does it make the screening process seamless, it also cuts the time and effort spent manually checking each submission. And if you have only tried our AI evaluation method so far, we recommend exploring the keyword evaluation method too, and seeing the difference for yourself.

Until next time, happy hiring!
