Case Study

Building predictive models from banking and financial data

About Societe Generale

Societe Generale generates massive amounts of banking and financial data every day. SG wanted to put this data to better use by leveraging the power of crowdsourcing for data analysis and predictive modeling. Having conducted in-person hackathons for the past couple of years, Societe Generale wanted to scale its flagship event—Brainwaves—and reach a wider audience with a Machine Learning theme.

What’s Societe Generale's story?

Societe Generale Global Solutions Centre (SG GSC) is a subsidiary of Societe Generale—the French multinational banking and financial services company. SG GSC focuses on developing global best practices to promote the strategic objectives of the SG group in the long term. It provides services in the areas of Application Development, Infrastructure Management, Business Process Management, and Knowledge Process Management, to Societe Generale’s business lines around the world.


  • Build mindshare in the Data Science community in India
  • Crowdsource innovative solutions
  • Reach out to the best developers, from university students and working professionals alike

Machine Learning Hackathon as a solution

The hackathon was conducted on HackerEarth’s Data Science platform. Our platform allowed Societe Generale to:

  • Create a customized Machine Learning (ML) challenge using their data
  • Manage and validate user submissions efficiently

Customized auto-evaluation mechanism and metrics

Our ML platform is equipped with a customized auto-evaluation mechanism that lets you release data sets to the public. The data is divided into two sets: a training data set and a test data set.

The training data set is the data on which users train their models. Once the models are trained, users predict on the test data set and submit their predictions.
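
As an illustrative sketch only (not Societe Generale's actual models or data), the train-then-predict workflow might look like this in Python, using a toy one-dimensional threshold classifier; the data values and function names are assumptions:

```python
def fit_threshold(train):
    """Fit a 1-D threshold classifier on labeled training data
    (a list of (feature, label) pairs) by placing the decision
    boundary at the midpoint of the two class means."""
    xs0 = [x for x, y in train if y == 0]
    xs1 = [x for x, y in train if y == 1]
    mean0 = sum(xs0) / len(xs0)
    mean1 = sum(xs1) / len(xs1)
    return (mean0 + mean1) / 2

def predict(threshold, xs):
    """Predict a label for each unlabeled test feature."""
    return [1 if x >= threshold else 0 for x in xs]

# Toy training data: (feature, label) pairs.
train = [(1.0, 0), (1.5, 0), (3.0, 1), (3.5, 1)]
# Unlabeled test features; predictions on these would be submitted.
test = [0.9, 3.2]

threshold = fit_threshold(train)   # midpoint of class means: 2.25
preds = predict(threshold, test)   # -> [0, 1]
```

The submitted file would then contain one prediction per test-set row, which the platform scores against the held-back true labels.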

After users submit their prediction files, the models are evaluated on 50% of the test data, and scores are awarded to the participants in real time.

Once the contest is over, the models are also evaluated on the remaining 50% of the test data to determine the final scores for the submissions.

Note: Evaluating only 50% of the test data during the online phase discourages participants from overfitting to the leaderboard.

Read more about our evaluation mechanism for Machine Learning platforms here.

30-hour build challenge

The event received an overwhelming response.

  • More than 50% of the participants were experienced professionals
  • Students from the top 10 premier engineering institutes of India participated in the event

What was the outcome?

1893 Participants

306 Submissions

3 Winners

$10,000 Prize

“Happy to say our event was a success and HackerEarth was the right choice both from the platform’s and the people’s perspective.”

Rajesh Karuvat,
Sr. Vice President, Societe Generale

Why HackerEarth?

Sprint (as a managed service)

Large developer community

Innovate and build a better business