Participants are expected to choose any one problem statement and submit their solution. The submission should include a ReadMe that guides the judges in using your code (and explains how you solved the problem statement), along with any dependencies of your main script file (read the Rules for more details). Here are the problem statements of the hackathon:
1) You are given three videos of T20 matches. Each video is approximately 1 hour and 40 minutes long and covers one team's innings (to understand the problem statement that follows, we suggest you first get a glimpse of how the video looks). A common task in video processing is summarizing a given video by choosing only the "important" frames. We are interested in generating highlights of the match video, i.e. the deliveries corresponding to boundaries, sixes and wickets. To do this, the subtasks are as follows:
Hint: The scoreboard gets updated at the end of each event of interest. You can use this change to extract the highlights.
Having completed these subtasks, develop an algorithm to generate highlights for a match video.
You can find the videos here
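The hint above suggests watching the scoreboard overlay for updates. One minimal sketch of that idea, assuming the scoreboard sits in a fixed region of the frame (the crop coordinates below are hypothetical and would need to be measured from the actual videos), is to compare the cropped region between consecutive frames and flag frames where it changes sharply:

```python
import numpy as np

def scoreboard_change_frames(frames, box, threshold=10.0):
    """Return indices of frames where the scoreboard region changes sharply.

    frames    -- iterable of grayscale frames as 2-D numpy arrays
                 (e.g. decoded with OpenCV and converted to grayscale)
    box       -- (y0, y1, x0, x1) crop of the scoreboard overlay;
                 these coordinates are an assumption, not from the videos
    threshold -- mean absolute pixel difference that counts as an update
    """
    y0, y1, x0, x1 = box
    events = []
    prev = None
    for i, frame in enumerate(frames):
        region = frame[y0:y1, x0:x1].astype(np.float32)
        if prev is not None and np.abs(region - prev).mean() > threshold:
            events.append(i)
        prev = region
    return events
```

Each flagged index marks the end of an event of interest; the highlight clip would then be the stretch of video leading up to it. A real solution would also have to ignore replays and graphics overlays that touch the same region.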
2) You are given four sets of video clips. Each set has a few short clips of a batsman playing a particular shot. The task is to identify the type of shot played (for example, a cover drive or a pull shot) when the input to your algorithm is a short clip of a batsman playing a shot. The challenge lies in the small amount of data you have for your inferences (you can try scraping the web for more such clips, but no such open dataset is available). The main script file should take a video as input and identify the shot played by the batsman. (Note: the given videos also contain some initial unusable frames, which you can discard.)
Hint: Use a pre-existing pose-estimation algorithm to track how the batsman's pose changes while playing a shot.
You can find the videos here
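With so little data, a heavy classifier is hard to train; one low-data approach consistent with the hint is to reduce each clip to a trajectory of one keypoint (say, the wrist) produced by an off-the-shelf pose estimator such as OpenPose or MediaPipe, and classify by nearest template. The sketch below assumes the keypoints have already been extracted; the shot names and trajectories are purely illustrative:

```python
import numpy as np

def trajectory_feature(keypoints, n=16):
    """Resample a (T, 2) keypoint trajectory to n points and normalize it.

    keypoints would come from a pose estimator run on each frame;
    here they are just (x, y) pairs. The fixed-length, scale-normalized
    vector lets clips of different durations be compared directly.
    """
    kp = np.asarray(keypoints, dtype=np.float32)
    idx = np.linspace(0, len(kp) - 1, n)
    xs = np.interp(idx, np.arange(len(kp)), kp[:, 0])
    ys = np.interp(idx, np.arange(len(kp)), kp[:, 1])
    feat = np.concatenate([xs, ys])
    feat -= feat.mean()
    norm = np.linalg.norm(feat)
    return feat / norm if norm else feat

def nearest_shot(feature, templates):
    """templates: dict mapping shot name -> feature; returns closest label."""
    return min(templates, key=lambda s: np.linalg.norm(templates[s] - feature))
```

One template per shot type is the extreme case; with a handful of clips per set you could average features per class, which is essentially nearest-centroid classification.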
3) Chatbots are being deployed everywhere. Although a lot of work has gone into developing intelligent bots, it doesn't take long before a chatbot replies 'I don't understand your question'. You are expected to develop a chatbot that provides the user with score information and match statistics whenever prompted.
For example, 'What's the score right now in the Ind vs Pak match?' should make the bot return the score information (see below for more details) and also present a graph of the number of runs scored in the completed overs. We DO NOT want the bot to return a Google search result for a query about match information. And even if the bot cannot give correct responses in generic conversation, it should give the right answer for any such query about match information.
Score information the bot should give: current runs, overs bowled, runs left to chase (if it's a chase), and run rate.
Hint: The chatbot should fetch the information from a website that keeps track of scores (such as Cricinfo); APIs for building chatbots are readily available.
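Whatever source the bot scrapes, it will need to turn a raw scoreline into the fields listed above. A minimal sketch, assuming a scoreline format like 'IND 245/6 (42.3 ov)' (real feeds such as Cricinfo's differ and need their own parsing):

```python
import re

def parse_score(text):
    """Parse a scoreline like 'IND 245/6 (42.3 ov)' into score fields.

    The format here is an assumption for illustration; adapt the regex
    to whatever the scraped source actually returns.
    """
    m = re.search(r"(\w+)\s+(\d+)/(\d+)\s+\(([\d.]+)\s*ov\)", text)
    if not m:
        return None
    team, runs, wkts, overs = m.group(1), int(m.group(2)), int(m.group(3)), m.group(4)
    # Cricket overs are base-6: '42.3' means 42 overs and 3 balls.
    whole, _, balls = overs.partition(".")
    total_balls = int(whole) * 6 + int(balls or 0)
    run_rate = round(runs * 6 / total_balls, 2) if total_balls else 0.0
    return {"team": team, "runs": runs, "wickets": wkts,
            "overs": overs, "run_rate": run_rate}

def runs_to_chase(target, current_runs):
    """Runs still needed if the side is chasing `target`."""
    return max(target - current_runs, 0)
```

The per-over run counts that drive the requested graph would come from the same scraped page; plotting them is a one-liner with matplotlib once they are in a list.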
4) The task is to develop an automated ticketing system based on face recognition. The accuracy of the recognition system is crucial, and with a lot of work already done in this area, we want you to push it toward effective deployment. The algorithm should also be able to differentiate between a real face and a mere photo of a person, an important feature for preventing scams.
Types of tickets a person can buy: Hospitality Box, Pavilion End - Upper Deck, Pavilion End - Lower Deck, Grand Stand, Super Fan Box. The person first inputs a 5-second video, along with their name and the ticket purchased. The algorithm should add the person's face to the database along with the type of ticket purchased. Once registered, the algorithm should recognize the person and the ticket purchased when we input a video. To be clear, the output is a bounding box over the face, annotated with the person's name and the type of ticket purchased. If the person's face is not in the database, the output annotation should be 'Person Not Found'.
Hint: The algorithm should detect an action that distinguishes an actual human from a photo. You can program your algorithm to ask the user to perform this action (a blink, for example).
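The register-then-recognize flow above boils down to a small database of face embeddings keyed by name and ticket type. A minimal sketch, assuming the embeddings come from some face-recognition model (for example the 128-dimensional vectors produced by dlib via the `face_recognition` library); the distance threshold of 0.6 and the sample names are illustrative:

```python
import numpy as np

class TicketDatabase:
    """Maps face embeddings to (name, ticket) and produces the annotation.

    Embeddings here are plain numeric vectors; in a real system they
    would be the output of a face-recognition model, and the liveness
    check (e.g. blink detection) would run before annotate() is called.
    """

    def __init__(self, threshold=0.6):
        self.threshold = threshold      # max embedding distance for a match
        self.entries = []               # list of (embedding, name, ticket)

    def register(self, embedding, name, ticket):
        self.entries.append(
            (np.asarray(embedding, dtype=np.float32), name, ticket))

    def annotate(self, embedding):
        """Return the bounding-box label for a probe embedding."""
        e = np.asarray(embedding, dtype=np.float32)
        best = None
        for emb, name, ticket in self.entries:
            d = float(np.linalg.norm(emb - e))
            if d < self.threshold and (best is None or d < best[0]):
                best = (d, name, ticket)
        if best is None:
            return "Person Not Found"
        return f"{best[1]} - {best[2]}"
```

In the full pipeline, a face detector supplies the bounding box per frame, the embedding of the detected face is passed to `annotate()`, and the returned string is drawn above the box.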