Amazon Web Services (AWS) brings computer vision, natural language processing, speech recognition, text-to-speech, and machine translation within the reach of every developer. AWS application services let developers plug pre-built AI functionality into their apps without having to worry about the machine learning models that power these services.
Put your skills to the test and apply language and vision intelligence to a new or existing application! Use language and vision APIs including Amazon Comprehend, Amazon Transcribe, Amazon Polly, Amazon Lex, Amazon Translate, and Amazon Rekognition to gain customer insights, personalize content recommendations, identify celebrities, objects and scenes, and much more!
Use AWS Lambda and/or Amazon Alexa to run the code and business logic for your intelligent application. Using Lambda with AWS machine learning services enables a serverless architecture, meaning you can run your application without having to manage, scale, or operate any servers or infrastructure.
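As a minimal sketch of this serverless pattern, the Lambda handler below is triggered by an S3 upload event and calls Amazon Rekognition to label the new image. The bucket, object key, and label count here are illustrative assumptions, not values from the challenge.

```python
# Sketch: S3 upload event -> Lambda -> Amazon Rekognition label detection.
# All names (bucket, key) come from the triggering event; nothing is hard-coded.

def bucket_and_key(event):
    """Extract the bucket name and object key from an S3 event record."""
    record = event["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

def lambda_handler(event, context):
    import boto3  # bundled with the Lambda Python runtime

    bucket, key = bucket_and_key(event)
    rekognition = boto3.client("rekognition")
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=10,
    )
    labels = [label["Name"] for label in response["Labels"]]
    print(f"{key}: {labels}")
    return {"labels": labels}
```

Because the handler only reacts to events, there is no server to provision: S3 invokes Lambda on each upload, and you pay only for the invocations.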
See Resources to learn more about machine learning and serverless from AWS and Inspiration for some ideas on what you can build.
1. Register for the AWS Artificial Intelligence Challenge on this page.
2. Create an account on AWS.
3. Learn about machine learning and serverless from AWS with the documentation linked under Resources, and browse Inspiration for ideas on how to get started.
4. Build! Create a new project or add intelligence to an existing project. Shoot a demo video that shows your project in action, and prepare a written summary of your application and what it does.
5. Provide a way to access your project for judging and testing. Include a link to the repo hosting your AWS ML service code, along with all deployment files and instructions needed to test your project. (The GitHub or BitBucket code repository may be public or private; if it is private, share access with support@hackerearth.com.)
6. Submit your project on www.hackerearth.com/sprints/aws-hackathon/ before January 6th, 2019 @ 11:55 PM IST, and be sure to share the links to the repo and the deployment files.
You must use two or more of the following services: Amazon Comprehend, Amazon Transcribe, Amazon Polly, Amazon Lex, Amazon Translate, and/or Amazon Rekognition.
You must use AWS Lambda and/or Amazon Alexa to run your application's code. For example, you can use Lambda to call the language and vision API services when an event occurs (e.g. an image is uploaded to S3, or an HTTP request arrives). You can also use Lambda to process and store the outputs in other AWS services.
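One way to satisfy both rules at once is a single Lambda handler, invoked via an HTTP call (for example through API Gateway), that chains two of the listed services. The sketch below uses Amazon Comprehend for sentiment and Amazon Translate for translation; the event shape and language codes are assumptions for illustration.

```python
# Sketch: HTTP event -> Lambda -> Amazon Comprehend + Amazon Translate.
# Assumes an API Gateway proxy event whose JSON body has a "text" field.
import json

def parse_text(event):
    """Pull the 'text' field out of an API Gateway proxy event body."""
    body = json.loads(event.get("body") or "{}")
    return body.get("text", "")

def lambda_handler(event, context):
    import boto3  # bundled with the Lambda Python runtime

    text = parse_text(event)
    comprehend = boto3.client("comprehend")
    translate = boto3.client("translate")

    # Service 1: detect the sentiment of the incoming text.
    sentiment = comprehend.detect_sentiment(
        Text=text, LanguageCode="en"
    )["Sentiment"]

    # Service 2: translate the same text (English -> Spanish here).
    spanish = translate.translate_text(
        Text=text, SourceLanguageCode="en", TargetLanguageCode="es"
    )["TranslatedText"]

    return {
        "statusCode": 200,
        "body": json.dumps({"sentiment": sentiment, "translation": spanish}),
    }
```

The same handler could just as easily store its results in DynamoDB or S3, keeping the whole pipeline serverless.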