
PROBLEM STATEMENT

Given a dataset of distinct car images, can you automatically recognize the car model and make?
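
One way to frame this task is as multi-class image classification over make/model labels. Below is a minimal sketch of that framing, assuming a PyTorch/torchvision setup, an ImageFolder-style directory layout, and a ResNet-50 backbone; none of these choices is prescribed by the challenge.

    # Hypothetical sketch: fine-tune a pretrained CNN for car make/model
    # classification. Paths, layout and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet stats
                             std=[0.229, 0.224, 0.225]),
    ])

    # Assumes images are laid out as data/train/<make_model>/<image>.jpg
    train_set = datasets.ImageFolder("data/train", transform=transform)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # Swap the ImageNet classifier head for one sized to the car classes.
    model = models.resnet50(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for images, labels in loader:  # one epoch shown for brevity
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()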

SUBMISSION DEADLINE

Please submit the final repository, including documentation, on or before 17 June 2019, 6.00pm (SGT).

Grab's platform offers a vibrant set of income alternatives for our driver-partners. This provides us with a unique opportunity to leverage advanced, automated digitization.

How can we help passengers find their rides more quickly, and in turn increase earning opportunities for our driver-partners, by automating the process of digitizing and understanding imagery at the highest possible quality?

You will be judged on the following criteria:

Code Quality

Code Quality, also known as Software Quality, is generally defined in two ways:

  • Functional quality: how well the code conforms to the functional specifications and requirements of the project.

  • Structural quality: how maintainable and robust the code is.

Creativity in Problem-solving

Creativity speaks volumes about your ability to make sense of the given data, derive tangible results relevant to the business needs of an organization, and present the findings, all while keeping the problem statement in mind.

 


Feature Engineering

 

Feature Engineering refers to the process of selecting and transforming variables when creating a data model for a given problem statement. While you will be given a general dataset that relates to the problem statement, you need to create the "features" that make your models and algorithms work as intended. You can use standard features, including open-source implementations, create your own features, or learn features automatically during training.

 

Note that your code needs to be self-contained, i.e. it should automatically create your desired features so that they can be used in the evaluation on the hold-out test set.
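
As one illustration of learning features automatically, the sketch below uses a pretrained CNN as a fixed feature extractor, so the same script can regenerate its features on any dataset folder, including the hold-out set. The paths and the choice of ResNet-50 are assumptions, not part of the brief.

    # Hypothetical sketch: derive fixed CNN embeddings for every image so
    # the pipeline can rebuild its features automatically on any folder.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    # Assumed layout: data/<split>/<make_model>/<image>.jpg
    dataset = datasets.ImageFolder("data/train", transform=transform)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32)

    backbone = models.resnet50(pretrained=True)
    backbone.fc = nn.Identity()  # drop the classifier: 2048-d embeddings
    backbone.eval()

    features, labels = [], []
    with torch.no_grad():
        for images, targets in loader:
            features.append(backbone(images))
            labels.append(targets)
    features = torch.cat(features)  # shape: (num_images, 2048)
    labels = torch.cat(labels)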

Model Performance

Model performance describes how well a model represents the data and how well it is likely to work in practice. In this challenge, we will perform a hold-out model evaluation: you are given a training dataset, and our evaluators hold a test dataset that the model has never seen. This test dataset assesses the likely future performance of the model.
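
To estimate that future performance before submitting, you can carve a local hold-out split out of the training data. A minimal sketch with scikit-learn follows; the dummy arrays, the 80/20 ratio and the stratification are all assumptions.

    # Hypothetical sketch: simulate the evaluators' hold-out set locally by
    # reserving a stratified slice of the training data as unseen test data.
    import numpy as np
    from sklearn.model_selection import train_test_split

    # Dummy stand-ins for real features and make/model labels.
    X = np.random.rand(100, 2048)
    y = np.random.randint(0, 5, size=100)

    X_train, X_holdout, y_train, y_holdout = train_test_split(
        X, y,
        test_size=0.2,      # 20% reserved; the ratio is an assumption
        stratify=y,         # preserve class balance across the split
        random_state=42,    # reproducible split
    )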

 

Note that your model must output a confidence score for every classification.
 

Submissions will be evaluated on accuracy, precision, and recall. Given that several solutions to this problem have been published before, we recommend you emphasise how your solution differentiates itself, for example along the other listed evaluation criteria (originality, code quality, etc.).
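
A minimal sketch of both requirements follows: a helper that returns a softmax confidence for every prediction, and the three metrics computed with scikit-learn. The function name, the dummy labels and the macro averaging are assumptions.

    # Hypothetical sketch: per-prediction confidence scores plus accuracy,
    # precision and recall on hold-out labels. Names are assumptions.
    import numpy as np
    import torch
    import torch.nn.functional as F
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    def predict_with_confidence(model, images):
        """Return predicted class indices and their softmax confidences."""
        with torch.no_grad():
            probs = F.softmax(model(images), dim=1)
            confidence, predicted = probs.max(dim=1)
        return predicted, confidence

    # Dummy hold-out labels and predictions for illustration only.
    y_true = np.array([0, 1, 2, 2, 1, 0])
    y_pred = np.array([0, 1, 2, 1, 1, 0])

    print("accuracy :", accuracy_score(y_true, y_pred))
    # Macro averaging weights every make/model class equally; this is one
    # reasonable choice, not the challenge's stated one.
    print("precision:", precision_score(y_true, y_pred, average="macro"))
    print("recall   :", recall_score(y_true, y_pred, average="macro"))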

QUALIFICATION CRITERIA

  • Submit the correct link to your repository

  • Make sure your repository includes the complete codebase (all commits pushed, documentation complete, etc.)

  • Solve only one of the challenges mentioned on the website

  • Do not plagiarise the code. That will be grounds for instant disqualification

  • The link to your repository must be publicly accessible from the time of submission.

SUBMISSION GUIDELINES

You can submit your code (either as a codebase or a Jupyter notebook) by uploading it to a public GitHub or similar repository. Instructions for submitting the repository link will be sent to you via email once you accept the challenge on https://www.aiforsea.com/