Beat Auto-sklearn beta (CiML workshop challenge, winner: Sebastien)

Organized by lise_sun

First phase: "Final"
Description: Single-phase challenge.
Start: Dec. 7, 2016, 11 a.m. UTC
Competition ends: Dec. 9, 2016, 5 p.m. UTC

Auto-sklearn against the world!


This is a beta version of the Beat Auto-sklearn challenge. So far, first place on the Beat Auto-sklearn leaderboard is still held by Auto-sklearn itself! We invite you to come together and build a SUPER ENSEMBLE MODEL to beat Auto-sklearn!

In a nutshell, Auto-sklearn uses state-of-the-art Bayesian optimization to configure a flexible machine learning pipeline implemented in scikit-learn. The original Beat Auto-sklearn challenge asked participants to select the hyperparameters of a supervised classification task MANUALLY.

Now, with this beta version, you have more possibilities: instead of tweaking by hand, you can write your own code to optimize the hyperparameters of the same task.
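As a minimal starting point, hyperparameter-optimization code might look like a randomized search with scikit-learn. This is only a sketch: the dataset below is a stand-in, and the model and search space are illustrative, not the challenge's actual task.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Stand-in dataset; the real challenge data comes from the competition site.
X, y = load_digits(return_X_y=True)

# Random search over a small hyperparameter space: one simple
# alternative to tuning by hand.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [None, 5, 10, 20],
        "max_features": ["sqrt", "log2", None],
    },
    n_iter=10,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Any optimizer will do (grid search, Bayesian optimization, evolutionary search); the point of this beta is that the search itself is now yours to program.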

 

CHALEARN

This challenge is brought to you by ChaLearn. Contact the organizers.

Evaluation

As a demonstration of the big 'coopetition' concept, the goal of this challenge is to gather top solutions and build a super ensemble to beat Auto-sklearn. Therefore, we rank participants by their contribution to the ensemble result, namely, their 'points'. However, if several participants happen to have the same final points, they are ranked by their solo performance, i.e. their 'score'.

More precisely, we maintain a 'bag' of good models, i.e. our SUPER ENSEMBLE MODEL: every time a participant submits a model, we compute a 'tentative ensemble performance' to see whether the submission improves the ensemble result. If so, we add it to the 'bag', update the 'ensemble performance', reward the submission, and increase the user's points by 1. Otherwise, the model is not included in the ensemble.

Our final goal is to achieve an ensemble score better than 0.7815 -- Auto-sklearn's score.

The leaderboard

The following scores will be shown on the leaderboard:
  1. Points

    This is what determines the final rankings.

    It is the accumulated reward of a user: every time the user submits a good model, he or she earns 1 point. If none of the user's submissions is considered helpful, the points stay at 0. As the ensemble improves over time, it becomes harder to earn points; this is how we encourage people to participate early.

  2. Score

    This is the solo performance of a participant's submission. It's computed by the AutoML scoring system, as described here.

  3. Ensemble score

    It is the performance of the ensemble at the moment of submission, denoted as Sp(t).

    Say a participant submits a model whose prediction is P_user at time t. The tentative ensemble prediction at t is defined as:

    P_tent(t) = [n_c/(n_c+1)] * P(t-1) + [1/(n_c+1)] * P_user

    Let S_tent(t) be the score of P_tent(t). Then:

    if S_tent(t) > Sp(t-1): P(t) = P_tent(t), n_c = n_c + 1, reward = 1, Sp(t) = S_tent(t)

    otherwise: P(t) = P(t-1), reward = 0, Sp(t) = Sp(t-1)

    where P(t) and P(t-1) are the ensemble predictions at times t and t-1, and n_c is the number of classifiers in the 'bag'.

  4. Submission rewarded?

    It indicates whether the current submission is considered a good model (and therefore included in the ensemble): 1 for yes, 0 for no.
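For concreteness, the accept/reject rule described above can be sketched in Python. This is a toy version: plain accuracy stands in for the actual AutoML metric, and all function and variable names here are ours, not the scoring system's.

```python
import numpy as np

def try_submission(P_ens, n_c, S_prev, P_user, y_true):
    """Greedy ensemble update: accept a submission only if the
    running average of predictions improves the score."""
    # Tentative ensemble prediction: weighted running average.
    P_tent = (n_c / (n_c + 1)) * P_ens + (1 / (n_c + 1)) * P_user
    # Score the tentative ensemble (toy metric: accuracy of the argmax).
    S_tent = np.mean(P_tent.argmax(axis=1) == y_true)
    if S_tent > S_prev:
        return P_tent, n_c + 1, S_tent, 1   # accept, reward = 1
    return P_ens, n_c, S_prev, 0            # reject, reward = 0

# Toy example: 4 samples, 2 classes.
y = np.array([0, 1, 1, 0])
P = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])  # 75% acc
n_c, S = 1, np.mean(P.argmax(axis=1) == y)
# A submission that fixes the misclassified second sample.
P_user = np.array([[0.9, 0.1], [0.1, 0.9], [0.4, 0.6], [0.7, 0.3]])
P, n_c, S, reward = try_submission(P, n_c, S, P_user, y)
print(n_c, round(S, 2), reward)  # -> 2 1.0 1
```

Note the early-participation incentive built into this rule: as the bag grows, a new submission's weight 1/(n_c+1) shrinks, so improving the ensemble gets harder over time.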



Terms and Conditions Page

Challenge Rules

  • General Terms: This challenge is governed by the General ChaLearn Contest Rule Terms, the Codalab Terms and Conditions, and the specific rules set forth below.
  • Announcements: To receive announcements and be informed of any change in rules, the participants must provide a valid email.
  • Conditions of participation: Participation requires complying with the rules of the challenge. The top 3 winners will be invited for dinner on Friday evening, December 9, 2016.
  • Workshop: The participants are encouraged to attend the CiML workshop at NIPS 2016 in Barcelona.
  • Registration: The participants must register on the competition site and provide a valid email address. Teams must register only once and provide a group email, which is forwarded to all team members.
  • Anonymity: The participants who do not present their results at the workshop may choose to remain anonymous by using a pseudonym.
  • Submission method: The results must be submitted through this competition site. The participants can make up to 20 submissions per day. Using multiple accounts to increase the number of submissions is NOT permitted. The entries must be formatted as specified on the Evaluation page.
  • Awards: Free dinner on Friday evening!

 

