The IPC 2014 Results
Competitors were judged on 6 domains chosen from previous competitions: elevators, floortile, nomystery, parking, spanner, and transport.  At the start of a six-week learning stage, competitors were provided with generators for these domains, a representative set of training problems, and guidelines for the evaluation distributions.  Errors in the domains and generators were corrected at this point.  After the learning stage was complete, competitors were provided with runs of their planners on selected training problems to confirm that each planner was performing as expected.  Issues found in those runs were corrected before the final results were collected.

For the final evaluation, 5 problems from each domain were randomly generated from the agreed distributions, giving 30 problem instances in total.  The planners were run on the Amazon EC2 cloud compute platform with the support of a generous grant from Amazon Web Services; each platform had the equivalent of 2 compute cores and 3.75 GB of memory and ran Ubuntu 12.04 LTS.  To account for variations in the actual computing resources available on the cloud platform, each planning system was run 30 times with and without domain knowledge on each problem instance.  There were three award categories, each with a first, second, and third place.

You can view the final quality table, as well as the best runs by planner.  Although it was not used to determine a winner for this competition, you may want to view the table of planner coverage.

The overall best quality award compares planners on the quality of the best plan they produced for each problem; a sketch of the scoring appears after the list below.  The awards for best overall quality (out of a possible score of 30) go to:

    First Place - MIPlan (Quality: 21.88)
    Second Place - Fast Downward Cedalion (Quality: 19.98)
    Third Place - Fast Downward SMAC (Quality: 17.45)
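
The exact scoring formula is not spelled out above; the following is a minimal sketch assuming the standard IPC quality metric (best-known cost divided by the planner's cost, with unsolved problems scoring 0).  The problem names and costs are hypothetical.

    # Per-problem quality score, assuming the standard IPC metric:
    # best-known cost divided by the planner's cost, and 0 for unsolved
    # problems.  Names and numbers below are hypothetical.

    def problem_score(planner_cost, best_known_cost):
        """Quality score for a single problem instance."""
        if planner_cost is None:  # the planner did not solve this problem
            return 0.0
        return best_known_cost / planner_cost

    def overall_quality(planner_costs, best_known_costs):
        """Sum of per-problem scores; with 30 instances the maximum is 30."""
        return sum(problem_score(planner_costs.get(p), best_known_costs[p])
                   for p in best_known_costs)

    # Hypothetical example with two problems:
    best_known = {"elevators-01": 10, "parking-01": 20}
    planner = {"elevators-01": 12}                 # parking-01 left unsolved
    print(overall_quality(planner, best_known))    # 10/12 + 0 = 0.833...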

The best learner award compares planners on their learning delta: the improvement in overall plan quality when domain knowledge was applied relative to when it was not (a sketch of the calculation follows the list below).  To keep the no-knowledge baseline fair, any problem solved by seven or more planners was removed from this calculation, leaving 24 problem instances.  The awards for best learner go to:

    First Place - Fast Downward Cedalion (Adjusted Quality Delta: 10.40)
    Second Place - Eroller (Adjusted Quality Delta: 9.97)
    Third Place - Fast Downward SMAC (Adjusted Quality Delta: 9.18)
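
The adjusted delta can be reproduced along these lines.  This is a sketch under the assumptions stated above (problems solved by seven or more planners are dropped, and the delta is quality with knowledge minus quality without); the function and variable names are illustrative.

    # Adjusted quality delta for the best learner award, assuming the
    # filtering rule described above: drop any problem solved by seven or
    # more planners, then sum (quality with knowledge - quality without).
    # All names here are illustrative.

    def adjusted_quality_delta(quality_with, quality_without, solvers_per_problem):
        """quality_with / quality_without map problem -> score in [0, 1];
        solvers_per_problem maps problem -> number of planners that solved it."""
        kept = [p for p, n in solvers_per_problem.items() if n < 7]
        return sum(quality_with.get(p, 0.0) - quality_without.get(p, 0.0)
                   for p in kept)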

Finally, the best basic solver award is restricted to planning systems that used only a single core algorithm.  The awards for best basic solver go to:

    First Place - Fast Downward SMAC (Quality: 17.45)
    Second Place - LLAMA (Quality: 14.30)
    Third Place - Eroller (Quality: 12.51)

Specific results files can be found for each planner:


Planner (BASIC?)              Results Files
AGAP                          Test Runs / First 15 runs / Second 15 runs
BadFF (BASIC)                 Test Runs / First 15 runs / Second 15 runs
BadFF-p                       Test Runs / First 15 runs / Second 15 runs
Ensemble-Roller (BASIC)       Test Runs / First 15 runs / Second 15 runs
Fast Downward Cedalion        Test Runs / First 15 runs / Second 15 runs
Fast Downward SMAC (BASIC)    Test Runs / First 15 runs / Second 15 runs
LIBaCOP                       Test Runs / First 15 runs / Second 15 runs
LIBaCOP2                      Test Runs / First 15 runs / Second 15 runs
LLAMA (BASIC)                 Test Runs / First 15 runs / Second 15 runs
MIPlan                        Test Runs / First 15 runs / Second 15 runs
RollEnt (BASIC)               Test Runs / First 15 runs / Second 15 runs