Image credit: Squidoodle
AI has made significant progress in recent years, reaching superhuman performance on a wide range of tasks. Humans are no longer the best Go players, quiz-show contestants, or even, in some respects, the best doctors. Yet state-of-the-art AI cannot compete with simple animals at adapting to unexpected changes in the environment. This competition pits our best AI approaches against the animal kingdom to determine whether the great successes of AI are ready to compete with the great successes of evolution at their own game.
The Playground (early version)
We are proposing a new kind of AI competition. Instead of providing a specific task, we will provide a well-defined arena (available at the end of April) and a list of cognitive abilities that we will test for in that arena. The tests will all use the same agent with the same inputs and actions, and the goal will always be the same: retrieve food items by interacting with previously seen objects. However, the exact layout and variations of the tests will not be released until after the competition.
We expect this to be a hard challenge. Winning this competition will require an AI system that can behave robustly and generalise to unseen cases. A perfect score will require a breakthrough in AI, well beyond current capabilities. However, even small successes will show that it is possible not just to find useful patterns in data, but to extrapolate from them to an understanding of how the world works.
July 8th: Submission now available. See here for more details.
July 1st: We have officially released the competition (v1.0 is available, and prize information and all categories have been announced). We are still testing a few things with the online submission site; first submissions should be possible by July 8th.
April 30th: We are excited to announce a partnership with the Whole Brain Architecture Initiative, who are sponsoring a standalone prize of $4,000 for the most biologically plausible entry. We hope this will lead to some exciting entries that not only behave like their biological counterparts, but are structured like them as well. The WBA prize winner need not be the best overall agent in the competition, but will have to pass a simple baseline to be eligible. More details about this extra prize will be released closer to the competition start date at the end of June.
Experimental environments will be created in the Unity ML-Agents Toolkit, and the specification of the building blocks for all tasks will be freely available to all participants of the competition. Participants may use any methods and experiment as much as they like in preparation for the competition, but the exact details of the tests will be kept secret. Performance will be measured on a range of tasks, from simple combinations of the building blocks to complex tasks, each designed to probe particular cognitive abilities. The top prize will be awarded for performance across the full range of tests.
|Thanks to Amazon for sponsoring the research credit prizes and also the infrastructure for running all the tests required over the course of the competition.|
|Thanks to Unity Technologies for sponsoring prizes and providing the environment, both the Unity Platform and ML-Agents which we have used to build the competition environment.|
|Thanks to the Whole Brain Architecture Initiative for sponsoring a prize for the most biologically inspired entry.|
|Thanks to the Artificial Intelligence Journal (AIJ) for funding to be put towards travel prizes to bring top competition entrants together to present their work.|
|Thanks to EvalAI for hosting the competition.|
|Thanks to GoodAI for supporting the competition from its initial planning stages. Without them it would not have been possible for it to grow into the project it has become.|
|Thanks to the Leverhulme Trust and the Leverhulme Centre for the Future of Intelligence for funding the research that went into designing and building this competition.|