Towards automated computer vision: analysis of the AutoCV challenges 2019


Bibliographic Details
Published in: Pattern Recognition Letters, 2020-07, Vol. 135, pp. 196-203
Main Authors: Liu, Zhengying, Xu, Zhen, Escalera, Sergio, Guyon, Isabelle, Jacques Junior, Julio C.S., Madadi, Meysam, Pavao, Adrien, Treguer, Sebastien, Tu, Wei-Wei
Format: Article
Language: English
Description
Summary:
• Review of the AutoCV challenges, which propose a novel any-time metric.
• Winning solutions with good generalisability are open-sourced.
• Data augmentation has proven to help improve any-time performance.
• The any-time metric has proven to be strictly harder than the fixed-time metric.
• A rich repository of 25 datasets is formatted to enable meta-learning research.

We present the results of recent challenges in Automated Computer Vision (AutoCV, renamed here for clarity AutoCV1 and AutoCV2, 2019), which are part of a series of challenges on Automated Deep Learning (AutoDL). These two competitions aimed at finding fully automated solutions for classification tasks in computer vision, with an emphasis on any-time performance. The first competition was limited to image classification, while the second one included both images and videos. Our design required participants to submit their code on a challenge platform for blind testing on five datasets, covering both training and testing, without any human intervention whatsoever. Winning solutions adopted deep learning techniques based on already published architectures, such as AutoAugment, MobileNet and ResNet, to reach state-of-the-art performance within the time budget of the challenge (only 20 minutes of GPU time). The novel contributions include strategies to deliver good preliminary results at any time during the learning process, such that a method can be stopped early and still deliver good performance. This feature is key for the adoption of such techniques by data analysts who want to rapidly obtain preliminary results on large datasets and to speed up the development process. The soundness of our design was verified in several respects: (1) little overfitting to the on-line leaderboard, which provided feedback on five development datasets, was observed compared to the final blind testing on five separate test datasets, suggesting that winning solutions might generalize to other computer vision classification tasks; (2) error bars on the winners' performance allow us to say with confidence that they performed significantly better than the baseline solutions we provided; (3) the ranking of participants according to the any-time metric we designed, namely the Area under the Learning Curve, was different from that of the fixed-time metric, i.e., the AUC at the end of the fixed time budget. We released all winning solutions under open-source licenses. At the end of the AutoDL challenge series, all data of the challenges will be made publicly available.
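
The any-time metric mentioned in the abstract, the Area under the Learning Curve (ALC), scores a submission by integrating its performance curve over the time budget, so methods that reach good scores early are rewarded. Below is a minimal illustrative sketch in Python; the function name and the simple step-function integration over linearly normalized time are our assumptions here, whereas the actual challenge applies a transformation of the time axis described in the paper.

import numpy as np

def area_under_learning_curve(timestamps, scores, time_budget):
    """Toy any-time metric: area under a step-wise learning curve.

    timestamps: wall-clock times (seconds) at which predictions were scored.
    scores: corresponding evaluation scores (e.g. AUC), same length.
    time_budget: total time allowed (e.g. 1200 s for 20 min of GPU time).
    The curve is treated as a step function (each score holds until the next
    prediction), time is normalized to [0, 1], and the score is 0 before the
    first prediction.
    """
    t = np.asarray(timestamps, dtype=float) / time_budget
    s = np.asarray(scores, dtype=float)
    t = np.append(t, 1.0)   # the last score holds until the budget expires
    s = np.append(s, s[-1])
    return float(np.sum(s[:-1] * np.diff(t)))  # sum of score * interval width

# Same final score, but earlier intermediate results yield a larger area:
print(area_under_learning_curve([60, 300, 900], [0.70, 0.80, 0.85], 1200))    # ~0.75
print(area_under_learning_curve([900, 1000, 1100], [0.70, 0.80, 0.85], 1200)) # ~0.20

Because two runs with the same final score receive different areas depending on how early their intermediate results arrive, the ranking under the any-time metric can differ from the fixed-time ranking (AUC at the end of the budget), as observed in point (3) of the abstract.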
ISSN: 0167-8655
EISSN: 1872-7344
DOI: 10.1016/j.patrec.2020.04.030