Zicheng Liu*, Siyuan Li*, Di Wu, Zhiyuan Chen, Lirong Wu, Stan Z. Li†
We propose a novel automatic mixup (AutoMix) framework, where the mixup policy is parameterized and serves the ultimate classification goal directly. Specifically, AutoMix reformulates mixup classification as two sub-tasks (i.e., mixed sample generation and mixup classification) with corresponding sub-networks and solves them in a bi-level optimization framework. For generation, a learnable lightweight mixup generator, Mix Block, is designed to generate mixed samples by modeling patch-wise relationships under the direct supervision of the corresponding mixed labels. To prevent the degradation and instability of bi-level optimization, we further introduce a momentum pipeline to train AutoMix in an end-to-end manner. Extensive experiments on nine image benchmarks demonstrate the superiority of AutoMix over state-of-the-art methods in various classification scenarios and downstream tasks.
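For a concrete picture of the pipeline, here is a minimal PyTorch sketch of the two sub-tasks described above. All names (`MixBlock`, `encoder_q`/`encoder_k`, `automix_step`) and architectural details are illustrative assumptions, not the actual OpenMixup or timm API; in particular, the real method decouples the two sub-networks via the bi-level formulation rather than a single joint loss. Please refer to the released code for the faithful implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixBlock(nn.Module):
    """Lightweight mixup generator: predicts a per-pixel mixing mask from a
    pair of feature maps and the mixing ratio lambda (patch-wise relations).
    This tiny conv head is a placeholder for the paper's cross-attention design."""

    def __init__(self, in_channels):
        super().__init__()
        # +1 input channel embeds lambda as a constant feature map.
        self.net = nn.Sequential(
            nn.Conv2d(2 * in_channels + 1, in_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 1, kernel_size=1),
        )

    def forward(self, feat_a, feat_b, lam):
        b, _, h, w = feat_a.shape
        lam_map = feat_a.new_full((b, 1, h, w), lam)
        # Mask in [0, 1]; the caller upsamples it to the input resolution.
        return torch.sigmoid(self.net(torch.cat([feat_a, feat_b, lam_map], dim=1)))


@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    """EMA update of the momentum encoder whose features feed the Mix Block,
    stabilizing the bi-level optimization."""
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)


def automix_step(x, y, encoder_q, encoder_k, mix_block, classifier, alpha=2.0):
    """One training step: generate mixed samples under mixed-label supervision,
    then classify them. `encoder_*` return feature maps; `classifier` pools
    and projects them to logits."""
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    perm = torch.randperm(x.size(0), device=x.device)

    # Features from the frozen momentum encoder drive mask generation.
    with torch.no_grad():
        feat_a, feat_b = encoder_k(x), encoder_k(x[perm])
    mask = F.interpolate(mix_block(feat_a, feat_b, lam), size=x.shape[-2:],
                         mode='bilinear', align_corners=False)
    x_mix = mask * x + (1.0 - mask) * x[perm]

    # Mixed-label cross-entropy supervises the classifier and, through x_mix,
    # the Mix Block (shown here as a single joint loss for brevity).
    logits = classifier(encoder_q(x_mix))
    loss = (lam * F.cross_entropy(logits, y)
            + (1.0 - lam) * F.cross_entropy(logits, y[perm]))

    momentum_update(encoder_q, encoder_k)
    return loss
```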
We plan to update this timm implementation of AutoMix in the coming months. Please watch this repository for the latest release, or use our OpenMixup implementation in the meantime.
- Image Classification Code with OpenMixup [code]
- CIFAR-10/100 and Tiny-ImageNet Training and Validation Code with timm [code]
- ImageNet-1K Training and Validation Code [code]
- Image Classification on Google Colab and Notebook Demo
Please check INSTALL.md for installation instructions.
Please refer to the OpenMixup implementations for CIFAR-100 and Tiny-ImageNet.
See TRAINING.md for ImageNet-1K training and validation instructions, or refer to our OpenMixup implementation. Pre-trained models are released on OpenMixup.
Please refer to mixup_benchmarks in the OpenMixup implementation for results and models.
This project is released under the Apache 2.0 license.
Our implementation is mainly based on the following codebases. We sincerely thank the authors for their wonderful work.
- pytorch-image-models: PyTorch image models, scripts, pretrained weights.
- OpenMixup: CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark.
If you find this repository helpful, please consider citing:
```
@InProceedings{liu2022automix,
  title={AutoMix: Unveiling the Power of Mixup for Stronger Classifiers},
  author={Zicheng Liu and Siyuan Li and Di Wu and Zhiyuan Chen and Lirong Wu and Jianzhu Guo and Stan Z. Li},
  booktitle={European Conference on Computer Vision},
  pages={441--458},
  year={2022},
}
```