Official implementation for "CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On" from CVPRW 2020.
Project page: https://minar09.github.io/cpvtonplus/.
Saved/Pre-trained models: Checkpoints
- Install the requirements
- Prepare the dataset
- Train GMM network
- Get warped clothes for the training set with the trained GMM network, and copy them inside the `data/viton_resize/train` directory (see the sketch after this list)
- Train TOM network
- Test/evaluate with test set
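The warped-clothes step above is not spelled out in the commands later in this README. A minimal sketch, assuming `test.py` accepts the same `--datamode`/`--data_list` options shown in the Testing section and writes its outputs under a `result/` directory (the `result/GMM/train/warp-cloth` and `warp-mask` paths are assumptions, adjust them to wherever the script actually writes results):

```python
import shutil
import subprocess

# Run the trained GMM on the *training* pairs; flags mirror the Testing commands
# below, and --datamode train / train_pairs.txt are assumptions about the interface.
subprocess.run([
    "python", "test.py", "--name", "GMM", "--stage", "GMM", "--workers", "4",
    "--datamode", "train", "--data_list", "train_pairs.txt",
    "--checkpoint", "checkpoints/GMM/gmm_final.pth",
], check=True)

# Copy the generated warped clothes and masks into the training data directory.
shutil.copytree("result/GMM/train/warp-cloth", "data/viton_resize/train/warp-cloth")
shutil.copytree("result/GMM/train/warp-mask", "data/viton_resize/train/warp-mask")
```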
This implementation was built and tested with PyTorch 0.4.1. Run `pip install -r requirements.txt` to install the dependencies.
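A quick, repo-agnostic way to confirm the environment matches:

```python
import torch

# This codebase targets PyTorch 0.4.1; newer versions may need minor API changes.
print(torch.__version__)           # expected: 0.4.1
print(torch.cuda.is_available())   # GPU support is recommended for training
```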
- Run `python data_download.py`
- Run `python dataset_neck_skin_correction.py`
- Run `python body_binary_masking.py`
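After these scripts finish, a quick sanity check of the prepared data can be useful. The sketch below only assumes the `data/viton_resize/train` path used above; the exact subfolder names depend on the download and correction scripts:

```python
import os

# Hypothetical sanity check: list each subfolder of the prepared training set
# (cloth, image, parse, pose, ...) and how many files it contains.
root = "data/viton_resize/train"
for name in sorted(os.listdir(root)):
    sub = os.path.join(root, name)
    if os.path.isdir(sub):
        print(f"{name}: {len(os.listdir(sub))} files")
```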
Run `python train.py` with your specific options for the GMM and TOM stages. For example:
- GMM: `python train.py --name GMM --stage GMM --workers 4 --save_count 5000 --shuffle`
- TOM: `python train.py --name TOM --stage TOM --workers 4 --save_count 5000 --shuffle`
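Checkpoints are written under `checkpoints/<name>/`, as the testing commands below suggest. A minimal sketch for inspecting a saved checkpoint, assuming it is a plain state dict saved with `torch.save` (the filename is taken from the Testing section):

```python
import torch

# Load a checkpoint on CPU and peek at its contents; in CP-VTON-style code
# checkpoints are typically OrderedDicts mapping parameter names to tensors.
state = torch.load("checkpoints/GMM/gmm_final.pth", map_location="cpu")
print(type(state))
print(list(state.keys())[:5])
```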
Run `python test.py` with your specific options. For example:
- GMM: `python test.py --name GMM --stage GMM --workers 4 --datamode test --data_list test_pairs.txt --checkpoint checkpoints/GMM/gmm_final.pth`
- TOM: `python test.py --name TOM --stage TOM --workers 4 --datamode test --data_list test_pairs.txt --checkpoint checkpoints/TOM/tom_final.pth`
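For the evaluation part, one common choice is SSIM between the TOM try-on outputs and the original person images. A hypothetical sketch using scikit-image (the `result/TOM/test/try-on` output path and the ground-truth path are assumptions about where `test.py` writes results; this script is not part of the repo):

```python
import os
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

result_dir = "result/TOM/test/try-on"    # assumed output folder of test.py
gt_dir = "data/viton_resize/test/image"  # original person images as reference

# Compute grayscale SSIM for each generated image against its reference.
scores = []
for fname in sorted(os.listdir(result_dir)):
    out = np.array(Image.open(os.path.join(result_dir, fname)).convert("L"))
    gt = np.array(Image.open(os.path.join(gt_dir, fname)).convert("L"))
    scores.append(structural_similarity(out, gt, data_range=255))
print("mean SSIM over", len(scores), "images:", np.mean(scores))
```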
Download the pre-trained models from here: Checkpoints. Then follow the same steps as in Testing to test/evaluate our model.
Please cite our paper in your publications if it helps your research:
@InProceedings{Minar_CPP_2020_CVPR_Workshops,
title={CP-VTON+: Clothing Shape and Texture Preserving Image-Based Virtual Try-On},
author={Minar, Matiur Rahman and Thai Thanh Tuan and Ahn, Heejune and Rosin, Paul and Lai, Yu-Kun},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}
This implementation is largely based on the PyTorch implementation of CP-VTON. We are extremely grateful for their public implementation.