(Demo video: SLFIR-demo-video_4.mp4)
For a higher-quality version, click Demo Video to watch it on YouTube, or click Demo Video to download an 80 MB copy.
This repository is the official PyTorch implementation of our paper, *Sketch Less Face Image Retrieval: A New Challenge*.
Please click the FS2K-SDE link to access the dataset.
The source code for this project is in the src folder of the repository.
To make our work easy to reproduce and build on, we provide the full training code for stage1 and stage2, the test code (for stage2 only), the trained model files, and the log files generated during training.
To train stage1, simply enter the following command in the terminal:
CUDA_VISIBLE_DEVICES=0 python train.py \
  --dataset_name Face-1000 \
  --root_dir {your_root_proj_path} \
  --nThreads 4 \
  --backbone_lr 5e-4 \
  --lr 5e-3 \
  --max_epoch 200 \
  --feature_num 16
Alternatively, edit the parser parameters at the bottom of train.py and run it however you like.
Parameters:
--dataset_name, type=str, default='Face-1000', help='Face-1000 / Face-450'
--root_dir, type=str, default='./', help='The root directory of the entire project file'
--nThreads, type=int, default=4
--backbone_lr, type=float, default=0.0005, help='Learning rate of the backbone network'
--lr, type=float, default=0.005, help='Learning rate of LSTM or MLP'
--max_epoch, type=int, default=200
--print_freq_iter, type=int, default=1, help='Step rate for printing debug messages'
--feature_num, type=int, default=16, help='Number of features in the last layer of the neural network'
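The flags above map onto a standard argparse parser. Below is an illustrative sketch of what the parser block at the bottom of train.py could look like, built only from the parameter list above; the actual script may differ in details such as help texts and ordering:

```python
import argparse

# Illustrative reconstruction of the stage1 argument parser
# (flag names and defaults taken from the parameter list above).
parser = argparse.ArgumentParser(description='SLFIR stage1 training (sketch)')
parser.add_argument('--dataset_name', type=str, default='Face-1000',
                    help='Face-1000 / Face-450')
parser.add_argument('--root_dir', type=str, default='./',
                    help='The root directory of the entire project file')
parser.add_argument('--nThreads', type=int, default=4)
parser.add_argument('--backbone_lr', type=float, default=0.0005,
                    help='Learning rate of the backbone network')
parser.add_argument('--lr', type=float, default=0.005,
                    help='Learning rate of LSTM or MLP')
parser.add_argument('--max_epoch', type=int, default=200)
parser.add_argument('--print_freq_iter', type=int, default=1,
                    help='Step rate for printing debug messages')
parser.add_argument('--feature_num', type=int, default=16,
                    help='Number of features in the last layer')

# Parse an empty list here so the defaults apply regardless of sys.argv;
# the real script would call parser.parse_args() instead.
args = parser.parse_args([])
print(args.dataset_name, args.backbone_lr, args.max_epoch)
```

Editing the default= values in this block has the same effect as passing the corresponding flags on the command line.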
To train stage2, simply enter the following command in the terminal:
CUDA_VISIBLE_DEVICES=0 python train.py \
  --dataset_name Face-1000 \
  --root_dir {your_root_proj_path} \
  --batchsize 32 \
  --nThreads 4 \
  --lr 5e-4 \
  --max_epoch 300 \
  --feature_num 16
Alternatively, edit the parser parameters at the top of train.py and run it however you like.
Parameters:
--dataset_name, type=str, default='Face-1000', help='Face-1000 / Face-450'
--root_dir, type=str, default='./', help='The root directory of the entire project file'
--batchsize, type=int, default=32
--nThreads, type=int, default=4
--lr, type=float, default=0.0005, help='Learning rate of LSTM or MLP'
--epoches, type=int, default=300
--feature_num, type=int, default=16, help='Number of features in the last layer of the neural network'
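If you prefer changing settings in code rather than on the command line, argparse also accepts an explicit argument list. The snippet below is a hypothetical stand-in for the stage2 parser (flag names and defaults from the table above) showing how a couple of values could be overridden programmatically:

```python
import argparse

# Hypothetical stand-in for the stage2 parser in train.py;
# only the flags listed in the README are reproduced here.
parser = argparse.ArgumentParser(description='SLFIR stage2 training (sketch)')
parser.add_argument('--dataset_name', type=str, default='Face-1000')
parser.add_argument('--root_dir', type=str, default='./')
parser.add_argument('--batchsize', type=int, default=32)
parser.add_argument('--nThreads', type=int, default=4)
parser.add_argument('--lr', type=float, default=0.0005)
parser.add_argument('--epoches', type=int, default=300)
parser.add_argument('--feature_num', type=int, default=16)

# Override two values in code; everything else keeps its default.
args = parser.parse_args(['--dataset_name', 'Face-450', '--lr', '1e-4'])
print(args.dataset_name, args.lr, args.batchsize)
```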
We provide validation code for stage2 only; it comes with the TensorBoard records (in the run directory) and the log files from our training process.
To evaluate stage2, simply enter the following command in the terminal:
CUDA_VISIBLE_DEVICES=0 python train.py \
  --dataset_name Face-1000 \
  --root_dir {your_root_proj_path} \
  --batchsize 32 \
  --nThreads 4 \
  --lr 5e-4 \
  --max_epoch 300 \
  --feature_num 16
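At its core, evaluating sketch-based retrieval means ranking gallery photos by their feature-space distance to the sketch and checking whether the true photo lands in the top k. The following self-contained NumPy sketch illustrates that metric; the array shapes, the Euclidean distance, and the synthetic data are illustrative assumptions, not code from this repo:

```python
import numpy as np

def topk_accuracy(query_feats, gallery_feats, k=5):
    """Fraction of queries whose matching gallery item (same row index)
    appears among the k nearest gallery features by Euclidean distance."""
    # Pairwise squared distances, shape (num_queries, num_gallery).
    d = ((query_feats[:, None, :] - gallery_feats[None, :, :]) ** 2).sum(-1)
    # Indices of the k closest gallery items for each query.
    ranks = np.argsort(d, axis=1)[:, :k]
    # A hit occurs when the matching index is among those k.
    hits = (ranks == np.arange(len(query_feats))[:, None]).any(axis=1)
    return hits.mean()

# Synthetic demo: 100 gallery features of dimension 16 (cf. feature_num),
# with "sketch" queries simulated as lightly perturbed gallery features.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 16))
queries = gallery + 0.05 * rng.normal(size=gallery.shape)
acc1 = topk_accuracy(queries, gallery, k=1)
print(f'top-1 accuracy: {acc1:.2f}')
```

With such small perturbations the top-1 accuracy is near 1.0; real sketch features would of course be much noisier.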
- Release training code
- Release testing code
- Release pre-trained models
@INPROCEEDINGS{10095094,
  author={Dai, Dawei and Li, Yutang and Wang, Liang and Fu, Shiyu and Xia, Shuyin and Wang, Guoyin},
  booktitle={ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={Sketch Less Face Image Retrieval: A New Challenge},
  year={2023},
  pages={1-5},
  keywords={Image retrieval;Signal processing;Acoustics;Task analysis;Speech processing;Faces;Sketch-Based Image Retrieval;Face Sketch;Partial Sketch;Sequence Learning},
  doi={10.1109/ICASSP49357.2023.10095094}
}
We would like to thank all of the reviewers for their constructive comments, and CQUPT for its support.
This repo is currently maintained by Dawei Dai (dw_dai@163.com) and his master's student Yutang Li (2018211556@stu.cqupt.edu.cn).