Localize items in images and retrieve visually similar items from a dataset.
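A minimal sketch of the localization stage, assuming a torchvision Faster R-CNN pre-trained on COCO as a stand-in for a detector fine-tuned on one of the fashion datasets listed below; the image path is hypothetical:

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# COCO-pretrained detector as a placeholder; in practice it would be
# fine-tuned on ModaNet / DeepFashion2 clothing categories.
model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

img = Image.open("street_photo.jpg").convert("RGB")   # hypothetical path
with torch.no_grad():
    pred = model([transforms.ToTensor()(img)])[0]      # boxes, labels, scores

# Keep confident detections and crop them for the retrieval stage.
keep = pred["scores"] > 0.7
crops = [img.crop(tuple(box))
         for box in pred["boxes"][keep].round().int().tolist()]
```

The crops are then embedded and matched against a catalog, as in the retrieval sketch after the paper list.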
Papers:
- ModaNet: A Large-scale Street Fashion Dataset with Polygon Annotations. A large dataset with polygon annotations (from which bounding boxes can be derived). Pipeline: take images from the PaperDoll dataset -> select images with a single person -> keep high-quality images -> annotate.
- Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items. Uses spatial descriptors for style representation -> KNN to predict tags from similar images -> parses images by learning from similar images, combining global, nearest-neighbor, and transferred parse models.
- Where to Buy It: Matching Street Clothing Photos in Online Shops. Finds bounding boxes of items in street images -> retrieves similar shop items for each box, either from the FC6 features of a CNN or by learning a similarity metric with a sampling method (see the retrieval sketch after this list).
- Pinterest: automatic object detection for visual search.
- Pinterest Lens (visual search).
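A hedged sketch of the FC6-style retrieval described in Where to Buy It: embed each detected crop with the first fully connected layer of an ImageNet-pretrained VGG16 and rank catalog items by cosine similarity (the paper's learned-similarity variant would replace this with a trained metric network); the function names are illustrative:

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# VGG16's first fully connected layer plays the role of "fc6".
vgg = models.vgg16(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fc6_embedding(crop: Image.Image) -> torch.Tensor:
    """L2-normalized 4096-d fc6 activations for one cropped item."""
    x = preprocess(crop).unsqueeze(0)
    with torch.no_grad():
        h = torch.flatten(vgg.avgpool(vgg.features(x)), 1)
        h = vgg.classifier[0](h)              # fc6 layer
    return F.normalize(h, dim=1).squeeze(0)

def rank_shop_items(query: torch.Tensor, gallery: torch.Tensor) -> torch.Tensor:
    """Indices of gallery embeddings sorted by cosine similarity to the query."""
    return torch.argsort(gallery @ query, descending=True)
```

The gallery tensor would hold embeddings of the shop photos, precomputed with the same function.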
Datasets:
- https://github.com/eBay/modanet (annotations; see the loading sketch after this list)
- https://github.com/kyamagu/paperdoll/tree/master/data/chictopia (images)
- https://tianchi.aliyun.com/competition/entrance/231670/information (Tianchi competition)
- http://www.tamaraberg.com/street2shop/ (Street2Shop)
- https://storage.googleapis.com/openimages/web/visualizer/index.html?set=train&c=%2Fm%2F01h8tj (Open Images V4 dataset)
- https://github.com/switchablenorms/DeepFashion2 (DeepFashion2)
- https://fashionpedia.github.io/home/Fashionpedia_download.html (Fashionpedia)
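A small sketch of reading the ModaNet annotations, assuming the COCO-style JSON layout of the eBay/modanet release; the filename is hypothetical:

```python
import json
from collections import defaultdict

# Hypothetical filename; the eBay/modanet repo ships COCO-style instance JSON.
with open("modanet_instances_train.json") as f:
    coco = json.load(f)

categories = {c["id"]: c["name"] for c in coco["categories"]}

# Group annotations per image so each PaperDoll photo maps to its items.
anns_by_image = defaultdict(list)
for ann in coco["annotations"]:
    anns_by_image[ann["image_id"]].append(ann)

first_image = coco["images"][0]
for ann in anns_by_image[first_image["id"]]:
    x, y, w, h = ann["bbox"]                  # COCO boxes are [x, y, width, height]
    print(categories[ann["category_id"]], (x, y, w, h))
```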