Learning Attribute Representations With Localization for Flexible Fashion Search
Title:
Learning Attribute Representations With Localization for Flexible Fashion Search
Other Titles:
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Publication Date:
18 June 2018
Citation:
Ak, Kenan E., Ashraf A. Kassim, Joo Hwee Lim, and Jo Yew Tham. "Learning Attribute Representations with Localization for Flexible Fashion Search." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7708-7717. 2018.
Abstract:
In this paper, we investigate ways of conducting a detailed fashion search using query images and attributes. A credible fashion search platform should be able to (1) find images that share the same attributes as the query image, (2) allow users to manipulate certain attributes, e.g., changing the collar attribute from round to v-neck, and (3) handle region-specific attribute manipulations, e.g., replacing the color attribute of the sleeve region without changing the color attribute of other regions. A key challenge is that fashion products have multiple attributes, and it is important for each of these attributes to have representative features. To address these challenges, we propose FashionSearchNet, which uses a weakly supervised localization method to extract attribute regions. By doing so, unrelated features can be ignored, thus improving similarity learning. FashionSearchNet also incorporates a new procedure that enables region awareness, allowing it to handle region-specific requests. FashionSearchNet outperforms the most recent fashion search techniques and is shown to be able to carry out different search scenarios using dynamic queries.
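The abstract's core idea, pooling each attribute's features only from its localized region and swapping one attribute's representation to form a manipulated query, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the shapes, the class-activation-map style heatmaps, and the top-quartile masking threshold are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (not from the paper): a conv feature map and
# per-attribute classifier weights, as in class-activation-map style
# weakly supervised localization.
H, W, C = 7, 7, 16      # spatial grid and channels of the conv map
NUM_ATTRS = 3           # e.g. collar, sleeve, color

conv_map = rng.random((H, W, C))
attr_weights = rng.random((NUM_ATTRS, C))

def attribute_heatmap(conv_map, w):
    """Project one attribute's channel weights onto the spatial grid."""
    return conv_map @ w                            # (H, W)

def localized_feature(conv_map, w):
    """Pool conv features only where the attribute's heatmap is strong,
    so features from unrelated regions are ignored."""
    heat = attribute_heatmap(conv_map, w)
    mask = heat >= np.quantile(heat, 0.75)         # keep top-25% cells (assumed threshold)
    return conv_map[mask].mean(axis=0)             # (C,) region feature

# One feature vector per attribute; a query concatenates them.
feats = [localized_feature(conv_map, w) for w in attr_weights]

# Attribute manipulation: swap one attribute's representation for a
# target one (e.g. "round collar" -> "v-neck") before searching.
target_collar = rng.random(C)                      # stand-in for a learned prototype
query = np.concatenate([target_collar] + feats[1:])
print(query.shape)
```

Retrieval would then rank gallery items by distance to `query`; because each segment of the vector comes from its own localized region, only the manipulated attribute changes the match.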
License type:
PublisherCopyrights
Funding Info:
Description:
(c) 2018 IEEE.
ISSN:
2575-7075
1063-6919
Files uploaded:

File: cvpr-final.pdf (3.02 MB, PDF)