| Authors | Holim Lim\*, Jeeseung Han\*, Sang-goo Lee |
| --- | --- |
| Keywords | Multi-label classification, semantic segmentation, fashion |
| Publication | International Conference on Ubiquitous Information Management and Communication (IMCOM 2019), pp. 1092–1099 |
* These authors contributed equally.
The rapid growth of the online fashion market has raised the demand for fashion technologies such as clothing attribute tagging. However, handling fashion image data is challenging, since fashion images often contain irrelevant backgrounds and exhibit various deformations. In this paper, we introduce SisterNetwork, a deep learning model that tackles the multi-label classification task of fashion attribute tagging. The proposed model consists of two different CNNs, leveraging both the original image and its semantic segmentation. We evaluate our model on the DCSA dataset, which contains tagged fashion images, and achieve state-of-the-art performance on the multi-label classification task.
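The two-branch idea in the abstract, fusing features from the original image with features from its semantic segmentation and predicting each attribute independently, can be sketched in miniature. The following is a hypothetical illustration, not the authors' implementation: the CNN branches are abstracted away as precomputed feature vectors, and `fuse_and_tag`, its weights, and its thresholds are all assumed names. What it does show is the standard multi-label setup: fused features feed independent per-label sigmoid scores, each thresholded separately.

```python
import math


def sigmoid(x):
    """Logistic function mapping a logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))


def fuse_and_tag(image_features, seg_features, weights, bias, threshold=0.5):
    """Hypothetical sketch of multi-label attribute tagging.

    image_features / seg_features stand in for the outputs of the two
    CNN branches; here they are plain lists. The fused vector is scored
    by one linear classifier per attribute, and each sigmoid score is
    thresholded independently (multi-label, not softmax).
    """
    fused = image_features + seg_features  # simple concatenation fusion
    tags = []
    for w_row, b in zip(weights, bias):
        logit = sum(w * f for w, f in zip(w_row, fused)) + b
        tags.append(sigmoid(logit) >= threshold)
    return tags
```

For example, with a 2-D feature from each branch and two attribute classifiers, `fuse_and_tag([1.0, 0.0], [0.0, 1.0], [[5, 0, 0, 0], [0, 0, 0, -5]], [0.0, 0.0])` turns one attribute on and the other off, because each label is decided by its own sigmoid rather than competing in a single softmax.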