A Picture is worth a thousand colors - Using Computer Vision to Color Tag E-Commerce Products
© 2024 by IJCTT Journal
Volume-72 Issue-10
Year of Publication : 2024
Authors : Sachin More, Jeyvinth Manoj Rayan
DOI : 10.14445/22312803/IJCTT-V72I10P115
How to Cite?
Sachin More, Jeyvinth Manoj Rayan, "A Picture is worth a thousand colors - Using Computer Vision to Color Tag E-Commerce Products," International Journal of Computer Trends and Technology, vol. 72, no. 10, pp. 94-100, 2024. Crossref, https://doi.org/10.14445/22312803/IJCTT-V72I10P115
Abstract
Product metadata tagging is a core requirement for the e-commerce industry: it fuels product discoverability and drives key initiatives such as personalized recommendations, search relevancy, pricing, SEO, and reduced customer dissatisfaction. Color tags are especially prominent in the Fashion category, where the visual appearance of a product is given high precedence. In a marketplace setup, it is difficult to ensure good-quality color tagging for products owing to the scale of the catalog, language constraints, manual tagging errors, and similar factors.
In this work, we develop models that extract the dominant colors of apparel from product images and enable automatic tagging of products with these colors. Our approach has the following steps: 1) the U2-Net image segmentation algorithm segments the foreground from the fashion image; 2(a) the hex color codes and RGB values of the extracted foreground are computed, and the dominant RGB value is mapped to the nearest standardized color name using a KD-tree-based clustering method; 2(b) the global color is predicted using a classification approach with a fine-tuned EfficientNet model. We use this pipeline to predict color labels for a test dataset that has pre-tagged, human-validated color labels, which serves as our golden set for validation. We assess model performance by evaluating the misclassification rate of our predictions against this golden set. The approach is expected to impute 26% of missing color tags and flag 11% of mislabeled color tags. This further translates into about 5% of search results showing more relevant products, leading to a considerable increase in conversion owing to the improved relevance.
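As a concrete illustration of step 2(a), a minimal sketch of the dominant-color extraction and nearest-color-name lookup is given below. The palette, the KMeans parameters, and the helper names (dominant_rgb, to_hex, nearest_color_name) are illustrative assumptions rather than the authors' implementation, and the foreground mask is assumed to come from the U2-Net segmentation in step 1.

# Minimal sketch, assuming a SciPy KD-tree for the nearest-color lookup and
# scikit-learn KMeans for dominant-color estimation on the segmented foreground.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

# Illustrative table of standardized color names and their RGB anchors (assumption).
PALETTE = {
    "black": (0, 0, 0), "white": (255, 255, 255), "red": (220, 20, 60),
    "green": (34, 139, 34), "blue": (30, 100, 200), "yellow": (255, 215, 0),
    "brown": (139, 69, 19), "grey": (128, 128, 128),
}
NAMES = list(PALETTE)
TREE = cKDTree(np.array([PALETTE[n] for n in NAMES], dtype=float))

def dominant_rgb(image_rgb, mask, k=5):
    # Keep only foreground pixels (mask produced by the segmentation step),
    # cluster them, and return the centroid of the largest cluster as the dominant color.
    fg = image_rgb[mask > 0.5].astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(fg)
    counts = np.bincount(km.labels_, minlength=k)
    return km.cluster_centers_[counts.argmax()]

def to_hex(rgb):
    # Hex code of an RGB triple, e.g. (220, 20, 60) -> "#dc143c".
    r, g, b = [int(v) for v in np.clip(np.round(rgb), 0, 255)]
    return f"#{r:02x}{g:02x}{b:02x}"

def nearest_color_name(rgb):
    # KD-tree query for the closest standardized color name.
    _, idx = TREE.query(np.asarray(rgb, dtype=float))
    return NAMES[int(idx)]

# Usage: given an HxWx3 RGB image and its HxW foreground mask,
# rgb = dominant_rgb(image, mask); tag = nearest_color_name(rgb); hex_code = to_hex(rgb)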
Keywords
Image Segmentation, Color Clustering, Dominant Color Extraction, E-commerce Fashion Color Extraction.