Toward Adaptive Manufacturing Development: Implementation of Artificial Intelligence for Identifying Leather Defects
Abstract
Artificial intelligence has proven to be a powerful approach for solving a wide range of problems, and several studies have already applied it to leather inspection. In this research, we employed the NASNet architecture, fine-tuned with transfer learning, to distinguish types of leather defects. We used 3,600 images distributed across six classes: folding marks, grain off, growth marks, loose grains, pinhole, and non-defective. Our proposed solution achieved a training accuracy of 0.9788 with a loss of 0.0198, a maximum validation accuracy of 0.8059 with a loss of 0.2126, and a test accuracy of 0.8603 with a loss of 0.1603. These results indicate that the proposed solution is suitable for recognizing the characteristics of leather defects and distinguishing between them.
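The fine-tuning transfer-learning setup described above can be sketched with the Keras Applications API. This is a minimal illustration, not the authors' exact implementation: the input size, optimizer, learning rate, and the choice of the NASNetMobile variant are assumptions, and only the six-class defect taxonomy and the use of ImageNet-pretrained NASNet come from the abstract.

```python
from tensorflow import keras

# The six defect classes named in the abstract.
CLASSES = ["folding marks", "grain off", "growth marks",
           "loose grains", "pinhole", "non-defective"]

def build_leather_classifier(num_classes: int = len(CLASSES),
                             weights: str = "imagenet"):
    """Build a NASNetMobile-based classifier for fine-tuning.

    `weights="imagenet"` loads the pretrained backbone; the input
    resolution and optimizer settings below are illustrative choices.
    """
    # Pretrained backbone without its ImageNet classification head;
    # global average pooling flattens the final feature maps.
    base = keras.applications.NASNetMobile(
        include_top=False,
        weights=weights,
        input_shape=(224, 224, 3),
        pooling="avg",
    )
    # Fine-tuning: leave all backbone layers trainable so they are
    # updated on the leather-defect data (rather than frozen feature
    # extraction).
    base.trainable = True

    # New softmax head for the six defect classes.
    outputs = keras.layers.Dense(num_classes, activation="softmax")(base.output)
    model = keras.Model(base.input, outputs)

    model.compile(
        optimizer=keras.optimizers.Adam(learning_rate=1e-4),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

Training would then proceed with `model.fit(...)` on the labeled defect images, e.g. loaded via `keras.utils.image_dataset_from_directory`.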
Copyright (c) 2023 Jurnal Ecotipe (Electronic, Control, Telecommunication, Information, and Power Engineering)
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright in each article is the property of the author.
- The author acknowledges that the Jurnal Ecotipe (Electronic, Control, Telecommunication, Information, and Power Engineering) has the right of first publication under a Creative Commons Attribution 4.0 International License.
- The author may distribute the published manuscript separately and arrange for its non-exclusive distribution in other versions (for example, depositing it in the author's institutional repository or publishing it in a book), with an acknowledgement that the manuscript was first published in the Jurnal Ecotipe (Electronic, Control, Telecommunication, Information, and Power Engineering).