TransEns-Network: An Optimized Light-weight Transformer and Feature Fusion Based Approach of Deep Learning Models for the Classification of Oral Cancer

Authors

  • Karnika Dwivedi Department of Computer Science & Engineering, KIET Group of Institutions, Ghaziabad, Delhi-NCR, India
  • Bharti Chugh Department of Computer Science & Engineering, KIET Group of Institutions, Ghaziabad, Delhi-NCR, India
  • Anugrah Srivastava School of Computer Science Engineering and Technology, Bennett University, Greater Noida, India
  • Jai Prakash Pandey Dr A P J Abdul Kalam Technical University, Lucknow, Uttar Pradesh, India

Keywords:

Deep learning, classification, feature fusion, vision transformer, oral cancer

Abstract

Oral cancer is one of the most dangerous types of cancer and a serious threat to human health. Diagnosing the cancer and its associated disorders at an early stage is essential to increase the survival rate. Transformer models have become popular in computer vision because of their ability to capture long-range relationships among data points, and the combination of CNNs and transformers has attracted extensive study for classifying medical images on limited datasets. In this work, a lightweight, fast, and robust automatic transformer-based network has been designed. The proposed network combines the feature-extraction capabilities of CNN models with a transformer model, creating a fusion of convolutional and transformer components for the classification of oral cancer images. The transformer network can capture both the local and global dependencies of the image features. The joint use of the transformer and CNN for extracting features from images reduces the computational cost and complexity and increases the performance of the presented model. The performance of the model is evaluated on an unseen test set from a publicly available oral cancer dataset. The results show that the combined CNN-transformer structure extracts more discriminative features, which improves the classification performance of the model. A comparative analysis with other state-of-the-art models shows that the proposed model achieves competitive performance among the considered models. In addition, the presented approach can be beneficial in a diagnosis system for the early detection of oral cancer, as it is effective and produces more accurate predictions.
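The fusion described in the abstract, where a CNN extracts local features and a transformer encoder models global dependencies among them before classification, can be illustrated with a short PyTorch sketch. This is a minimal illustrative example only: the backbone (ResNet-18), the embedding size, the number of attention heads and encoder layers, and the class name `CNNTransformerFusion` are assumptions for demonstration and do not reflect the actual TransEns-Network configuration, which is not detailed in the abstract.

```python
# Minimal sketch of a CNN + transformer fusion classifier (illustrative, not
# the authors' TransEns-Network). Backbone, dimensions, and depth are assumed.
import torch
import torch.nn as nn
import torchvision.models as models


class CNNTransformerFusion(nn.Module):
    def __init__(self, num_classes: int = 2, embed_dim: int = 256,
                 num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        # CNN backbone for local feature extraction (ResNet-18 chosen arbitrarily).
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # keep conv feature maps
        self.proj = nn.Conv2d(512, embed_dim, kernel_size=1)       # project to token dimension

        # Lightweight transformer encoder to model global dependencies among tokens.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=embed_dim * 2, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

        # Classification head over the pooled token representation.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)                          # (B, 512, H', W') local CNN features
        tokens = self.proj(feats)                    # (B, embed_dim, H', W')
        tokens = tokens.flatten(2).transpose(1, 2)   # (B, H'*W', embed_dim) token sequence
        tokens = self.transformer(tokens)            # global self-attention over spatial tokens
        pooled = tokens.mean(dim=1)                  # average pooling over tokens
        return self.head(pooled)                     # class logits (e.g., cancer vs. non-cancer)


if __name__ == "__main__":
    model = CNNTransformerFusion(num_classes=2)
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 2])
```

The key design point reflected in this sketch is that the CNN supplies local texture and shape features cheaply, while the small transformer encoder adds global context over the resulting token grid, keeping the overall parameter count and computation low compared with a full-size vision transformer.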


Published

2024-07-31

How to Cite

Karnika Dwivedi, Bharti Chugh, Anugrah Srivastava, & Jai Prakash Pandey. (2024). TransEns-Network: An Optimized Light-weight Transformer and Feature Fusion Based Approach of Deep Learning Models for the Classification of Oral Cancer. International Journal on Computational Modelling Applications, 1(1), 32–44. Retrieved from https://submissions.adroidjournals.com/index.php/ijcma/article/view/17

Section

Research Articles