Diagnosing Gingiva Disease Using Artificial Intelligence Techniques
DOI: https://doi.org/10.24237/djes.2024.18211
Keywords: InceptionV3, MobileNet, Periodontal Diseases, Sequential, VGG16
Abstract
Gingival and periodontal diseases, such as gingivitis and periodontitis, are critical public health concerns that can lead to severe complications if left untreated. Early and precise diagnosis is crucial to mitigate the progression of these conditions and improve oral health outcomes. This study investigates the application of convolutional neural networks (CNNs) to diagnosing gingival diseases from medical images, including X-rays and intraoral photographs. Several CNN architectures, including VGG16, a Sequential CNN, MobileNet, and InceptionV3, were evaluated for their performance in classifying gingival conditions, and a voting method is proposed to enhance the prediction. MobileNet emerged as the most effective individual model, achieving a test accuracy of 92.73%, and the suggested method relies mainly on its positive result: when MobileNet's result is false, the process falls back to the voting result of the other models. This boosts the accuracy to 96%, surpassing the individual models in precision and recall. Pre-processing techniques such as normalization in the CIELAB color space and data augmentation significantly enhanced model accuracy. The study employed robust evaluation methods, including 10-fold cross-validation and hyperparameter tuning, to ensure model reliability and generalizability. The findings highlight the transformative potential of AI-powered diagnostic tools in dental healthcare. By leveraging lightweight and efficient architectures such as MobileNet, these tools can be deployed in resource-limited settings, offering real-time diagnostic support to healthcare professionals. Future work will focus on expanding datasets, exploring ensemble models, and improving interpretability to further enhance diagnostic accuracy and clinical applicability.
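The abstract does not spell out how the MobileNet-first fallback voting is implemented, so the following is a minimal sketch only. It assumes each model exposes softmax probability vectors and approximates "MobileNet's result is false" with a confidence threshold; the threshold value, the function name ensemble_predict, and the NumPy-only interface are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the fallback-voting rule described in the abstract.
# Assumption: low MobileNet confidence stands in for a "false" result;
# the 0.90 threshold is illustrative only.
import numpy as np

def ensemble_predict(mobilenet_probs, other_probs, conf_threshold=0.90):
    """Return one class index per sample.

    mobilenet_probs : (n_samples, n_classes) softmax outputs of MobileNet.
    other_probs     : list of (n_samples, n_classes) arrays from the other
                      models (e.g. VGG16, Sequential CNN, InceptionV3).
    """
    primary = np.argmax(mobilenet_probs, axis=1)
    confidence = np.max(mobilenet_probs, axis=1)

    # Majority vote among the other models (ties broken by lowest class index).
    votes = np.stack([np.argmax(p, axis=1) for p in other_probs], axis=1)
    n_classes = mobilenet_probs.shape[1]
    vote_counts = np.apply_along_axis(
        lambda row: np.bincount(row, minlength=n_classes), 1, votes)
    fallback = np.argmax(vote_counts, axis=1)

    # Trust MobileNet when it is confident, otherwise use the vote.
    return np.where(confidence >= conf_threshold, primary, fallback)

# Usage with random probabilities for 4 samples and 2 classes.
rng = np.random.default_rng(0)
mobilenet = rng.dirichlet(np.ones(2), size=4)
others = [rng.dirichlet(np.ones(2), size=4) for _ in range(3)]
print(ensemble_predict(mobilenet, others))
```

A MobileNet-first design keeps the common path lightweight, which matches the abstract's point about deployment in resource-limited settings; the heavier models are consulted only when the primary prediction is not trusted.

The CIELAB normalization step is likewise only named in the abstract. Below is a hedged sketch using scikit-image's rgb2lab; the channel scaling is an illustrative assumption and may differ from the authors' exact pre-processing pipeline.

```python
# Minimal sketch of CIELAB-based normalization; scaling choices are assumptions.
import numpy as np
from skimage import color

def normalize_cielab(rgb_image):
    """Convert an RGB image (HxWx3) to CIELAB and scale channels to roughly [0, 1]."""
    lab = color.rgb2lab(rgb_image)            # L in [0, 100]; a, b roughly in [-128, 127]
    lab[..., 0] /= 100.0                      # scale lightness
    lab[..., 1:] = (lab[..., 1:] + 128.0) / 255.0  # shift and scale chroma channels
    return lab.astype(np.float32)
```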
License
Copyright (c) 2025 Rana Khalid Sabri, Lujain Younis Abdulkadir, AbdulSattar Mohammed Khidhir, Hiba Abdulkareem Saleh

This work is licensed under a Creative Commons Attribution 4.0 International License.