World Journal of Dentistry
Volume 12 | Issue 3 | Year 2021

Periapical Lesion Diagnosis Support System Based on X-ray Images Using Machine Learning Technique

Vo TN Ngoc1, Do H Viet2, Le K Anh3, Dinh Q Minh4, Le L Nghia5, Hoang K Loan6, Tran M Tuan7, Tran T Ngan8, Nguyen T Tra9

1Department of Pediatric Dentistry, School of Odonto-stomatology, Hanoi Medical University, Dong Da, Hanoi, Vietnam
2,6Department of Oral Surgery, School of Odonto-stomatology, Hanoi Medical University, Dong Da, Hanoi, Vietnam
3,4School of Odonto-stomatology, Hanoi Medical University, Dong Da, Hanoi, Vietnam
5Department of Periodontics, School of Odonto-stomatology, Hanoi Medical University, Dong Da, Hanoi, Vietnam
7Department of Information System, Faculty of Information Technology, Thuyloi University, Dong Da, Hanoi, Vietnam
8Department of Computer Science, Faculty of Information Technology, Thuyloi University, Dong Da, Hanoi, Vietnam
9Department of Prosthodontics, School of Odonto-stomatology, Hanoi Medical University, Dong Da, Hanoi, Vietnam

Corresponding Author: Nguyen T Tra, Department of Prosthodontics, School of Odonto-stomatology, Hanoi Medical University, Dong Da, Hanoi, Vietnam, Phone: +84-963-036-443, e-mail: nguyenthutra@hmu.edu.vn

How to cite this article Ngoc VTN, Viet DH, Anh LK, et al. Periapical Lesion Diagnosis Support System based on X-ray Images Using Machine Learning Technique. World J Dent 2021;12(3):189–193.

Source of support: Department of Science and Technology, Hanoi City and School of Odonto-stomatology, Hanoi Medical University, Dong Da, Hanoi, Vietnam

Conflict of interest: None


Aim and objective: The application of artificial intelligence (AI) in diagnosis support is a new approach in telemedicine, particularly meaningful in disadvantaged areas that lack health workers. However, few studies have investigated the validity of AI in diagnosis support. Periapical lesions, a common dental disease, are conventionally detected by dentists through radiography. This study aimed to assess the application of AI in supporting the diagnosis of periapical lesions.

Materials and methods: One hundred and thirty periapical X-ray images were used to evaluate the sensitivity, specificity, and accuracy of the diagnoses provided by the DentaVN software. Diagnoses provided by dentists were used as the reference.

Results: The sensitivity, specificity, and accuracy of diagnoses of the software were 89.5, 97.9, and 95.6%, respectively.

Conclusion: DentaVN can be used as a support tool in diagnosing periapical lesions.

Clinical significance: To support the diagnosis of periapical diseases in disadvantaged areas that lack dentists.

Keywords: Bite-wing image, DentaVN, Disadvantaged areas, Periapical lesions, Support diagnosis.


In dentistry and medicine, diagnosis is the crucial step in planning treatment options. Accurate diagnosis requires not only clinical examination but also radiography. Vietnam, like many other developing countries, faces a lack of health workers in rural areas. Consultation with experienced clinicians is a possible option, but is not always available. Thus, the Vietnamese Ministry of Health has developed a project applying artificial intelligence (AI) to lesion detection in radiographs, to reduce diagnosis time and doctors' workload.

Nowadays, many computer programs have been developed to support diagnosis or directly detect diseases from images. For instance, in 2006, Polat and Güneş1 used the least squares support vector machine (LS-SVM) algorithm to classify breast cancer, achieving an accuracy of 98.53%. In 2017, Esteva et al.2 applied deep convolutional neural networks (CNNs) to a dataset of about 130,000 clinical images to classify skin lesions. The performance of this model was on par with experts' diagnoses.

In dentistry, machine learning has been applied to diagnosing common dentofacial diseases such as dental caries, maxillofacial cancers, and alveolar bone resorption,3 and to orthodontics.4 For dental caries, Lee et al.5 applied CNNs to a training sample of 3,000 periapical radiographs and achieved an area under the curve (AUC) of 84.5% for premolars and molars. CNNs were also applied to a training dataset of only 185 clinical images to detect early-stage dental caries.6 The performance of this model, assessed by AUC, was 83.6 and 85.6% for occlusal and proximal lesions, respectively. A CNN was also combined with the Bag of Visual Words model to diagnose third molar complications on dental X-ray images.7 The accuracy of the experiment on a real dental dataset was 84%.

Machine learning algorithms have also been applied to periapical lesion diagnosis using periapical X-ray images, panoramic images, and cone-beam computed tomography (CBCT).8–10 Periapical periodontitis is a common dental disease, affecting 2.8 to 12% of teeth.11,12 It is caused by pulp necrosis, trauma, or failed endodontic treatment. The disease can develop without any symptoms, even after endodontic treatment, and eventually leads to tooth extraction. On CBCT, Flores et al.13 used a semi-automatic solution combining graph-theoretic random walk segmentation with machine learning-based LDA and AdaBoost classifiers to diagnose periapical lesions; the leave-one-out cross-validation method was used to test all 17 CBCT images. The accuracy of this hybrid model was 94.1%. However, the limitations of CBCT are its high cost and higher radiation dose compared with panoramic or periapical X-rays. Also, the necessary equipment for CBCT is not available in every hospital and dental clinic. For panoramic radiographs, Ekert et al.14 used CNNs to diagnose periapical lesions with a training sample of 85 panoramic radiographs. The specificity was high (87%) but the sensitivity was low (65%). With periapical X-ray images, Wu et al.15 developed a program detecting periapical lesions using a test sample of 129 X-ray images, based on classifying histograms of image patches. However, this study did not evaluate the accuracy of the classification; it only investigated the performance of one algorithm, K-nearest neighbors, with different histogram similarity measures on periapical root data.

The benefits of periapical radiographs are their low cost for patients and dentists and their lower radiation dose compared with panoramic radiographs and CBCT. As a result, dentists usually prescribe periapical radiographs, especially for routine follow-up cases. Thus, software that automatically detects periapical lesions would save considerable time for patients and dentists.

Although there have been many machine learning applications for detecting periapical lesions, as mentioned above, the reliability of these applications still needs further assessment. We conducted this study to determine the sensitivity and specificity of a machine learning model for periapical lesion diagnosis on X-ray images of teeth with and without endodontic treatment. In this paper, a novel model using Faster R-CNN is proposed to diagnose periapical lesions. The model analyzes periapical X-ray images, which are affordable and widely applicable. To evaluate the proposed model, a real dataset consisting of 1,130 periapical X-ray images was collected from the High Technology Dental Center, School of Odonto-stomatology, Hanoi Medical University. A numerical comparison between experimental results and experts' diagnoses is also provided.


Data Collection

The study was conducted with a cross-sectional design at the School of Dentistry, Hanoi Medical University, from December 2019 to July 2020. Periapical images were collected using a FONA X70 intraoral X-ray machine and PSPIX imaging plates. Inclusion criteria were periapical X-ray images of permanent teeth, from patients aged >14 years, with good sharpness. Exclusion criteria were periapical X-ray images of tooth germs or images with distortion effects. Patients consented to the collection of images. Ethical approval (rhm 2019-179) was issued by the Professional Council of the School of Dentistry, Hanoi Medical University.

Two blinded, experienced endodontists (>5 years of practice) chose 1,000 periapical X-ray images, including 2,601 teeth without periapical lesions and 1,102 teeth with periapical lesions, for training the model. Images extracted from the X-ray machine were saved to a folder in *.jpg format on the hospital server. Initially, the two endodontists analyzed these images and marked the apical lesions. Then, the machine learning system read the images from the folder and executed the machine learning model. Another 130 periapical X-ray images, including 520 teeth, were used in the diagnostic study to test the trained model with accuracy, sensitivity, and specificity indices. In this study, the reference standard was the endodontists' diagnosis. Examples of collected images are presented in Figure 1.

Convolutional Neural Network

Convolutional neural networks16 are one of the modern tools of deep learning. CNNs have been used to design high-accuracy intelligent systems and big-data processing systems at companies such as Facebook, Google, and Amazon. CNNs are also applied to object detection problems, especially in images. The general structure of a CNN is presented in Figure 2A.

A CNN is a stack of convolutional layers. In our study, a non-linear activation function, ReLU, was used in the CNN to activate the weights of nodes. Each layer generates abstract information for the next layers using this function. In a feedforward neural network, the output of each layer is an input of the following layer (except for the last one). A feedforward neural network is also called a fully connected or affine network. In contrast to feedforward neural networks, layers in a CNN are connected by a convolution mechanism: each layer is the convolutional result of the previous layer. This results in local connections.

In image processing with a CNN, each neuron in the next layer is generated by applying a filter to a local region of the previous layer. The architecture of a typical layer in a CNN is presented in Figure 2B.

Convolutional layers in a CNN have parameters that are trained to self-adjust to extract the most relevant information without manual feature selection. In Figure 3, a black-and-white image is digitized into a 7 × 7 matrix. Each pixel in this image is the intersection of a row and a column, with its value set to 1 or 0. A sliding window (kernel or filter), here a 3 × 3 matrix, is used to detect features. The convolution is computed sequentially as the sum of element-wise products between the 3 × 3 kernel and each 3 × 3 region of the 7 × 7 matrix. The 5 × 5 matrix on the right side is generated by this process.
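The sliding-window computation described above can be sketched in a few lines of Python (a minimal illustration with NumPy; the binary input pattern and all-ones kernel are hypothetical placeholders, not the filters actually learned by the network):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2D convolution (cross-correlation, as used in CNNs):
    slide the kernel over the image and sum element-wise products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 7x7 binary image (checkerboard as a stand-in) and a 3x3 filter
image = np.array([[(r + c) % 2 for c in range(7)] for r in range(7)], dtype=float)
kernel = np.ones((3, 3))
feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)  # (5, 5) -- a 7x7 input and 3x3 kernel yield a 5x5 map
```

Each output cell corresponds to one placement of the 3 × 3 window, which is why the spatial size shrinks from 7 × 7 to 5 × 5.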

The pooling layer is another important layer in a CNN. It is used to reduce the size of the feature matrices. An example of a pooling layer is shown in Figure 2C.
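Max pooling, the most common variant, can likewise be sketched briefly (an illustrative example with made-up feature values, assuming non-overlapping 2 × 2 windows):

```python
import numpy as np

def max_pool2d(x, size=2):
    """Non-overlapping max pooling: keep the maximum of each size x size
    block, shrinking the feature map's height and width by that factor."""
    h, w = x.shape
    h, w = h - h % size, w - w % size            # trim so blocks divide evenly
    blocks = x[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

fm = np.array([[1, 3, 2, 0],
               [4, 2, 1, 5],
               [0, 6, 7, 1],
               [2, 1, 3, 8]], dtype=float)
print(max_pool2d(fm))  # [[4. 5.]
                       #  [6. 8.]]
```

The 4 × 4 map is reduced to 2 × 2, keeping only the strongest activation from each block.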

The Faster R-CNN17 algorithm has been successfully applied to object detection in images. Like Fast R-CNN, Faster R-CNN uses a CNN to classify objects and another component to localize the objects in images. However, the selective search algorithm of Fast R-CNN is replaced by a deep region proposal network in Faster R-CNN. The structure of Faster R-CNN in an object detection application is presented in Figure 3.

Periapical Disease Diagnosis Model

In this part, a new model for periapical disease diagnosis is introduced. The general scheme of the proposed model is presented in Flowchart 1.

In our study, the model was constructed by applying Faster R-CNN combined with experts' knowledge. Experts played important roles in collecting the data, labeling the lesion regions, and selecting the images. The main steps in designing the model are presented below.

Step 1: Parameter selection: The parameters of Faster R-CNN were selected.

Step 2: Model training: The model was trained on 1,000 X-ray images. Periapical lesions were marked on these images.

Step 3: Model testing: The model was applied on 130 unlabeled images. The output of this step was the result of periapical lesions detection.

Figs 1A to C: Examples of periapical X-ray images in different cases: (A) Qualified image (the image meets the criteria); (B) Distortion effect; (C) Tooth germs

Figs 2A to C: (A) Network structure of a CNN in object detection; (B) Architecture of a convolution layer; (C) An example of a pooling layer

Step 4: Model evaluation: The performance of the proposed model was evaluated with accuracy, sensitivity, and specificity indices. Moreover, a comparison of sensitivity and specificity across different areas is also presented.

Fig. 3: Network architecture Faster R-CNN

Flowchart 1: Flow diagram of periapical lesions diagnosis model using Faster R-CNN

Table 1: Experimental results obtained by applying the proposed model on the real dataset
               Teeth without root canal filling (%)    Teeth with root canal filling (%)    Total (%)
Sensitivity    82.7                                    96.2                                 89.5 (128/143)
Specificity    98.6                                    87.0                                 97.9 (369/377)
Accuracy       96.0                                    93.4                                 95.6 (497/520)
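The totals in Table 1 can be reproduced directly from the underlying counts (a small sketch; the true/false positive and negative counts are recovered from the fractions reported in the table):

```python
# From Table 1: 128/143 lesion teeth detected, 369/377 healthy teeth cleared
tp, fn = 128, 143 - 128   # true positives, false negatives
tn, fp = 369, 377 - 369   # true negatives, false positives

sensitivity = tp / (tp + fn)                 # share of lesions found
specificity = tn / (tn + fp)                 # share of healthy teeth cleared
accuracy = (tp + tn) / (tp + fn + tn + fp)   # share of all teeth classified correctly

print(f"{sensitivity:.1%} {specificity:.1%} {accuracy:.1%}")  # 89.5% 97.9% 95.6%
```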


Applying the proposed model to the collected data, the values of sensitivity, specificity, and accuracy are presented in Table 1. The dataset was divided into two groups: images of teeth without root canal filling (group I) and images of teeth with root canal filling (group II). The results were computed for each group, and the overall result is also summarized.

Among the 130 test images, there were 143 teeth with periapical lesions and 377 teeth without periapical lesions (Table 1). Based on the experiments, the sensitivity, specificity, and accuracy obtained from our model were 89.5, 97.9, and 95.6%, respectively. The sensitivity of group I was lower than that of group II (82.7% compared to 96.2%), while the specificity of group I was higher (98.6 vs 87.0%). The accuracy of group I was higher, but the difference was not statistically significant.

To evaluate the performance of our model further, it was run separately on different tooth areas: maxillary anterior, maxillary posterior, mandibular anterior, and mandibular posterior teeth. The results are shown in Figure 4.

In general, our model performed well in all areas. It had a high specificity for maxillary posterior teeth (100%) and a high sensitivity for mandibular anterior teeth (100%). The model achieved its best results for mandibular anterior teeth (>97% on all indices). The lowest accuracy was 84.8%, for mandibular posterior teeth. However, there were no statistically significant differences in sensitivity and specificity among the four areas.


The program was developed to be a diagnostic aid in the community: to reduce hospital load, reduce chair time, assist diagnosis in disadvantaged areas, and provide counseling for those who have not yet reached a dentist. Therefore, the program was made with the simplest possible usage process. Dentists can easily apply this technique after a single training session. The program requires only minimal computer skills: entering patient information, importing periapical X-ray images, and exporting and reading results. Note that the program is not a substitute for the dentist; the role of dentists in clinical examination, diagnosis, treatment planning, and treatment is irreplaceable.

The proposed software can provide a diagnosis with favorable accuracy even in areas with special anatomical landmarks, such as the sinus, mandibular nerve canal, and nasal cavity. Our study combines a modern machine learning technique with experts' knowledge. Experimental results show the high performance of the proposed model in different areas compared with the endodontists' diagnoses. Moreover, the sensitivity of detecting periapical lesions in our study (89.5%) was higher than that of Ekert's study using CNNs on panoramic films (65%), but lower than that of Lee's model using CBCT images (96.1%).18 However, our study has higher specificity (97.9%) than these two studies (87 and 77%). The differences can be explained by the different datasets and algorithms used in these studies. Moreover, periapical films are more accessible to dentists and patients, with low cost and a low radiation dose.

The accuracy for teeth with root canal filling was 93.4%, and that for teeth without root canal filling was 96%. The purpose of endodontic treatment is tooth conservation, but a proportion of periapical lesions still develop after endodontic treatment. Hence, all endodontically treated teeth need follow-up check-ups to determine whether they require additional treatment, such as apicoectomy or even extraction.

There are still some limitations to our study. First, to diagnose periapical lesions, a thorough clinical examination covering dental caries, tooth mobility, vertical percussion, sinus tracts, and radiography is required. The proposed software uses only a periapical X-ray, so it cannot provide a diagnosis reliable enough for treatment decisions. Second, in image processing with machine learning techniques, the more images in the dataset, the more effectively the model works. The dataset collected for constructing the model in this study is not large (only 1,000 training images).

Fig. 4: Comparison of sensitivity, specificity, and accuracy among different areas by applying the proposed model


In our study, AI with Faster R-CNN can predict apical lesions with acceptable results for teeth with and without root canal filling. Applying this method can reduce chair time and dentists' workload. However, further studies with larger samples are needed to increase accuracy.


Conceptualization, Vo Truong Nhu Ngoc; methodology, Vo Truong Nhu Ngoc and Do Hoang Viet; software, Tran Manh Tuan and Tran Thi Ngan; formal analysis, Do Hoang Viet, Nguyen Thu Tra, Le Kha Anh, and Dinh Quoc Minh; investigation, Nguyen Thu Tra; writing—original draft preparation, Do Hoang Viet and Tran Manh Tuan; writing—review and editing, Do Hoang Viet, Tran Manh Tuan, Tran Thi Ngan, and Nguyen Thu Tra; visualization, Nguyen Thu Tra; supervision, Vo Truong Nhu Ngoc; project administration, Vo Truong Nhu Ngoc; funding acquisition, Le Long Nghia and Hoang Kim Loan. All authors have read and agreed to the published version of the manuscript.


The authors would like to thank the Department of Science and Technology, Hanoi City, for its support of the major project under grant number 01C-08/12-2018-3.


1. Polat K, Güneş S. Breast cancer diagnosis using least square support vector machine. Digital Signal Processing 2007;17(4):694–701. DOI: 10.1016/j.dsp.2006.10.008.

2. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542(7639):115–118. DOI: 10.1038/nature21056.

3. Hung K, Montalvao C, Tanaka R, et al. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: a systematic review. Dentomaxillofac Radiol 2020;49(1):20190107. DOI: 10.1259/dmfr.20190107.

4. Kunz F, Stellzig-Eisenhauer A, Zeman F, et al. Artificial intelligence in orthodontics: Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J Orofac Orthop 2020;81(1):52–68.

5. Lee JH, Kim DH, Jeong SN, et al. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J Periodon Implant Sci 2018;48(2):114–123. DOI: 10.5051/jpis.2018.48.2.114.

6. Casalegno F, Newton T, Daher R, et al. Caries detection with near-infrared transillumination using deep learning. J Dent Res 2019;98(11):1227–1233. DOI: 10.1177/0022034519871884.

7. Ngoc VTN, Agwu AC, Son LH, et al. The combination of adaptive convolutional neural network and bag of visual words in automatic diagnosis of third molar complications on dental x-ray images. Diagnostics (Basel, Switzerland) 2020;10(4):209.

8. Carmody D, McGrath S, Dunn S, et al. Machine classification of dental images with visual search. Acad Radiol 2002;8(12):1239–1246. DOI: 10.1016/S1076-6332(03)80706-7.

9. Mol A, van der Stelt PF. Application of computer-aided image interpretation to the diagnosis of periapical bone lesions. Dento Maxillo Facial Radiol 1992;21(4):190–194. DOI: 10.1259/dmfr.21.4.1299632.

10. Okada K, Rysavy S, Flores A, et al. Noninvasive differential diagnosis of dental periapical lesions in cone-beam CT scans. Med Phys 2015;42(4):1653–1665. DOI: 10.1118/1.4914418.

11. Berlinck T, Tinoco JM, Carvalho FL, et al. Epidemiological evaluation of apical periodontitis prevalence in an urban Brazilian population. Brazil Oral Res 2015;29(1):51. DOI: 10.1590/1807-3107BOR-2015.vol29.0051.

12. Ödesjö B, Helldén L, Salonen L, et al. Prevalence of previous endodontic treatment, technical standard and occurrence of periapical lesions in a randomly selected adult, general population. Endod Dent Traumatol 1990;6(6):265–272. DOI: 10.1111/j.1600-9657.1990.tb00430.x.

13. Flores A, Rysavy S, Enciso R, et al. Non-invasive differential diagnosis of dental periapical lesions in cone-beam CT. 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro; 2009. IEEE.

14. Ekert T, Krois J, Meinhold L, et al. Deep learning for the radiographic detection of apical lesions. J Endodont 2019;45(7):917–922. e5. DOI: 10.1016/j.joen.2019.03.016.

15. Wu Y, Xie F, Yang J, et al. Computer aided periapical lesion diagnosis using quantized texture analysis. Medical imaging 2012: computer-aided diagnosis. International Society for Optics and Photonics 2012;8315:(42).

16. Deperlioglu O, Mahallesi E, Gazlıgöl Y, et al. Classification of segmented phonocardiograms by convolutional neural networks. Broad Res Artific Intellig Neurosci 2019;10:5–13.

17. Mo X, Tao K, Wang Q, et al. An Efficient Approach for Polyps Detection in Endoscopic Videos Based on Faster R-CNN. 24th International Conference on Pattern Recognition 2018.

18. Lee JH, Kim DH, Jeong SN. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis 2020;26(1):152–158. DOI: 10.1111/odi.13223.

© The Author(s). 2021 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and non-commercial reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.