
Research
The TEAM Project aims to explore the use of artificial intelligence (AI) in dentistry and maxillofacial surgery, focusing on developing tools for pathology screening, clinical decision support, and surgical planning.
By integrating AI into these critical areas, the project seeks to enhance diagnostic accuracy, improve treatment outcomes, and streamline surgical procedures.
Our research has the potential to support clinicians in making more informed decisions, addressing some of the existing challenges in the field.

Publications
An Examination of Temporomandibular Joint Disc Displacement through Magnetic Resonance Imaging by Integrating Artificial Intelligence: Preliminary Findings
Background and Objectives: This research aimed to construct a fully automated temporomandibular joint disc position identification system that could assist with the magnetic resonance imaging diagnosis of disc displacement on oblique sagittal and oblique coronal images.
Materials and Methods: The study included fifty subjects with magnetic resonance imaging scans of the temporomandibular joint. Oblique sagittal and coronal sections of the magnetic resonance imaging scans were analyzed. Investigations were performed on the right and left coronal images with a closed mouth, as well as right and left sagittal images with closed and open mouths. Three hundred sagittal and coronal images were employed to train the artificial intelligence algorithm.
Results: The fully automated articular disc identification method achieved an accuracy of 81%.
Conclusions: An automated and accurate evaluation of temporomandibular joint disc position was developed using both oblique sagittal and oblique coronal magnetic resonance images.
Published in
Almășan O, Mureșanu S, Hedeșiu P, Cotor A, Băciuț M, Roman R, TEAM Project Group. An Examination of Temporomandibular Joint Disc Displacement through Magnetic Resonance Imaging by Integrating Artificial Intelligence: Preliminary Findings. Medicina. 2024; 60(9):1396. https://doi.org/10.3390/medicina60091396
Teeth segmentation and carious lesions segmentation in panoramic X-ray images using CariSeg, a networks’ ensemble
Background: Dental cavities are common oral diseases that can lead to pain, discomfort, and eventually, tooth loss. Early detection and treatment of cavities can prevent these negative consequences. We propose CariSeg, an intelligent system composed of four neural networks that detects cavities in dental X-rays with 99.42% accuracy.
Method: The first model of CariSeg, trained using the U-Net architecture, segments the area of interest, the teeth, and crops the radiograph around it. The next component, an ensemble of three architectures (U-Net, Feature Pyramid Network, and DeepLabV3), segments the carious lesions. For tooth identification, two merged datasets were used: the Tufts Dental Database, consisting of 1000 panoramic radiographs, and another dataset of 116 anonymized panoramic X-rays taken at Noor Medical Imaging Center, Qom. For carious lesion segmentation, a dataset of 150 panoramic X-ray images was acquired from the Department of Oral and Maxillofacial Surgery and Radiology, Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca.
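The two-stage design described above can be sketched in a few lines of code. The snippet below is purely illustrative and is not the authors' implementation: it assumes the segmentation_models_pytorch library, and the encoder choice, thresholds, input sizes, and (untrained) weights are placeholders.

```python
# Illustrative sketch of a CariSeg-style two-stage pipeline (not the authors' code).
import torch
import torch.nn.functional as F
import segmentation_models_pytorch as smp

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: U-Net segments the teeth so the radiograph can be cropped to the region of interest.
# In practice, trained weights would be loaded here.
tooth_net = smp.Unet(encoder_name="resnet34", in_channels=1, classes=1).to(device).eval()

# Stage 2: ensemble of three architectures for carious-lesion segmentation.
lesion_nets = [
    smp.Unet(encoder_name="resnet34", in_channels=1, classes=1),
    smp.FPN(encoder_name="resnet34", in_channels=1, classes=1),
    smp.DeepLabV3(encoder_name="resnet34", in_channels=1, classes=1),
]
lesion_nets = [m.to(device).eval() for m in lesion_nets]

@torch.no_grad()
def segment_caries(xray: torch.Tensor) -> torch.Tensor:
    """xray: (1, 1, H, W) grayscale panoramic radiograph in [0, 1], H and W divisible by 32."""
    tooth_mask = torch.sigmoid(tooth_net(xray)) > 0.5
    ys, xs = torch.where(tooth_mask[0, 0])
    cropped = xray[:, :, ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Resize the crop to a fixed size accepted by all three networks.
    cropped = F.interpolate(cropped, size=(512, 512), mode="bilinear", align_corners=False)
    # Average the probability maps of the three networks (simple soft-voting ensemble).
    probs = torch.stack([torch.sigmoid(m(cropped)) for m in lesion_nets]).mean(dim=0)
    return probs > 0.5
```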
Results: The experiments demonstrate that our approach achieves 99.42% accuracy and a mean Dice coefficient of 68.2%.
Conclusions: AI helps detect carious lesions by analyzing dental X-rays and identifying cavities that might be missed by human observers, leading to earlier detection and treatment of cavities and better oral health outcomes.
Published in
Mărginean AC, Mureşanu S, Hedeşiu M, Dioşan L. Teeth segmentation and carious lesions segmentation in panoramic X-ray images using CariSeg, a networks’ ensemble. Heliyon. 2024 May 10;10(10):e30836. doi: 10.1016/j.heliyon.2024.e30836. PMID: 38803980; PMCID: PMC11128823.
Short-term aesthetic rehabilitation with 3D printed Snap-On Smile devices – literature review and a case report
Aim of the study: The main objective of this paper is to perform an up-to-date literature review of the application and implications of using Snap-On Smile devices for short-term aesthetic rehabilitation, as well as to showcase the 3D printing workflow in manufacturing these devices through a case study.
Materials and methods: The present review was conducted following the PRISMA-P structure for systematic reviews, using three electronic databases (PubMed, Google Scholar, and EMBASE) to search the literature from 2000 to 2024 with the following MeSH terms: snap on smile, aesthetic, removable, device. Out of 81 results, after discarding duplicates, a total of 6 articles were eligible and included in the review.
Results: Constructed from durable materials like crystallized acetyl resin or PMMA, the Snap-On Smile offers up to five years of use. Manufactured through advanced techniques such as injection moulding, CAD/CAM milling, or 3D printing, it provides an aesthetic solution without necessitating tooth preparation. The case study of a 59-year-old female illustrates its efficacy in providing immediate aesthetic enhancement for a significant social event.
Conclusions: The Snap-On Smile has emerged as a significant advancement in aesthetic dentistry, balancing the desire for improved dental aesthetics with the need to preserve natural tooth structures. Its ease of application and non-invasive nature make it a suitable option for many patients, though its limitations necessitate careful consideration in complex cases.
Published in
Burde AV, Frățilă C, Varvară EB, Varvară AV. Short-term aesthetic rehabilitation with 3D printed snap-on smile devices – literature review and a case report. Romanian Journal of Oral Rehabilitation. 2024 Apr-Jun;16(2). DOI: 10.6261/RJOR.2024.2.16.58
Automating Dental Condition Detection on Panoramic Radiographs: Challenges, Pitfalls, and Opportunities
Background/Objectives: The integration of AI into dentistry holds promise for improving diagnostic workflows, particularly in the detection of dental pathologies and pre-radiotherapy screening for head and neck cancer patients. This study aimed to develop and validate an AI model for detecting various dental conditions, with a focus on identifying teeth at risk prior to radiotherapy.
Methods: A YOLOv8 model was trained on a dataset of 1628 annotated panoramic radiographs and externally validated on 180 radiographs from multiple centers. The model was designed to detect a variety of dental conditions, including periapical lesions, impacted teeth, root fragments, prosthetic restorations, and orthodontic devices.
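As an illustration of this training setup, the sketch below shows how a YOLOv8 detector can be fine-tuned and externally validated with the ultralytics package. It is not the study's actual training script; the dataset YAML files, image names, and hyperparameters are assumed placeholders.

```python
# Illustrative sketch: fine-tuning a YOLOv8 detector on annotated panoramic radiographs.
from ultralytics import YOLO

# Start from COCO-pretrained weights and fine-tune on the dental dataset.
model = YOLO("yolov8m.pt")
model.train(
    data="panoramic_dental.yaml",  # hypothetical file listing train/val images and class names
    epochs=100,
    imgsz=1280,  # panoramic radiographs are wide; a larger input size helps smaller findings
    batch=8,
)

# External validation on radiographs from other centers (hypothetical dataset file).
metrics = model.val(data="external_centers.yaml")
print(metrics.box.map50)  # mean average precision at IoU 0.5

# Inference on a single radiograph.
results = model("patient_001_panoramic.png")
results[0].show()
```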
Results: The model showed strong performance in detecting implants, endodontic treatments, and surgical devices, with precision and recall values exceeding 0.8 for several conditions. However, performance declined during external validation, highlighting the need for improvements in generalizability.
Conclusions: YOLOv8 demonstrated robust detection capabilities for several dental conditions, especially on the training data. However, further refinement is needed to enhance generalizability to external datasets and to improve performance for conditions such as periapical lesions and bone loss.
Published in
Mureșanu S, Hedeșiu M, Iacob L, Eftimie R, Olariu E, Dinu C, Jacobs R, On Behalf Of Team Project Group. Automating Dental Condition Detection on Panoramic Radiographs: Challenges, Pitfalls, and Opportunities. Diagnostics (Basel). 2024 Oct 21;14(20):2336. doi: 10.3390/diagnostics14202336. PMID: 39451659; PMCID: PMC11507083.
A Vision-Guided Robotic System for Safe Dental Implant Surgery
Background: Recent advancements in dental implantology have significantly improved outcomes, with success rates of 90-95% over a 10-year period. Key improvements include enhanced preplanning processes, such as precise implant positioning, model selection, and optimal insertion depth. However, challenges remain, particularly in achieving correct spatial positioning and alignment of implants for optimal occlusion. These challenges are pronounced in patients with reduced bone substance or complex anthropometric features, where even minor misalignments can result in complications or defects.
Methods: This paper introduces a vision-guided robotic system designed to improve spatial positioning accuracy during dental implant surgery. The system incorporates advanced force-feedback control to regulate the pressure applied to bone, minimizing the risk of bone damage. A preoperative CBCT scan, combined with real-time images from a robot-mounted camera, guides implant positioning. A personalized marker holder guide, developed from the initial CBCT scan, is used for patient-robot calibration. The robot-mounted camera provides continuous visual feedback of the oral cavity during surgery, enabling precise registration of the patient with the robotic system.
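The patient-robot registration step relies on matching fiducials from the marker holder, as seen by the robot-mounted camera, with their positions in the preoperative CBCT. A generic way to estimate such a rigid transform is the Kabsch (SVD-based) method sketched below; this is an illustrative formulation, not the authors' implementation, and all coordinates are hypothetical.

```python
# Illustrative sketch of rigid point-set registration between the CBCT (planning) frame
# and the camera/robot frame, using the Kabsch/SVD method. Not the authors' implementation.
import numpy as np

def rigid_transform(planning_pts: np.ndarray, camera_pts: np.ndarray):
    """Both inputs are (N, 3) arrays of corresponding fiducial coordinates."""
    cp, cc = planning_pts.mean(axis=0), camera_pts.mean(axis=0)
    H = (planning_pts - cp).T @ (camera_pts - cc)   # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                         # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cc - R @ cp
    return R, t                                      # camera_pt ≈ R @ planning_pt + t

# Hypothetical fiducials on the marker holder, expressed in both frames.
planning = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
rotation_90_z = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
camera = planning @ rotation_90_z.T + np.array([5.0, 2.0, 30.0])

R, t = rigid_transform(planning, camera)
# Map a planned implant entry point from the CBCT frame into the robot frame.
entry_point_cbct = np.array([3.0, 4.0, 1.0])
entry_point_robot = R @ entry_point_cbct + t
```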
Results: Initial experiments were conducted on a 3D-printed mandible using a personalized marker holder. Following successful patient-robot registration, the robotic system autonomously performed implant drilling. To evaluate the accuracy of the robotic-assisted procedure, further tests were conducted on 40 identical molds, followed by measurements of implant positioning. The results demonstrated improved positioning accuracy compared to the manual procedure.
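For context, implant positioning accuracy in studies of this kind is commonly reported as entry-point, apex, and angular deviations between the planned and placed implant axes. The snippet below sketches these standard metrics with hypothetical coordinates; it does not reproduce the paper's measurement protocol.

```python
# Illustrative sketch of standard implant deviation metrics (hypothetical values).
import numpy as np

def implant_deviation(planned_entry, planned_apex, placed_entry, placed_apex):
    planned_entry, planned_apex = np.asarray(planned_entry, float), np.asarray(planned_apex, float)
    placed_entry, placed_apex = np.asarray(placed_entry, float), np.asarray(placed_apex, float)
    entry_dev = np.linalg.norm(placed_entry - planned_entry)   # mm at the crestal entry point
    apex_dev = np.linalg.norm(placed_apex - planned_apex)      # mm at the implant apex
    v1, v2 = planned_apex - planned_entry, placed_apex - placed_entry
    cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angular_dev = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))  # degrees between axes
    return entry_dev, apex_dev, angular_dev

# Hypothetical planned vs. placed coordinates (mm) for one mold.
print(implant_deviation([0, 0, 0], [0, 0, -11], [0.4, 0.2, 0.1], [0.9, 0.5, -10.8]))
```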
Conclusions: The vision-guided robotic system significantly enhances the spatial accuracy of dental implants compared to traditional manual methods. By integrating advanced force-feedback control and real-time visual guidance, the system addresses key challenges in implant positioning, particularly for patients with complex anatomical structures. These findings suggest that robotic-assisted implant surgery could offer a safer and more precise alternative to manual procedures, reducing the risk of implant misalignment and associated complications.
Published in
Pisla D, Bulbucan V, Hedesiu M, Vaida C, Zima I, Mocan R, Tucan P, Dinu C, Pisla D, Team Project Group. A Vision-Guided Robotic System for Safe Dental Implant Surgery. J Clin Med. 2024 Oct 23;13(21):6326. doi: 10.3390/jcm13216326. PMID: 39518466; PMCID: PMC11546886.
Real-Time Motion Compensation for Dynamic Dental Implant Surgery
Background: Accurate and stable instrument positioning is critical in dental implant procedures, particularly in anatomically constrained regions. Conventional navigation systems assume a static patient head, limiting adaptability in dynamic surgical conditions. This study proposes and validates a real-time motion compensation framework that integrates optical motion tracking with a collaborative robot to maintain tool alignment despite patient head movement.
Methods: A six-camera OptiTrack Prime 13 system tracked rigid markers affixed to a 3D-printed human head model. Real-time head pose data were streamed to a Kuka LBR iiwa robot, which guided the implant handpiece to maintain alignment with a predefined target. Motion compensation was achieved through inverse trajectory computation and second-order Butterworth filtering to approximate realistic robotic response. Controlled experiments were performed using the MAiRA Pro M robot to impose precise motion patterns, including pure rotations (±30° at 10–40°/s), pure translations (±50 mm at 5–30 mm/s), and combined sinusoidal motions. Each motion profile was repeated ten times to evaluate intra-trial repeatability and dynamic response.
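The second-order Butterworth smoothing applied to the streamed head-pose data can be illustrated with a short, stateful filter such as the one below. This is a simplified sketch rather than the authors' controller; the OptiTrack frame rate and cutoff frequency are assumed values.

```python
# Illustrative sketch: smoothing one streamed pose component with a stateful
# second-order Butterworth low-pass filter, sample by sample, as tracking frames arrive.
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

fs = 120.0      # assumed OptiTrack streaming rate in Hz
cutoff = 5.0    # assumed low-pass cutoff in Hz, approximating the robot's dynamic response
b, a = butter(N=2, Wn=cutoff, fs=fs, btype="low")

class AxisFilter:
    """Streaming Butterworth filter for one pose component, e.g. x-translation."""
    def __init__(self, first_sample: float):
        self.zi = lfilter_zi(b, a) * first_sample   # initialize state to avoid a start-up transient

    def update(self, sample: float) -> float:
        y, self.zi = lfilter(b, a, [sample], zi=self.zi)
        return float(y[0])

# Example: filter a noisy sinusoidal head translation and derive the compensating tool offset.
t = np.arange(0, 2, 1 / fs)
head_x = 50 * np.sin(2 * np.pi * 0.5 * t) + np.random.normal(0, 0.3, t.size)  # mm
f = AxisFilter(head_x[0])
for sample in head_x:
    smoothed = f.update(sample)
    robot_offset_x = smoothed   # the tool target is shifted by the filtered head displacement
```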
Results: The system achieved consistent pose tracking errors below 0.2 mm, tool center point (TCP) deviations under 1.5 mm across all motion domains, and an average latency of ~25 ms. Overshoot remained minimal, with effective damping during motion reversal phases. The robot demonstrated stable and repeatable compensation behavior across all experimental conditions.
Conclusions: The proposed framework provides reliable real-time motion compensation for dental implant procedures, maintaining high positional accuracy and stability in the presence of head movement. These results support its potential for enhancing surgical safety and precision in dynamic clinical environments.