
AI Applications in Dentistry

Maria L. Geisinger, DDS, MS; and Ethan S. Madison, DDS

September 2024 Issue - Expires Thursday, September 30th, 2027

Inside Dental Technology

Abstract

Artificial intelligence (AI) is increasingly being used to power a broad spectrum of technologies and devices in dentistry. Although AI offers great promise in its ability to improve care and create efficiencies, there are many ethical considerations surrounding its use. To protect patients, it is important that AI tools are used only for clinical decision-making support and are not trusted to make clinical decisions for practitioners. AI should never be a replacement for the sound clinical judgment of an appropriately educated, trained, and experienced provider. This article provides background on the various subfields and approaches to AI algorithms, examines how an understanding of such systems informs the responsible use of AI-powered dental technologies, and explores some of the regulatory and legal considerations involved. Recommendations are provided to guide practitioners in evaluating AI-powered tools for implementation, and a summary of some of the various applications of AI in dentistry is included along with guidance regarding benefits and challenges.


As a field, artificial intelligence (AI) was founded in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence by John McCarthy, Marvin Minsky, Nathaniel Rochester, Claude Shannon, and others. The term AI was first introduced in the proposal for the conference, which outlined that an attempt would be made to find out how to "make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."1 This capacity is encoded in algorithms, and we can distinguish AI from other algorithms using two rules: 1) AI algorithms cannot be prescriptive and 2) AI algorithms should be able to solve problems without prior knowledge of all possible inputs and predetermined handling for each.

Subfields of AI

Because AI is such a broad field with origins dating back to the 1950s, there are now many different subfields of AI that use varying techniques, or algorithms, to achieve a variety of goals. The two most discussed subfields for applications within dentistry are "machine learning" and "deep learning," which is a subset of machine learning (Figure 1).2

Machine learning, which originated in the 1980s, is distinguished by the capacity to learn without explicit programming or the capacity to generalize or apply a solution from one problem to another similar problem. There are three main categories of machine learning techniques that correspond to the paradigms by which the learning occurs: supervised learning, in which the machine is given training data (eg, example inputs and desired outputs) with the goal of generalizing from them to new inputs; unsupervised learning, in which the machine is not given any labeled data and must instead find patterns in its input; and reinforcement learning, in which the machine interacts with a dynamic environment and performs certain tasks while being provided feedback. In dentistry, supervised learning may be used to train a caries classifier to assess radiographic images, whereas unsupervised learning may be used to identify risk factors for oral cancer from patient data. Reinforcement learning, which is used more regularly in other fields, such as gaming, has few applications in clinical dentistry to date.
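As a rough illustration of the supervised paradigm described above, the following sketch trains a simple classifier on labeled examples. It assumes the scikit-learn library, and the feature values and labels are hypothetical placeholders rather than data from any real system or product.

```python
# Minimal sketch of supervised learning: a toy "caries classifier" trained on
# labeled examples using scikit-learn. The features and labels below are
# hypothetical placeholders, not real clinical data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row is a pair of invented numeric features that might be extracted from
# a radiograph (eg, mean radiolucency, lesion area); 1 = caries, 0 = sound.
features = [
    [0.82, 4.1], [0.15, 0.3], [0.77, 3.6], [0.20, 0.5],
    [0.91, 5.0], [0.10, 0.2], [0.68, 2.9], [0.25, 0.4],
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

# The model generalizes a decision rule from the labeled examples rather than
# following hand-written rules for every possible input.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```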

Deep learning refers to a subset of machine learning techniques that use artificial neural networks with multiple layers (hence the word "deep") to improve accuracy and the capacity for learning.3 Such artificial neural networks are algorithms that are based on the patterns of connection among neurons found in animal brains. In an in vivo neural network, any connection between two neurons has a strength: the greater the strength, the more potent the effect of that connection, and the weaker the strength, the less potent the effect. This relationship between neurons is mimicked in the artificial neural networks used in deep learning so that training and/or pattern recognition leads to the relative strengthening or "weighting" of some neural connections and the relative weakening of others. Deep learning can be applied to any of the three aforementioned machine learning techniques, and it could be especially useful in dentistry in the analysis of radiographic and other images.
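To make the idea of weighted connections concrete, the short sketch below (using NumPy, as an assumption of convenience) passes a small input through a two-layer network. The weight values are arbitrary and are meant only to show how stronger connections contribute more to the output; real deep learning models contain millions of such weights, adjusted automatically during training.

```python
# Sketch of weighted connections in a tiny two-layer neural network.
# The weights are arbitrary; in deep learning they are adjusted during
# training so that useful connections strengthen and others weaken.
import numpy as np

def relu(x):
    return np.maximum(0, x)  # common activation: pass positive signals, zero out negatives

# Hypothetical input features (eg, pixel intensities from a small image patch)
inputs = np.array([0.6, 0.1, 0.9])

w_hidden = np.array([[0.8, -0.2, 0.5],    # 2 hidden neurons, each with 3 weighted inputs
                     [0.1,  0.9, -0.4]])
w_output = np.array([0.7, -0.3])          # 1 output neuron weighting the 2 hidden neurons

hidden = relu(w_hidden @ inputs)  # each hidden neuron sums its weighted inputs
output = w_output @ hidden        # the output is a weighted combination of hidden activations
print("Network output:", output)
```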

Another concept that is important to understand in computer science and AI theory is that these algorithms are often opaque in their implementation. That is, as outside observers, we often do not know how they are implemented or how they perform their internal logic; we only know the inputs and outputs. In fact, for such algorithms, often even the creators cannot fully explain why or how a given input produces a specific output. AI systems that use algorithms that are unclear or hidden from the user are referred to as "black box" systems, and due to their unintelligible nature, they can introduce risks in the context of healthcare applications. To address such risks, explainable AI (XAI) systems are being developed. XAI is a subfield of AI that is concerned with finding solutions to make the outputs of AI algorithms transparent and therefore more understandable, trustworthy, and safe.4 XAI algorithms expose their internal representations and reasoning strategies, allowing them to be understood and optimized for performance.5 Although XAI is a growing and essential field of AI research, its use is uncommon in commercial and clinical AI applications due to proprietary and financial pressures that advantage algorithmic opacity for companies and organizations.
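One simple and widely used explainability technique is permutation feature importance: shuffle one input feature at a time and observe how much the model's accuracy suffers. The sketch below, which assumes scikit-learn and reuses the hypothetical caries-classifier setup from the earlier example, is offered only as an illustration of the general idea, not as a description of how any commercial dental XAI product works.

```python
# Sketch of one basic explainability technique: permutation feature importance.
# Shuffling a feature that the model relies on degrades its accuracy, so large
# drops indicate influential features. Data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical features: [radiolucency score, lesion area, patient age]
X = np.array([[0.82, 4.1, 54], [0.15, 0.3, 23], [0.77, 3.6, 61], [0.20, 0.5, 35],
              [0.91, 5.0, 47], [0.10, 0.2, 29], [0.68, 2.9, 58], [0.25, 0.4, 41]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, importance in zip(["radiolucency", "lesion area", "age"],
                            result.importances_mean):
    print(f"{name}: mean accuracy drop when shuffled = {importance:.3f}")
```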

Responsible Use of AI in Healthcare

As oral healthcare practitioners, we have an abundance of different tools at our disposal to affect our patients' health, and many of these leverage AI. Although AI-powered tools can save human labor and time, they are not always the most appropriate or best options. Our trustworthiness and legal standing stem from our ability to explain our decisions, but with many of the current opaque algorithms, such explanations are not readily available. Therefore, if a clinical decision is based solely upon an opaque computer algorithm, it becomes difficult to explain and defend to patients and colleagues and in medicolegal settings. Outputs from AI-powered systems may be prone to errors because many AI algorithms contain biases from their designers, training datasets, or training supervisors. Such biases can be compounded through the process of supervised or unsupervised learning. Furthermore, in addition to the common issues with opacity and bias, the performance of AI may be negatively affected by the presence of confounding features in training data.

When using AI, as with any tool, the responsibility ultimately falls on the practitioner, not the tool itself. As such, AI tools should be considered supplemental aids to best practices in dentistry rather than replacements for provider involvement. For example, a denture design created solely with AI should not be relied upon as the final design without review. Technicians must use their knowledge and expertise to check the AI's work and ensure that all design elements align with the clinical and medical considerations for the specific case. It is also important that all settings and values align with best design practices for the particular prosthesis. Additional problems associated with using AI tools as primary decision-makers include deskilling (ie, the loss of technical and nontechnical skills as a result of outsourcing tasks to software) and automation bias (ie, passive acceptance of AI outputs resulting in overreliance).6 Presently, AI-powered tools regularly make the news for "hallucinating" or "confabulating," which are simply ways of saying that they generate false or fabricated information. If a patient is harmed because a practitioner follows an incorrect AI-generated diagnosis, treatment plan, or prosthetic design, the practitioner would be a culpable party, possibly along with the developer of the AI tool. An infamous example of automation bias is highlighted by the Therac-25 accidents of 1985 to 1987, in which patients were killed or severely injured by radiation overdoses caused by faults in the software controlling a computerized radiation therapy machine; both the manufacturer and the operators of the machines were held responsible for the accidents that occurred.7

Instead of allowing AI tools to lead decision-making processes, they should be used to provide second opinions to validate or question the decisions made by adequately trained healthcare professionals. Providers should always seek to understand any conclusion made by a machine in order to ensure its appropriateness. This is precisely where XAI can be so useful. In order for AI-powered dental technologies to be trustworthy and suitable for clinical uses, they must be able to explain how and why they come to their decisions regarding diagnoses or treatments. As a clinical resource, XAI can help clinicians save time or verify their understanding of a topic. Similarly, XAI can aid in the education of dentists and other oral healthcare providers by helping them identify patterns and explain their reasoning for making certain decisions. And because XAI also helps its users understand the biases and confounding factors of the system, it enables more informed decision-making.5 In a 2019 article proposing a governance model for the application of AI in healthcare, Reddy and colleagues observed that patients' "trust in clinicians encompasses trust in the clinical tools they choose to use, and in the selection of those tools, including AI-based tools."8

Regulatory and Legal Considerations

In the United States, healthcare products that incorporate AI are regulated by the US Food and Drug Administration (FDA) and must be approved before clinical use. If the software uses protected health information, then it must also be in compliance with the Health Insurance Portability and Accountability Act (HIPAA) privacy and security rules. Although these requirements could present barriers for the adoption of AI in day-to-day clinical practice, they could be overcome through the use of de-identified/anonymized information and/or the incorporation of AI within encrypted and protected electronic health record software.6 The United States also lacks an omnibus privacy protection law; therefore, regulatory privacy protections are region- or sector-specific and often rely on corporations and other entities to self-monitor to protect consumers' interests.9 This means that vendors or partners of AI-powered software applications may or may not protect patient data sufficiently to meet the standards required in healthcare settings.

The responsibility is on providers and their organizations to make good decisions regarding the storage and transfer of patient data. For example, a common misconception is that storing data on cloud-based servers inherently provides enhanced security when compared with storing it locally. Healthcare providers should keep in mind that "the cloud" refers to computer servers managed by outside entities and that cloud storage does not provide a guarantee of security, privacy, trustworthiness, or even reliability. Accordingly, AI system data, as well as electronic health records, digital image files, and digital backups, should not be stored on a cloud platform that is not compliant with the applicable consumer data protection laws (eg, HIPAA, the General Data Protection Regulation). Practitioners should also consider that the current laws and regulations may not be sufficient. There is often a lag period between the emergence of new technologies and appropriate regulation, and in some cases, even up-to-date regulations may fail to provide the degree of protection that healthcare providers are ethically bound to provide for their patients.

Grande and colleagues identified several medicolegal and ethical risk factors associated with the digital processing of personal data, including invisibility, which refers to the fact that people are unaware that their data is being collected and are denied the opportunity to opt out; inaccuracy, which refers to inaccurate data or the inaccurate interpretation of data by humans or machines; immortality, which refers to the fact that data is collected but rarely destroyed, increasing the risk of misuse; marketability, which refers to how patient data may be used for profit by corporations without benefiting the individuals; and identifiability, which refers to how reidentification is often trivial, even after anonymization.9 Furthermore, they suggest that these risk factors apply to health and non-health data alike.6 Because of these risk factors, private patient data can be used for nefarious purposes by hackers or even by organizations that legally receive inappropriately transferred data. Therefore, as healthcare practitioners, we should be extremely cautious about transferring any patient data in ways that are beyond our control to prevent violating the privacy of our patients and thereby harming them.

AI systems possess an intrinsic risk for error. Errors associated with AI systems include those related to software or hardware faults, deficiencies in training, inappropriate design choices, or use outside of the intended context. The European Commission considers all AI products that are designed for healthcare to be "high-risk" and mandates that such products meet an extensive list of requirements prior to legal approval.6 Although regulations differ throughout the global community, the risks identified by the European Commission apply worldwide, and individual practitioners should adhere to best practices. Presently, AI is fundamentally incapable of thinking, learning, or understanding in the same way that humans do. As a result, we should always seek to verify the output of AI systems before applying it to patient care. We can embrace these technologies to provide decision-making support, but only if we ensure that any given answer is accurate and appropriate.

Another concern related to the currently available AI tools and ongoing AI research in dentistry is the lack of professional and academic discussion surrounding their ethical use. Ethical questioning regarding dental AI technologies, as well as discussion of the biases inherent in the software systems, is largely absent from the scientific literature. As mentioned earlier, these biases can come from a variety of sources and may affect the appropriateness of the AI's decisions for the particular patient populations seen by practitioners. Access inequality is another problem that is rarely discussed, and it may result in differences in the quality of the care administered in different settings. And finally, many of the studies of the performance of AI systems in dentistry are only validated internally due to the proprietary nature of the specific algorithms. If not adequately addressed, these issues could negatively impact the confidence of practitioners and patients in the current and future applications of AI.10

Regarding informed consent, there is an open question as to whether patients can meaningfully consent to the use of AI software. Both patients and practitioners may lack the technical literacy or topical knowledge to fully understand the implications of using AI software,6 even if education is provided. Moreover, if the AI tool in question does not leverage XAI technologies, it is an unintelligible black box, and many argue that informed consent cannot be given to use such tools, even when patients possess technical literacy and a knowledge of AI systems.

In summary, the responsible use of AI tools in clinical dental practice requires careful analysis, an understanding of the risk factors and regulations, proper stewardship of any patient data, and full ownership by practitioners of any decisions made.

Evaluating AI Systems for Implementation

Once dental professionals understand how to responsibly use AI-powered tools, the next issue is identifying ones that they can safely use in their practice or laboratory. Clinicians and technicians should consider tools that help them complete tasks more efficiently rather than removing them from the performance of the task. In addition, any tools selected should provide an output that dental professionals can verify is correct using their own technical or clinical knowledge and judgment. For example, a tool used for digital denture design might provide a suggested initial design and prompt the technician to approve it. Such a tool may reduce the time required to initially set up the workspace for a specific type of case; however, it still requires the technician to examine each setting and adjust or customize it based on their own experience, design expertise, and knowledge of the case, regardless of how reliable the AI tool may appear to be. When possible, tools that use XAI software should be selected to provide the opportunity for logical analysis and critique of the outputs.
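The sketch below illustrates, in schematic form, the kind of review gate described above: the AI output is treated as a proposal that must be explicitly approved, adjusted, or rejected by the technician before work proceeds. The function names and design settings are hypothetical and merely stand in for whatever a real design package would expose.

```python
# Schematic human-in-the-loop gate for an AI-proposed denture design.
# All function names and settings are hypothetical; the point is that nothing
# proceeds without explicit review and sign-off by the technician.

def propose_initial_design(scan_data):
    """Stand-in for an AI tool that returns a suggested starting design."""
    return {"occlusal_scheme": "lingualized", "tooth_mould": "A27", "base_thickness_mm": "2.0"}

def technician_review(design):
    """The technician inspects every setting and may accept, override, or reject it."""
    for setting, value in design.items():
        answer = input(f"Accept {setting} = {value}? [y = yes / n = reject design / other = new value] ").strip()
        if answer.lower() == "n":
            return None                   # reject the AI proposal outright
        if answer and answer.lower() != "y":
            design[setting] = answer      # override with the technician's own value
    return design

proposal = propose_initial_design(scan_data=None)
approved = technician_review(proposal)
print("Final design:", approved if approved else "rejected; design the case manually")
```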

Another recommendation is to seek out AI-powered tools that use open-source software rather than proprietary software. With an open-source software license, end users can verify that the software behaves as promised, but with proprietary software, which is closed-source and does not permit verification of the path from input to output, end users must rely on manufacturers and sponsored publications for verification. Although the frequency of such violations of trust cannot be known, they are possible. For example, a software package whose vendor claims it does not transmit patient data could indeed do so without the knowledge of the end user if its source is not available for inspection. Open-source software can also be tailored to meet the needs of individuals or communities and can be provably secure or private, whereas closed-source software cannot. In addition, open-source licenses do not restrict the business model of the software's authors, so the software can be provided without cost, be sold for a fee, require a subscription, or be distributed in any other manner.

When possible, practitioners should give preference to AI tools with software that runs locally; that is, on machines that they control. With local software, practitioners can more securely limit with whom the data is shared. On the other hand, when using a cloud service, the data could potentially be shared arbitrarily, outside of the practitioner's control. The selection of tools with open-source software that also runs locally virtually eliminates the risk of unwanted data collection by other entities.

Finally, when considering AI tools that incorporate supervised learning, practitioners should give preference to those whose training datasets have been published. The ability to view and understand the training data of an AI tool can enable practitioners to identify flaws or weaknesses in the training, including potential bias; use the tool in appropriate contexts; and provide additional training as needed.
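As a simple example of what a first pass over a published training dataset might look like, the sketch below tallies class balance and one demographic field. The record fields are hypothetical, and real bias auditing goes well beyond counts like these.

```python
# Sketch: a first-pass look at a published training dataset for imbalance.
# Record fields are hypothetical; real bias audits go well beyond simple counts.
from collections import Counter

training_records = [
    {"label": "caries", "age_group": "18-34"},
    {"label": "caries", "age_group": "35-64"},
    {"label": "sound",  "age_group": "35-64"},
    {"label": "caries", "age_group": "65+"},
    {"label": "sound",  "age_group": "18-34"},
]

label_counts = Counter(record["label"] for record in training_records)
age_counts = Counter(record["age_group"] for record in training_records)

print("Label balance:", dict(label_counts))
print("Age-group coverage:", dict(age_counts))
# Gaps here (eg, few older patients or few sound teeth) suggest contexts in which
# the tool's output deserves extra scrutiny or additional training.
```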

Applications of AI in Dentistry

Although there continue to be issues regarding the implementation, ethical use, and clinical applicability of AI-powered tools in dentistry, it is becoming increasingly apparent that AI will be central to many aspects of our practices going forward. Given the potential for AI in diverse aspects of healthcare, including natural language processing, pattern recognition in diagnosis, analysis of risk factors for dental disease, and more, it would behoove practitioners to be aware of the current landscape as well as possible future applications for AI in dentistry. Many dental software packages and services are now incorporating AI technologies in a variety of areas of dental practice to become AI-enhanced dental software suites.

Radiologic Pathosis Detection

One common application of AI in dentistry is radiographic pathosis detection. Computer vision, which allows for excellent pattern-matching capabilities by combining deep learning with large training image sets, enables AI-powered radiology software to identify and mark suspected pathoses for further review and diagnosis by a clinician. Such software packages often report a probability or level of confidence in their findings. Software applications that are currently on the market and regulated for use in the United States and European Union support the use of AI in the radiographic detection of caries, alveolar bone loss, calculus, and periapical radiolucencies.11,12 One advantage of using AI in this capacity is its ability to detect small changes in radiopacity in order to identify lesions at early stages when caries remineralization and/or minimally invasive treatments may be applicable. Such early signs of demineralization or minimal progression of alveolar bone loss may not otherwise be reliably detected by all clinicians. Some software suites recommend treatment plans based on the radiographic findings or offer features such as image segmentation, which refers to the identification and separation of different tissues from one another. One challenge that AI faces in disease identification is that it may be difficult for it to distinguish active, progressive disease from stable, previously treated disease, particularly with only one or a few data points. For example, a patient who has been successfully treated for periodontitis but has a reduced periodontium could be misidentified by AI as having active periodontitis based on its evaluation of the alveolar bone loss on radiographs in the absence of other information. Due to these and other challenges, dental healthcare providers should continue to use clinical judgment to make diagnoses and treatment plans. AI can be a powerful tool when used to support clinical decision-making or to better identify and explain problems and solutions to patients, but it should never be used as a replacement for the training and experience of clinicians (Figure 2).
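To illustrate how reported confidence values can support, rather than replace, clinical judgment, the following sketch sorts a few hypothetical findings into those flagged for prioritized clinician review and those noted as low confidence. The threshold and the findings are invented for the example; no specific product is claimed to behave exactly this way.

```python
# Sketch: triaging AI-reported radiographic findings by confidence.
# Thresholds and findings are hypothetical; every flagged item still
# requires the clinician's own reading and diagnosis.
findings = [
    {"tooth": "3",  "suspected": "interproximal caries", "confidence": 0.91},
    {"tooth": "14", "suspected": "periapical radiolucency", "confidence": 0.62},
    {"tooth": "19", "suspected": "alveolar bone loss", "confidence": 0.35},
]

REVIEW_THRESHOLD = 0.50   # hypothetical cutoff for prioritizing review

for f in sorted(findings, key=lambda f: f["confidence"], reverse=True):
    status = ("flag for clinician review" if f["confidence"] >= REVIEW_THRESHOLD
              else "low confidence; verify against the image directly")
    print(f"Tooth {f['tooth']}: {f['suspected']} "
          f"(confidence {f['confidence']:.2f}) -> {status}")
```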

Diagnosing Periodontal Disease

Since 1999, researchers have attempted to use AI to aid in the diagnosis of periodontal disease. In the infancy of computer vision, Juan and colleagues published an article describing the automated recording of periodontal probing depths via a camera and computer vision.13 In the years since, more sophisticated technologies have enabled AI-based radiographic periodontal assessments and prognoses.10,13,14 Some of these software services are reported to demonstrate better accuracy and reliability in diagnosing periodontal disease than trained dentists because of the vast amount of training information that they possess and the subtleties that they can detect.15 In addition, AI has been used in the at-home monitoring of patients to provide counseling and as an educational and motivational tool for oral hygiene.16 Due to the chronicity of periodontitis, assessment tools that combine radiographic, clinical, and medical factors to develop risk assessments and inform patient care may be particularly valuable when addressing the disease at a patient level. Practitioners who use such tools need to understand that the quality of the input data, such as the accuracy of periodontal probing depths and calculated clinical attachment levels, is critically important to the tools' ability to detect sites where disease progression may be occurring and to recommend intervention. As such, some of the oldest technologies of the group (the automated recording of periodontal probe measurements and periodontal probe calibration to ensure that the most accurate data is used in clinical decision-making) may be the most useful.
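Because the quality of the input data matters so much here, a short worked example of one derived measurement may help: clinical attachment level is commonly calculated from the probing depth and the position of the gingival margin relative to the cementoenamel junction (CEJ). The sketch below assumes the convention that the margin-to-CEJ distance is recorded as positive when the margin is apical to the CEJ (recession) and negative when coronal to it; charting conventions vary, so treat this as illustrative only.

```python
# Sketch: clinical attachment level (CAL) from probing depth and gingival
# margin position. Convention assumed here: margin_to_cej is positive when
# the margin is apical to the CEJ (recession) and negative when coronal to it.
def clinical_attachment_level(probing_depth_mm, margin_to_cej_mm):
    return probing_depth_mm + margin_to_cej_mm

sites = [
    {"site": "3 MB",  "pd": 5, "margin_to_cej": 2},    # 2 mm of recession
    {"site": "19 ML", "pd": 4, "margin_to_cej": -1},   # margin 1 mm coronal to the CEJ
]

for s in sites:
    cal = clinical_attachment_level(s["pd"], s["margin_to_cej"])
    print(f"Site {s['site']}: probing depth {s['pd']} mm, CAL {cal} mm")
# Small errors in probing depth or margin position propagate directly into CAL,
# which is why accurate, calibrated probing data matters for any AI risk assessment.
```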

Orthodontic Treatment Planning

Since the advent of clear aligners, orthodontics has been strongly tied to AI for treatment planning and case management. Most therapies with clear aligners begin with an intraoral scan, at which point a computer system makes recommendations regarding tooth movement and estimates the treatment time and number of trays needed.11 Although not all offices perform orthodontics, these technologies are used daily by many orthodontists and are among the most widely adopted AI-powered dental products. Orthodontic treatment planning software may also use neural networks for the identification of cephalometric points, tooth segmentation, diagnosis, treatment plan previews, and ongoing reevaluation of orthodontic tooth movement during active treatment.14

Restorative and Surgical Treatment Planning

AI is also being used to inform restorative and surgical treatment planning. For example, when teeth are being prepared for crowns, AI technology can be used to identify restorative margins on intraoral scans and then provide feedback on the crown preparation. It can even be used as an assistive tool during the laboratory's design of the restoration and can ease communication between the clinician and the laboratory technician during review of the restoration design. Clinicians and technicians who use AI to visualize their work and receive feedback are given additional opportunities to improve their patient care through short feedback loops.11 Such feedback can also aid in training student dentists and enable dental healthcare professionals to continually improve their skills over time.

In the realm of implantology, AI is increasingly being used for surgical planning, including for the identification of anatomic structures (eg, the inferior alveolar nerve canal), tooth segmentation, the merging of cone-beam computed tomography scans and intraoral scans, restoration design, and the generation of surgical plan proposals.11,17 Such AI plans can then be evaluated by surgical and restorative practitioners to ensure their appropriateness, and static and/or dynamic guides can then be used to increase the accuracy of dental implant placement.18 The integration of AI into the digital workflow greatly reduces the time that clinicians need to spend planning implant cases.

Conclusion

AI technologies continue to emerge in dentistry across a variety of disciplines, and their use may improve patient outcomes and provider efficiency. However, many of these nascent AI technologies are currently still quite limited, and any dental healthcare provider who wishes to leverage these technologies must understand the limitations, as well as the risks, associated with their use. Practitioners should seek to minimize these risks by practicing the concept of "never trust, always verify." Furthermore, they should use AI tools to accelerate workflows rather than to replace clinical judgment and expertise, and when possible, use explainable AI to mitigate the black box effect, use open-source and local software, and examine the training data of any supervised learning tool. As the development and integration of AI continues, the profession of dentistry must find ways to leverage these technologies to facilitate optimal patient care without losing the ability to grasp the overall picture and deliver true personalized care for our patients.

References

1. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence. Stanford website. https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html. Published August 31, 1955. Accessed May 22, 2024.

2. Lollixzc. English: machine learning as a subset of artificial intelligence. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:AI_hierarchy.svg. Published August 19, 2022. Accessed May 22, 2024.

3. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436-444.

4. Holzinger A, Goebel R, Fong R, et al. xxAI - Beyond explainable artificial intelligence. In: Holzinger A, Goebel R, Fong R, et al, eds. xxAI - Beyond explainable AI: International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers. Springer International Publishing; 2022:3-10.

5. Ma J, Schneider L, Lapuschkin S, et al. Towards trustworthy AI in dentistry. J Dent Res. 2022;101(11):1263-1268.

6. Oliva A, Grassi S, Vetrugno G, et al. Management of medico-legal risks in digital health era: a scoping review. Front Med (Lausanne). 2022;8:821756.

7. Leveson NG. The Therac-25: 30 years later. Computer. 2017;50(11):8-11.

8. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. 2020;27(3):491-497.

9. Grande D, Luna Marti X, Feuerstein-Simon R, et al. Health policy and privacy challenges associated with digital technology. JAMA Netw Open. 2020;3(7):e208285.

10. Mörch CM, Atsu S, Cai W, et al. Artificial intelligence and ethics in dentistry: a scoping review. J Dent Res. 2021;100(13):1452-1460.

11. Chen YW, Stanley K, Att W. Artificial intelligence in dentistry: current applications and future perspectives. Quintessence Int. 2020;51(3):248-257.

12. Kabir T, Lee CT, Chen L, et al. A comprehensive artificial intelligence framework for dental diagnosis and charting. BMC Oral Health. 2022;22(1):480.

13. Juan MC, Alcañiz M, Monserrat C, et al. Computer-aided periodontal disease diagnosis using computer vision. Comput Med Imaging Graph. 1999;23(4):209-217.

14. Monill-González A, Rovira-Calatayud L, d'Oliveira NG, Ustrell-Torrent JM. Artificial intelligence in orthodontics: where are we now? A scoping review. Orthod Craniofac Res. 2021;24(Suppl 2):6-15.

15. Ossowska A, Kusiak A, Świetlik D. Artificial intelligence in dentistry-narrative review. Int J Environ Res Public Health. 2022;19(6):3449.

16. Shen KL, Huang CL, Lin YC, et al. Effects of artificial intelligence-assisted dental monitoring intervention in patients with periodontitis: a randomized controlled trial. J Clin Periodontol. 2022;49(10):988-998.

17. Mohammad-Rahimi H, Motamedian SR, Pirayesh Z, et al. Deep learning in periodontology and oral implantology: a scoping review. J Periodontal Res. 2022;57(5):942-951.

18. Ku JK, Lee J, Lee HJ, et al. Accuracy of dental implant placement with computer-guided surgery: a retrospective cohort study. BMC Oral Health. 2022;22(1):8.

Figure 1. Hierarchy of artificial intelligence, machine learning, and deep learning.

Figure 2. Benefits of the responsible use of AI in dentistry.


Learning Objectives:

  • Define artificial intelligence and identify some of its subfields.
  • Describe some of the regulatory and legal considerations surrounding the use of AI-powered tools in healthcare.
  • Discuss the responsible use of AI in dentistry.
  • Summarize the considerations involved in evaluating AI-powered tools for implementation and identify some of the primary applications of AI in dentistry.

Disclosures:

The authors report no conflicts of interest associated with this work.

Queries for the author may be directed to justin.romano@broadcastmed.com.