Patient First Approach to Decision-Support Systems in Medicine

Decision Tree Approach to Decision-Support Systems in Medicine

This page advocates for a patient-first approach to decision-support systems in medicine, with the goal of helping a patient make the informed decision that is best for them.

Principal Author: Katherine Breen

Abstract

Decision-support systems are tools that can minimize the risk of medical errors made when physicians or patients make decisions. Incorporating a patient's preferences into these systems is difficult but necessary when trying to make the best possible decision for a patient. Each patient has their own values, background, and medical history that shape their treatment preferences. This page proposes a decision-tree-based decision-support system for medical decisions. Such a system would be interpretable by patients, trusted by doctors, and easy to implement in a hospital. Additionally, a patient could impose their own preferences on the decision tree, which avoids the problem of having a model try to infer a patient's preferences.

Builds on

This page builds on decision trees.

Related Pages

There are currently no pages related to this one.

Content

Introduction and Background

Decision-Support Systems and Machine Learning

Decision-support systems are tools that can be used to help a patient and physician make a decision. These systems exist to help minimize the risk of medical errors, since physicians are at risk of making errors no matter their qualifications or experience [1]. In the context of artificial intelligence, such support systems have already been implemented to help doctors make a diagnosis, predict adverse reactions to medications, or read medical images [1]. Physicians and other healthcare professionals typically make decisions in a matter of seconds or minutes based on their interaction with a patient and the information available to them [2]. Thus, physicians may have only limited knowledge of the patient's medical history [2]. AI-based decision-support systems leverage the growing volume of electronic health records to aid physicians in making decisions, taking into account information or patterns a physician may have missed [2]. These AI-based support systems assume that there exists a function linking data on patients to a specific class (the class could be a disease, a set of treatments, a risk level, or another health-related outcome), and that this function can be approximated by deep learning or a more general machine learning technique [1]. So far, the effectiveness of AI-based support systems has been mixed: some systems perform well while being tested but perform poorly in actual practice [1][3].
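
The sketch below is a minimal illustration of this core assumption: that a function from patient data to an outcome class can be approximated from historical records. All features, labels, and data here are synthetic and purely illustrative, and scikit-learn's random forest stands in for whatever deep learning or general machine learning technique an actual system might use.

```python
# Minimal sketch of the classification assumption behind ML-based decision
# support: a function from patient features to an outcome class is learned
# from historical records. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 500
X = rng.normal(size=(n_patients, 3))   # hypothetical features: age, blood pressure, lab value
# Hypothetical label: 1 = adverse reaction, 0 = none (made-up rule plus noise)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```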

Example of ML-Based Decision Support Systems

An example of a recently developed AI-based decision-support system is a deep learning assistant that helps pathologists differentiate between two types of primary liver cancer [3]. The model had an accuracy of 0.885 when predicting the type of primary liver cancer [3]. When implemented in a clinical setting, the decision-support system significantly improved the pathologists' accuracy [3]. However, when the model's prediction was incorrect, the pathologists' accuracy significantly decreased [3]. This could be because the pathologists became too reliant on the diagnosis of the decision-support system [1].

Problems with AI-Based Decision Support Systems

Many decision-support systems work well during development and testing but have not been proven to perform well in a clinical setting [1]. Thus, a primary goal when developing a decision-support system is to produce one that will actually be used by physicians and patients. For a system to be used in a clinical setting, the patient and doctor need to trust the system, and the system needs to account for the patient's preferences when making a decision.

Creating trust between the decision-support system, the patient, and the doctor

There exist many AI-based support systems that are effective at predicting outcomes, prioritizing treatment, or helping choose the best course of action [2]. Despite this promise, many of these systems have not been implemented because of trust issues, typically referred to as the black box problem [2]. The black box problem refers to how hard it is to understand why a machine learning model makes a particular decision. For example, if a support system tells a physician to prescribe a drug, drug A, but the physician wants to prescribe a different drug, drug B, the physician might not trust the support system enough to prescribe drug A instead without knowing why the system suggests it [2].

One way to surmount the black box problem is to use decision tree models. Decision trees are a machine learning algorithm that produces results that are easy to interpret: the algorithm produces a tree of statements that can be followed to understand why the model came to the conclusion it did [2]. Decision trees are the most common models used in clinical practice today; while other models produce better results, decision trees surmount the black box problem [4][2], and their explainability is the reason they are so widely used [2]. Physicians and patients want to be informed about why a decision is being made.
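
As a concrete illustration of this interpretability, the minimal sketch below fits a small decision tree on synthetic data (with hypothetical feature names) and prints the learned rules as a chain of if/then statements that a physician or patient could follow step by step. scikit-learn's export_text is used here only as one convenient way to render such rules.

```python
# Sketch: a decision tree's learned rules can be printed as plain if/then
# statements, which is what avoids the "black box" problem described above.
# Data, labels, and feature names are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))        # hypothetical features
y = (X[:, 0] > 0.2).astype(int)      # hypothetical label: drug A (1) vs drug B (0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# Human-readable rules that can be followed to see why a prediction was made
print(export_text(tree, feature_names=["age", "creatinine", "blood_pressure"]))
```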

Balancing the roles of patient, doctor, and decision-support system

When it comes to implementing decision-support systems, there needs to be a balance between the doctor's input, the patient's input, and the decision-support system's input [5]. All three of these agents (doctor, patient, and decision-support system) can make mistakes [1]. To minimize the risk of medical mistakes, the three agents should work together to determine the best decision. If a doctor relies too much on a decision-support system, they could be influenced by the system to make the same mistakes the system made, as in the liver cancer example above [2][1]. This also affects the patient: a patient may be swayed towards a choice that goes against their preferences if the doctor or support system suggests it. Ultimately the choice affects the patient, so their preferences should be strongly considered.

Another consideration when implementing decision-support systems is alert fatigue: the mental fatigue health providers experience when they encounter too many alerts and reminders from decision-support systems [2]. These alerts can be warnings to a health provider or prompts to enter more information. Alert fatigue currently causes physicians to override 49–96% of medication safety alerts [2]. Therefore, if there is too much information to enter about a patient's preferences, or too many warnings, it is likely that the physician will not use the decision-support system correctly [2]. This could have consequences for the patient, because a system that is not used properly could suggest the wrong decisions.

Difficulties with approximating a patient's preferences

A recent review found that the majority of patients prefer to participate in their medical decisions [5]. Therefore, it is important for a patient to work with their doctor and the decision-support system to make a decision. Every patient that enters a hospital or doctor's office seeking healthcare has their own values, background, and medical history. This results in each patient having different preferences that need to be considered by all three agents.

AI-based decision-support systems seek an approximation of the function that links a patient's health records to a specific best decision, and they assume that this approximation represents the true function well enough [1]. This assumption may be one reason for the poor performance of ML-based support systems in practice [1]: due to the complexity of each patient's situation, the approximation might not be good enough.

One significant problem that could arise from developing an AI decision-support system is that biases in the training data would be reflected in the model. The system would be trained on previous health records. Given the socioeconomic inequities that have existed and still exist in Canadian healthcare, the training data used for a support system would most likely be biased [6]. This could lead to the support system returning the best decisions only for certain populations, which would be detrimental to the health of patients, especially those in marginalized communities.

Proposal

Given the obstacles raised in the above paragraphs, the goals of my proposed method for implementing a decision-support system are as follows:

  1. Be trusted by the patient and doctor
  2. Be easy to use
  3. Help the patient and doctor make an informed decision

To achieve these goals, I think a decision-tree-based decision-support system is the best way to advise patients and doctors.

Many AI-based decision-support systems face the black box problem, which leads to a lack of trust in the support system [2]. To surmount this obstacle, I suggest building the support system around an interpretable model, such as a decision tree. This would increase trust between the support system and the doctors and patients [2]. With a decision tree, when a decision is suggested, the patient and doctor can easily understand why it was suggested. Additionally, seeing the steps the model took to suggest a decision could point out flaws in a doctor's or patient's reasoning.

Given the complexity of each patient's situation, trusting a model to learn a patient's preferences would be difficult. Using a decision-tree-based support system allows a patient to impose their own preferences on the model. For example, a node in a decision tree for determining the best course of treatment could offer a choice of recovery time. The patient could then impose their own preferences on the model by choosing the length of recovery time that best suits their situation.
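
To make this idea concrete, the minimal sketch below walks a small hand-written tree in which clinical nodes are answered from the patient's record while preference nodes, such as acceptable recovery time, are answered directly by the patient rather than inferred by a model. The tree structure, questions, and treatments are entirely hypothetical and not clinical guidance.

```python
# Sketch of a decision tree in which the patient, not the model, resolves
# preference nodes. The tree, questions, and treatments are hypothetical.

# Each internal node is (question, key, {answer: subtree}); leaves are strings.
tree = (
    "Is the condition operable?", "operable", {
        "yes": (
            "What recovery time is acceptable to you?", "recovery_preference", {
                "weeks": "Surgery",
                "months": "Surgery plus adjuvant therapy",
            },
        ),
        "no": "Non-surgical management",
    },
)

def suggest(node, record, preferences):
    """Walk the tree, answering clinical keys from the patient's record
    and preference keys from the patient's stated preferences."""
    while not isinstance(node, str):
        question, key, branches = node
        answer = preferences.get(key, record.get(key))
        print(f"{question} -> {answer}")
        node = branches[answer]
    return node

record = {"operable": "yes"}                     # from the physician's assessment
preferences = {"recovery_preference": "weeks"}   # chosen by the patient
print("Suggested option:", suggest(tree, record, preferences))
```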

An added bonus of not having the model learn a patient's preferences is that the physician will not have to answer prompts in the decision-support system about the patient's preferences. This reduces the risk of the physician experiencing alert fatigue. With reduced alert fatigue, the physician would consider the suggestions of the system instead of overriding them, as happens now [1]. Additionally, I think this would improve performance in a clinical setting, because with fewer prompts there is less room for physician error when entering information.

Implementation

Specifically, I see this decision-support system implemented in hospitals to help with decisions where both the patient and doctor are involved (e.g., considering treatment options). Based on the doctor's diagnosis, the decision-support system would create a decision tree over the possible treatment options. The patient and physician could then follow the decision tree, imposing the patient's preferences at each node. The decision the tree suggests does not have to be the decision the patient makes, but following this process could lead the patient to consider factors they had not thought of. This system achieves the goals of trust, ease of use, and informing the patient and doctor, which I think makes for a successful decision-support system.
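
A rough sketch of this workflow follows, reusing the hypothetical node format and preference-aware traversal from the earlier sketch; a pre-built placeholder tree stands in for whatever process would actually generate a tree from the physician's diagnosis.

```python
# Sketch of the proposed workflow: diagnosis -> treatment tree -> traversal
# with the patient's preferences. Diagnoses, trees, and treatments are
# hypothetical placeholders.

TREATMENT_TREES = {
    "condition_x": (
        "What recovery time is acceptable to you?", "recovery_preference", {
            "weeks": "Outpatient procedure",
            "months": "Staged treatment plan",
        },
    ),
}

def suggest(node, record, preferences):
    # Same preference-aware traversal as in the earlier sketch.
    while not isinstance(node, str):
        _question, key, branches = node
        node = branches[preferences.get(key, record.get(key))]
    return node

def recommend(diagnosis, record, preferences):
    """Look up (or, in a real system, generate) the treatment tree for a
    diagnosis and walk it with the patient's record and preferences."""
    return suggest(TREATMENT_TREES[diagnosis], record, preferences)

print(recommend("condition_x", record={}, preferences={"recovery_preference": "weeks"}))
```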

Annotated Bibliography

  1. Richard, Antoine; Mayag, Brice; Talbot, François; Tsoukias, Alexis; Meinard, Yves (12 August 2020). "What does it mean to provide decision support to a responsible and competent expert?". EURO Journal on Decision Processes.
  2. Wasylewicz, A. T. M.; Scheepers-Hoeks, A. M. J. W. "Clinical Decision Support Systems". Fundamentals of Clinical Data Science: 153–169.
  3. Kiani, Amirhossein; Uyumazturk, Bora; Rajpurkar, Pranav; Wang, Alex; Gao, Rebecca; Jones, Erik; Yu, Yifan; Langlotz, Curtis; Ball, Robyn (26 February 2020). "Impact of a deep learning assistant on the histopathologic classification of liver cancer". Nature.
  4. Tenório, Josceli; Hummel, Anderson Diniz; Cohrs, Frederico Molina; Sdepnanian, Vera Lucia; Pisa, Ivan Torres; Marin, Heimar de Fátima (13 September 2011). "Artificial intelligence techniques applied to the development of a decision-support system for diagnosing celiac disease". HHS Author Manuscripts.
  5. Chewning, Betty; Bylund, Carma; Shah, Bupendra; Arora, Neeraj K.; Gueguen, Jennifer A.; Makoul, Gregory (1 January 2012). "Patient preferences for shared decisions: A systematic review". Patient Education and Counseling. 86: 9–18.
  6. Marchildon, George; Allen, Sara; Merkur, Sherry (2020). "Canada: Health System Review" (PDF). Health Systems in Transition. 22.
