

Override Rate of Drug-Drug Interaction Alerts in Clinical Decision Support Systems: A Systematic Review and Meta-Analysis

Mariano Felisberto1,2, Geovana dos Santos Lima1,3, Ianka Cristina Celuppi1,3, Miliane dos Santos Fantonelli1, Wagner Luiz Zanotto1, Julia Meller Dias de Oliveira1,4, Eduarda Talita Bramorski Mohr1,2, Ranieri Alves dos Santos1, Daniel Henrique Scandolara1, Celio Luiz Cunha1, Jades Fernando Hammes1, Julia Salvan da Rosa2, Izabel Galhardo Demarchi2, Raul Sidnei Wazlawick1, Eduardo Monguilhott Dalmarco1,2*

1Department of Biotechnology, Federal University of Santa Catarina, Florianopolis, Brazil

2Department of Clinical Analysis, Federal University of Santa Catarina, Florianopolis, Brazil

3Department of Nursing, Federal University of Santa Catarina, Florianopolis, Brazil

4Department of Dentistry, Federal University of Santa Catarina, Florianopolis, Brazil

*Corresponding Author:
Eduardo Monguilhott Dalmarco
Department of Clinical Analysis,
Federal University of Santa Catarina,
Florianopolis,
Brazil;
Email: edalmarco@gmail.com

Received: 07-Jul-2023, Manuscript No. JPPS-23-105188; Editor assigned: 10-Jul-2023, Pre QC No. JPPS-23-105188 (PQ); Reviewed: 24-Jul-2023, QC No. JPPS-23-105188; Revised: 05-Sep-2023, Manuscript No. JPPS-23-105188 (R); Published: 12-Sep-2023, DOI: 10.4172/2320-7949.12.5.002

Citation: Dalmarco EM, et al. Override Rate of Drug-Drug Interaction Alerts in Clinical Decision Support Systems: A Systematic Review and Meta-Analysis. RRJ Pharm Pharm Sci. 2023;12:002.

Copyright: © 2023 Dalmarco EM, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided the original author and source are credited.


Abstract

Primary studies have demonstrated that, despite being useful, most Drug-Drug Interaction (DDI) alerts generated by Clinical Decision Support Systems (CDSS) are overridden by prescribers. To provide more information about this issue, we conducted a systematic review and meta-analysis of the prevalence of DDI alerts generated by CDSS and of alert overrides by physicians. The search strategy was implemented by applying search terms and MeSH headings and was conducted in the MEDLINE/PubMed, EMBASE, Web of Science, Scopus, LILACS, and Google Scholar databases. Blinded reviewers screened 1,873 records and 86 full-text articles, and 16 articles were included for analysis. The overall prevalence of alerts generated by CDSS was 13% (CI 95% 5%–24%, p-value <0.0001, I2=100%), and the overall prevalence of alert overrides by physicians was 90% (CI 95% 85%–95%, p-value <0.0001, I2=100%). This systematic review and meta-analysis shows a high rate of alert overrides, even after CDSS adjustments that significantly reduced the number of alerts. After analyzing the included articles, it became clear that CDSS that alert physicians about potential DDIs should be developed with a focus on the user experience, thereby increasing users' confidence and satisfaction, which may in turn increase patient clinical safety.

Keywords

Computerized physician order entry; Clinical decision support system; Drug-drug interactions; Medication safety; User experience; Systematic review

Introduction

Medication prescription errors are highly prevalent worldwide and are an important threat to patient safety. Although the most common results are only mild adverse effects, some cases significantly increase the risk of death. In this context, harmful Drug-Drug Interactions (DDIs), which can occur when the effects of one drug are influenced by the effects of another, are a leading cause of this risk. Research indicates that over half of adverse drug effects are directly related to prescription medication errors [1].

To reduce these risks, healthcare systems around the world are developing and implementing Electronic Health Records (EHR) with Clinical Decision Support Systems (CDSS) that warn prescribers of potential DDIs (pDDIs), thus protecting patients from adverse drug events [2]. Potential DDIs can be predicted from knowledge about the pharmacological properties of the drugs prescribed. At least 2500 drug pairs can potentially result in a DDI, although not all have relevant clinical outcomes [3].

Beyond the information that a CDSS presents to medical teams, it is also necessary to consider how these teams use the medical record and what difficulties they report. The concept of User Experience (UX) concerns how users interact with a product in light of their expectations and needs; bad experiences may reduce use of the tool or even lead to its complete abandonment. This phenomenon can also occur with CDSS, and in the case of detecting pDDIs, a negative UX can cause low adherence of healthcare professionals to the system's guidance.

It is therefore important to verify whether this issue could impact patients' clinical safety and to further develop this tool with a focus on UX, which is crucial to improving prescriber adherence. To better understand the issues related to these CDSS, we conducted a systematic search to learn from the experiences of CDSS implemented in other countries' healthcare systems. Other systematic reviews have focused on prescribing errors that can be avoided by CDSS, although none have focused on the pDDI alerts generated by these systems [4].

Given the above, this study aimed to assess the frequency of CDSS-generated pDDI alerts in EHRs and to evaluate adherence to these alerts through the alert override rate of physicians using this tool to prescribe drugs [5,6].

Literature Review

A systematic literature search was conducted on April 12, 2023, in the electronic databases MEDLINE/PubMed, EMBASE, Web of Science, Scopus, and LILACS, for articles published between 2011 and 2023. Only manuscripts published in English, Spanish, or Portuguese were included [7]. We also searched the gray literature using the Google Scholar database. A pilot search was conducted to define the Medical Subject Headings (MeSH) terms and search strategies, which were validated by three experts (CLC, JFH, and RSW). A list of reference articles (indicated by the experts) was used to define the search strategy. In the first phase, the search was adapted to the features and strategies of each electronic database. The references retrieved from the searches were organized in the EndNote web reference manager and the Rayyan QCRI (Qatar Computing Research Institute-data analytics, Doha, Qatar) online software [8].

Materials and Methods

Study design

The methodological procedure followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The review question was formulated according to the PICOS approach (problem, intervention, comparator, outcomes, and study design): "What is physicians' adherence to pDDI alerts when using electronic health records with a clinical decision support system capable of identifying drug-drug interactions in their prescriptions?" Problem: Physicians' prescriptions with pDDIs made in EHRs at a hospital and/or primary health care unit. Intervention: The use of a CDSS that can detect and report DDIs. Comparator: None. Primary outcome: The prevalence of DDI alerts in each setting. Secondary outcome: The prevalence of alert overrides by physicians. Study design: Retrospective (comparative, cross-sectional, case-control, and cohort) and prospective (comparative, cohort) studies reporting the use of a CDSS tool to identify DDIs at hospitals and/or primary health care units.

Study selection

Independent pairs of reviewers (MF, RAS, DS, and EMD) selected the articles based on the inclusion and exclusion criteria. The inclusion criterion was: physicians' prescriptions with pDDIs made in EHRs at a hospital and/or primary health care unit that uses a CDSS. The reviewers began by independently reading the titles and abstracts while applying the eligibility criteria, followed by full-text reading, again applying the eligibility criteria. A third reviewer (CLC, JFH, or RSW) cross-checked all the retrieved information in both phases [9]. The final selection was always based on the full text of the publication and the PICOS approach. We excluded studies that did not satisfy the inclusion criteria or that met the exclusion criteria, and the remaining full-text articles were included in the study. We also searched the reference lists of eligible studies for additional articles [10].

Data extraction

The articles were randomly distributed, using the Research Randomizer® software, to four authors, who extracted the data independently. Data were extracted from the text, figures, or tables and added to a standardized table. In addition, up to three email attempts were made to contact the authors of included studies to acquire missing data and obtain further clarification [11]. The extracted data were validated by a pair of reviewers (MF, RAS, DS, and EMD), and any discrepancies were resolved through discussion or by consulting a third reviewer. Afterward, an expert group (CLC, JFH, and RSW) validated the data extracted in the standardized form (Supplementary Table 1), and disagreements were resolved through discussion. The data were exported to the R software for meta-analysis and quality assessment analysis [12].

Data synthesis

A random-effects meta-analysis model, implemented in RStudio, was used for the statistical pooling of data. Dichotomous outcomes were reported as prevalence ratios (events/total) with 95% confidence intervals. Meta-analysis was performed when a minimum of three studies was available, and the pooled prevalence was presented with 95% confidence intervals. Heterogeneity was assessed with the I² statistic and Cochran's Q test (substantial heterogeneity defined as I²>50% or p<0.01); the authors also critically evaluated differences in the methodology of the articles [13].
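As an illustration only, a pooled prevalence of this kind can be obtained in R with the meta package; the review reports only that R/RStudio was used, so the package choice, study labels, and counts below are assumptions for demonstration, not the review's actual data.

library(meta)

# Hypothetical counts for three studies (events = alerts or overrides, total = prescriptions)
dat <- data.frame(
  study  = c("Study A", "Study B", "Study C"),
  events = c(120, 450, 80),
  total  = c(1000, 3200, 900)
)

# Random-effects pooling of proportions on the logit scale
m <- metaprop(event = events, n = total, studlab = study, data = dat, sm = "PLOGIT")

summary(m)  # pooled prevalence with 95% CI, plus I2 and Q test for heterogeneity
forest(m)   # forest plot analogous to Figures 2 and 3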

Quality assessment

Three review authors (MF, RAS, and DS) independently and blindly assessed the methodological quality using the JBI critical appraisal tools. For each study type, two reviewers independently and blindly answered the corresponding checklist questions with "yes," "no," or "unclear." Discrepancies were resolved by review experts (EMD and IGD). The final score for each article was calculated as the number of "yes" answers divided by the total number of quality criteria, and the score classified the quality as low (<50%), average (50%–75%), or high (>75%) [14].
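A minimal sketch of this scoring rule, written as a hypothetical R helper (the function name and example values are ours, not part of the JBI tools):

# Classify methodological quality from the count of "yes" answers
classify_jbi <- function(n_yes, n_criteria) {
  score <- n_yes / n_criteria * 100            # percentage of "yes" answers
  if (score < 50) "low" else if (score <= 75) "average" else "high"
}

classify_jbi(7, 8)   # returns "high" (87.5%)
classify_jbi(4, 10)  # returns "low" (40%)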

Results

The systematic search of the electronic databases retrieved 1,873 articles; after removing duplicates, 1,704 manuscripts remained for title and abstract screening [15]. Afterward, 86 articles were considered eligible for full-text reading. The reviewers excluded 70 studies because they met an exclusion criterion: wrong population (n=23), wrong intervention (n=5), wrong outcome (n=18), wrong study design (n=19), foreign language (n=3), or duplicate (n=2). In this systematic review, 16 studies were included for qualitative analysis and 15 for quantitative analysis (Figure 1) [16].


Figure 1: Identification of studies via databases.

Characteristics of the studies

Most studies selected for this systematic review were conducted in the United States (n=4). The remaining studies were performed in various countries, including South Korea (n=2), Japan (n=1), Israel (n=2), Germany (n=1), Switzerland (n=1), Belgium (n=1), Spain (n=1), Italy (n=1), Sweden (n=1), and the Kingdom of Saudi Arabia (n=1). All studies were conducted from 2012 to 2022; the oldest were carried out by Polidori et al. and Fritz et al. in 2012, and the most recent by Alsaidan et al. and Tukukino et al. in 2022. The follow-up time of prescriptions made by physicians using Computerized Physician Order Entry (CPOE) varied, with the shortest being just seven days and the longest 46 months. As for study design, there was one longitudinal observational study, ten cross-sectional studies, four prospective longitudinal studies, and one study that combined a retrospective cross-sectional phase with a prospective longitudinal phase [17].

Regarding the studies' research settings, two studies were conducted across three primary care hospitals, six in tertiary care hospitals, three in quaternary care hospitals, and five did not report the healthcare setting, so we counted them as general hospitals. Among these healthcare settings, most employed a commercial CDSS and drug information database (n=12), and four used a system and database at least partly developed by the public health system [18].

Main outcomes

Among the studies included in this systematic review, there were different types of reports on the use of CDSS to detect pDDIs in EHRs. Most evaluated the number of pDDI alerts generated by the CDSS and the volume of alert overrides. Even in studies in which the prevalence of alerts generated by the CDSS was below 10%, over 60% of these alerts were ignored by prescribers [19].

Three other studies only evaluated the number of pDDI alerts and reported a high prevalence. Amkreutz et al. compared two software systems (MediQ and Meona) and found that both showed a high prevalence of DDI alerts. Four other studies only evaluated the prevalence of alert overrides, and one compared three software systems (Pharmavista, DrugReax, and TheraOpt); these studies also reported a high prevalence of alert overrides. Notably, two studies evaluated the prevalence of alerts and alert overrides before and after adjusting the rules for alerts generated by the CDSS; despite reducing the number of generated alerts, these adjustments did not significantly reduce the number of overrides. In addition, two studies reported the incidence of generated alerts and the volume of acceptance before and after adjustments to the drug interaction rules of the CDSS; in these studies, the number of alerts generated after the adjustments was similar to or lower than before, and the acceptance of these alerts increased after the intervention.

The authors of the included studies differ on the reasons for the high prevalence of alerts and alert overrides; most report that system adjustments should improve alert acceptance and decrease the chance of adverse drug events. Others emphasized that these adjustments must be ongoing, made by a multidisciplinary team assembled for this purpose, and must take into account the characteristics of the hospital sector and the patient [20].

Moreover, some researchers reported that generating alerts only for the most important interactions can increase acceptance. Other studies only evaluated the generation of alerts and concluded that implementing a CDSS is essential to avoid DDIs. One of the studies evaluated the use of a password for alert overrides and, given the strong aversion it provoked, concluded that including an authentication step can increase the workload and generate alert fatigue. Finally, one study concluded that the high level of alert overrides is not related to professional fatigue but to the high number of alerts, and that the real reason should be further investigated.

Meta-analysis

Of the 16 studies in the systematic review, 15 were included in the quantitative analysis. Meta-analyses were performed separately for two subgroups: studies that reported the pDDI alerts generated and studies that reported the number of alert overrides. Ten studies were included in the meta-analysis of the prevalence of alerts generated by the CDSS, covering 21,435,597 analyzed prescriptions. The overall prevalence obtained was 13.7% (CI 95% 5.6%–24.7%, p-value <0.0001, I2=100%). Among them, one study compared two different software systems, one study presented the prevalence of alerts generated before and after CDSS adjustments, and another study consisted of two parts, one retrospective and one prospective; therefore, the respective prevalences appear separately in the meta-analysis (Figure 2).


Figure 2: Prevalence of alerts generated before and after CDSS adjustments.

Regarding physicians' adherence upon receiving a pDDI alert for their prescription, eleven studies assessed the prevalence of alert overrides; 570,776 prescriptions were analyzed, and the overall prevalence obtained was 90% (CI 95% 85.6%–95.0%, p-value <0.0001, I2=100%) (Figure 3). Among them, two studies assessed the prevalence of alert overrides before and after adjustments to the alert definitions in the CDSS, one study compared three different software systems, and another was divided into two stages, one retrospective and one prospective; therefore, the respective prevalences appear separately in the meta-analysis (Figure 3).


Figure 3: Prevalence of disregard for alerts before and after adjustments to alert definitions in the CDSS.

Quality assessment

Most studies demonstrated high methodological quality, one moderate, and one low. The items that most often indicated low methodological quality were questions 3 and 8: "(3) Was the sample size adequate?" and "(8) Was there appropriate statistical analysis?". Thus, the main issues affecting quality across studies were sample size and statistical analysis.

Discussion

Studies have shown that most of the alerts generated at the time of prescription to indicate pDDIs are ignored by physicians. Although there is no consensus on the ideal number of alerts that should be generated, it is known that many inappropriate alerts can reduce users' confidence in the system. In this context, Wickens and Dixon reviewed the literature for indirect evidence on this topic to establish the diagnostic reliability value below which automation becomes useless or even worse than performance before its implementation. Their analysis revealed that a reliability of 70% was the "cutoff point" below which automation was worse than no automation.

In this review, we evaluated the frequency of alerts generated by these CDSS when physicians prescribe medication for their patients and found a prevalence of 13.7%. This could be considered a low value, given the number of drugs that interact with each other and the difficulty prescribers face in remembering all of them. However, even though this tool was designed to help the prescriber, we observed that physicians ignored the pDDI alerts generated by these CDSS, with a prevalence of alert overrides of 90%. Thus, it is evident that these systems require adjustments so that adherence to their pDDI indications increases and the number of alert overrides decreases.

Another issue to consider is the quantity and quality of these alerts; alerts of lesser clinical importance, when generated in excess, may tire prescribers, a phenomenon known as alert fatigue. This phenomenon has been reported previously and can cause physicians to start ignoring alerts after an excess of them has been generated, increasing the risk that alerts of greater clinical importance go unnoticed. Some studies included in this review evaluated the effect of adjustments to the rules of the clinical decision support system to reduce the number of generated alerts. After these adjustments, only alerts of greater clinical importance were generated; the number of alerts decreased significantly, yet the acceptance of these alerts did not increase significantly, and physicians continued to override most of them.

After providing evidence-based information and removing minor alerts, a CDSS requires rigorous evaluation to determine the optimal balance between sensitivity and specificity to reduce patient harm; no system can achieve 100% sensitivity and specificity in a real-world setting. Filtering the lists of drug interactions in CDSS databases to keep only clinically significant pairs may mitigate the alert fatigue effect, but it can also create liability concerns for clinicians, who could perceive these systems as being at risk of making mistakes. However, using a list of DDIs based on consensus between professional societies or relevant regulatory bodies could increase confidence in these systems.

To reduce the number of alerts and increase their clinical relevance, the CDSS should not be used as an independent system but should work together with the EHR and cross-check important patient information, such as laboratory test results, comorbidities, and other clinical parameters. For example, about 30% of alerts could be avoided if only five laboratory test results were integrated into the system: potassium, white blood cell count, international normalized ratio, therapeutic drug monitoring, and glomerular filtration rate values. Another way to reduce the number of alerts is to target them based on the specialty of the prescribing physician, for instance, not generating an excessive number of renal risk alerts for an experienced kidney specialist. With this, confidence in these systems tends to increase, since physicians often ignore alerts because of their lack of specificity; alerts generated for the general population could thus be tailored if the characteristics of patients and physicians were considered.
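As a purely hypothetical illustration of this idea, and not a description of any CDSS evaluated in this review, a filtering rule of this kind could condition a renal-risk DDI alert on the patient's glomerular filtration rate and the prescriber's specialty (written in R for consistency with the analysis environment; the function name and thresholds are assumptions):

# Hypothetical rule: suppress a renal-risk DDI alert when the patient's eGFR is
# normal or when the prescriber is a nephrologist.
should_alert_renal_ddi <- function(egfr_ml_min, prescriber_specialty) {
  egfr_normal   <- !is.na(egfr_ml_min) && egfr_ml_min >= 60
  is_nephrology <- identical(prescriber_specialty, "nephrology")
  !(egfr_normal || is_nephrology)   # alert only when neither condition holds
}

should_alert_renal_ddi(45, "internal medicine")  # TRUE: reduced eGFR, non-specialist
should_alert_renal_ddi(90, "internal medicine")  # FALSE: normal renal function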

This problem of low physician adherence to alerts has been studied for over a decade. Horsky et al. conducted a literature review of experiences with CDSS in drug prescribing and of their successes, failures, and lessons learned. They argued that the positive performance of a CDSS and its benefit to physicians can be significantly reduced by poor interface design, incorrect implementation, and inadequate data maintenance, to the point of becoming a burden that contributes to medical error. In addition, the specificity and clarity of the alerts, together with the agility of responding to suggestions, are essential to changing physicians' prescribing behavior.

Westerbeek et al. conducted a study to better understand the reason for this high rate of alert overrides; the authors found that the most frequently ignored alerts were related to drug prescriptions and identified several reasons why physicians ignored these alerts. Their findings revealed that the most frequently mentioned factors were related to the usefulness and relevance of the information, ease of use, and system efficiency. Furthermore, physicians agreed that certain factors inhibited or facilitated use but had different views on how to achieve this.

For example, clinicians agreed that useful information facilitates use but had different views on what information is useful. These different points of view may be related to the doctor’s specialty, the type of care provided, the location, and the characteristics of the patients these doctors tend to see. The authors suggested that physicians be involved during the development of these systems and that user-centered design may be a suitable method.

The strengths of the present systematic review are consolidated in the following points:

•Analysis by paired blinded reviewers;
•Exhaustive literature search;
•Data validation;
•Expert consultation;
•Quality assessment.

Nonetheless, this study also had some limitations, including unclear outcomes in some of the analyzed articles, which therefore had to be estimated by this review's authors. In addition, the rules for the alerts issued by the systems varied considerably, and most studies did not provide information about them.

Conclusion

Our results show that prescribers ignore most of the alerts generated by these clinical decision support systems, with an overall override rate of 90%. Even after adjustments were made to these systems to reduce the number of alerts and avoid professional fatigue, the number of alert overrides did not decrease satisfactorily. Therefore, these systems should be developed using UX design techniques, increasing users' confidence in and satisfaction with the CDSS and possibly decreasing alert overrides, thereby improving the clinical safety of the treatments offered to patients. In this systematic review with meta-analysis, we showed that although the CDSS is recognized as an important tool to prevent adverse events related to drugs prescribed in health units, this instrument has been underused, wasting users' time and money.

Registration and Protocol

This systematic review protocol was based on PRISMA-P and registered in the PROSPERO International Prospective Register of Systematic Reviews under registration number CRD42021261967.

Support Funding

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Coordination for the Improvement of Higher Education Personnel)–Brazil (CAPES)–finance code 001, and by the Brazilian Ministry of Health (e-SUS PHC project, stage 4). RSW and EMD are productivity fellows in technological development and innovative extension of CNPq.

Author Contributions

MF, GSL, ICC, JMDO, ETBM, JSR, CLC, and JFH performed the bibliographic search, and IGD and EMD validated it. MF, RAS, DHS, and ICC participated in elaborating the data extraction table. EMD, IGD, and RSW analyzed and critically reviewed the data. MF and JMDO performed the meta-analysis. MF, EMD, and IGD wrote the general text, and each stage was submitted to the co-authors for analysis, review, and criticism. MF, RAS, DHS, JMDO, ETBM, and IGD performed the quality assessment. IGD and EMD critically reviewed the drafts and subsequent steps. All authors approved the final version of the manuscript for submission. All authors had full access to all data in the study and assumed responsibility for the integrity of the data and the accuracy of the data analysis.

Conflicts of Interest

The authors declare no conflict of interest.

References