Methodological Review
Interface design principles for usable decision support: A targeted review
of best practices for clinical prescribing interventions
Jan Horsky a,b,c,⇑, Gordon D. Schiff b,c, Douglas Johnston f, Lauren Mercincavage g,
Douglas Bell d,e, Blackford Middleton a,b,c
a Clinical Informatics Research and Development, Partners HealthCare, Boston, United States
b Division of General Medicine and Primary Care, Brigham and Women’s Hospital, Boston, United States
c Harvard Medical School, Boston, United States
d UCLA Department of Medicine, Los Angeles, United States
e RAND Health, Santa Monica, CA, United States
f RTI International, Waltham, MA, United States
g Westat, Cambridge, MA, United States
Article info
Article history:
Received 26 February 2012
Accepted 6 September 2012
Available online 17 September 2012
Keywords:
Clinical decision support systems (CDSSs)
Electronic health records systems (EHRs)
System design and development
Software usability
Human–computer interaction (HCI)
Patient safety
Abstract
Developing effective clinical decision support (CDS) systems for the highly complex and dynamic domain
of clinical medicine is a serious challenge for designers. Poor usability is one of the core barriers to adoption and a deterrent to routine use. We reviewed reports describing system implementation efforts and collected the best available design conventions, procedures, practices and lessons learned in order to provide developers with a short compendium of design goals and recommended principles. This targeted review focuses on CDS related to medication prescribing.
Published reports suggest that important principles include consistency of design concepts across networked systems, use of appropriate visual representation of clinical data, use of controlled terminology,
presenting advice at the time and place of decision making and matching the most appropriate CDS interventions to clinical goals.
Specificity and contextual relevance can be increased by periodic review of trigger rules, analysis of
performance logs and maintenance of accurate allergy, problem and medication lists in health records
in order to help avoid excessive alerting.
Developers need to adopt design practices that include user-centered, iterative design and common
standards based on human–computer interaction (HCI) research methods rooted in ethnography and
cognitive science. Suggestions outlined in this report may help clarify the goals of optimal CDS design
but larger national initiatives are needed for systematic application of human factors in health information technology (HIT) development. Appropriate design strategies are essential for developing meaningful decision support systems that meet the grand challenges of high-quality healthcare.
© 2012 Elsevier Inc.
Contents
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1203
2. Background. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1203
2.1. Retrieval and selection of articles considered in this review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1204
2.2. Document organization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205
3. Recommended attributes of clinical decision support in EHR systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205
3.1. Consistency of design concepts, visual formats, and terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1206
3.2. Flexibility of interaction, workflow integration, and rapid alert-response action. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
3.3. Presentation of advice in a way that cultivates trust over time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
3.4. Advice: assessment, suggestion and recommendation – not commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
3.5. Maintenance and re-use of intermediate variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1207
1532-0464/© 2012 Elsevier Inc.
http://dx.doi.org/10.1016/j.jbi.2012.09.002
⇑ Corresponding author at: Clinical Informatics Research and Development, Partners HealthCare, 93 Worcester St., Suite 201, Wellesley, MA 02481, United States.
Fax: +1 781 416 8771.
E-mail address: [email protected] (J. Horsky).
Journal of Biomedical Informatics 45 (2012) 1202–1216
Open access under CC BY-NC-ND license.
3.6. Periodic review of system inferences and human actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
3.7. Clinical data standards, interoperability, integrity and robust architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
3.8. Innovation and third-party developers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1208
4. Decision support for electronic ordering and medication prescribing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209
4.1. Consistency of terms and text formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209
4.2. Concise and unambiguous language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
4.3. Appropriate representational formats of data and distinctive entry screens. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
4.4. Organization of orders by problem and clinical goal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
4.5. Clinical context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1210
4.6. Interventions embedded in forms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
4.7. Complex order forms and calculators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
5. Design specifics for alerts and reminders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
5.1. Interruptive vs. non-intrusive alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1211
5.2. Display and organization of reminders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
5.3. Filtering of frequent interruptive alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
5.4. Revision of alert trigger rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1212
5.5. Prompts for patient record maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
6. Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
7. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1213
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1214
1. Introduction
Health care professionals have been increasingly migrating
from paper patient charts to electronic health records (EHRs) with
clinical decision support (CDS) [1,2]. EHR systems have been
shown to make medical care safer for patients and of higher quality, at least in some implementations, primarily through the use
of CDS [3–8]. The expected benefits of health information technology (HIT) gave rise to national legislation intended to increase
the use of EHRs by clinicians, namely the Health Information
Technology for Economic and Clinical Health (HITECH) part of
the American Recovery and Reinvestment Act of 2009 [9]. As a result, at least one form of CDS is required to be functional in EHRs
by the end of 2012. The U.S. Department of Health and Human
Services further provides financial stimuli for the ‘‘meaningful
use’’ of certified technology to achieve health and efficiency goals
in the first phase of the Medicare and Medicaid EHR Incentive
Program [10].
Developing highly effective decision support, however, represents a serious challenge. Design of alerts, reminders, and other
types of intervention requires detailed analysis of numerous factors that affect their accuracy, specificity, clarity, clinical relevance
and, in turn, the ability of clinicians to take timely and appropriate
actions in response. Teams of professionals engaged in developing
EHR human interfaces often include informaticists, clinical experts
and sometimes human–computer interaction (HCI) specialists, but
the complexity of the task requires a diverse set of skills and
training they may collectively lack. For example, a user-centered
design process includes assessments of human performance,
ethnographic observations, analyses of interviews, think-aloud
studies, cognitive task analyses and descriptions of mental models
in addition to medical and computer expertise. A compendium of
best practices, methods and design recommendations may help
in providing reliable and evidence-based guidance.
To illustrate the design complexity, evidence shows that to
maximize effectiveness, advisory information needs to be delivered to the appropriate clinician at the time he or she is making
a decision, has to include content that is relevant in the context
of the clinical task in a concise form that allows quick and unambiguous interpretation, and must provide response options whose
effects are clearly understandable [11,12]. The visual salience of
interventions must also be carefully calibrated according to importance and work environment conditions [13]. Alerts should not
create unnecessary distractions but still be distinctly noticeable
when warning about events that may negatively affect patient
safety.
However, current EHRs often fall short of delivering to the clinician the full benefit of readily available, compiled and tailored medical knowledge about the patient at hand. They fail to achieve high performance levels because their human interfaces are not optimally designed for efficient interaction, do not display medical data in appropriate context or in formats that lower the cognitive effort required to interpret them correctly, or because they are not integrated well into clinical environments and personal workflows [14]. Poor usability continues to be one of the leading obstacles to CDS adoption and a deterrent to routine use in clinical
practice [15].
We reviewed published reports describing EHR and CDS system
implementation efforts and collected the best available design conventions, procedures, practices and lessons learned in the process.
This empirical and sometimes anecdotal evidence was evaluated,
interpreted and synthesized with established HCI principles and
complemented by recommended design practices from software
usability literature. The focus of this review is CDS for medication prescribing; although it also discusses the design of interventions in systems related to this clinical task, the lists of
recommendations are not exhaustive.
2. Background
Decision support attempts to assist clinicians with diagnostic
and care decisions by displaying relevant and often patient-specific
information at various points in the course of care. The most recognizable type of intervention is an alert triggered when conditions
encoded in clinical rules and algorithms are met; for example, if
a medication for which a patient has a recorded allergy is prescribed. A reminder may add temporal considerations to the logic, prompting for due immunizations or periodic laboratory
tests. Alerts and reminders (terms sometimes used interchangeably) are the most common forms of decision support, although
medical and procedural knowledge may also be provided to clinicians in the form of electronic guidelines, order sets, calculators,
reports and dashboards, documentation templates and diagnostic
or clinical workflow support tools. Selected common types of decision support interventions are shown in Table 1 [12,16,17].
One way to classify CDS systems is according to the way clinicians interact with interventions [18,19]. Passive forms of advice
are initiated on demand (or ‘‘pulled’’) by clinicians at the time of
their choosing by clicking on links that lead to pages and static
documents (e.g., electronic guidelines) or on algorithmic ‘‘infobuttons’’ that formulate queries with specific data from a patient record and retrieve contextual information from remote databases.
Although such pull interventions are minimally disruptive to workflow, clinicians must first recognize their need for advice. Active interventions such as alerts, on
the other hand, are ‘‘pushed’’ by the system to the clinician automatically for real-time critiquing of clinically significant actions
(e.g., ordering), warning about events and data that indicate a current or imminent negative change in the patient state (e.g., abnormal laboratory results), as reminders about due care (e.g., annual
tests) or to increase regulatory compliance, quality assurance and
administrative initiatives [20]. However, their most common function is to assist with medication prescribing by checking dose and
frequency values and by monitoring interactions with other drugs,
diseases and allergies [21].
Interruptions of ongoing activity and diversion of attention
from clinical work may be appropriate for warnings about high-severity conditions but may quickly become an irritating, hazardous
disruption when used inappropriately for frequent alerts about
minimally important, contextually irrelevant or false-positive
events [22–24]. Alert design that reflects and controls the level of
intrusiveness can mitigate the negative effects of excessive alerting
and work interruption by adjusting its saliency to risk severity.
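The principle of matching alert saliency to risk severity can be sketched as a small policy table. The tier names below echo the three-tier example ("critical," "significant," "caution") given later in this article; the presentation attributes are illustrative assumptions, not validated settings.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    # Tier names follow the three-tier example used in this article.
    CRITICAL = 3
    SIGNIFICANT = 2
    CAUTION = 1

@dataclass(frozen=True)
class AlertPresentation:
    interruptive: bool      # modal dialog that blocks the workflow
    requires_reason: bool   # clinician must document an override reason

# Only the highest tier interrupts; lower tiers are shown passively.
# These settings are illustrative assumptions, not recommendations.
SALIENCE_POLICY = {
    Severity.CRITICAL:    AlertPresentation(interruptive=True,  requires_reason=True),
    Severity.SIGNIFICANT: AlertPresentation(interruptive=False, requires_reason=True),
    Severity.CAUTION:     AlertPresentation(interruptive=False, requires_reason=False),
}

def present(severity: Severity) -> AlertPresentation:
    """Look up how an alert of the given severity should be displayed."""
    return SALIENCE_POLICY[severity]
```

Keeping the mapping in one table lets administrators adjust salience per tier over time without touching the trigger logic itself.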
An indirect kind of decision support fosters optimal decision-making in a more subtle manner by focusing attention on specific
information, encouraging systematic consideration of data and
possibly influencing prioritization of actions [16]. A substantial
amount of medical knowledge can be embedded in the design of
information displays, entry modules, templates, order sets and in
the visual representation of clinical data [25,26] that help clinicians better comprehend a patient’s current physiological state
[27]. For example, relevant test results, medications, allergies and
problems may be displayed within medication order screens to
clarify the clinical context and to add justification to suggested
modifications. Templates and forms may automatically populate
fields with data from the patient record or from clinical calculators
and serve as checklists to minimize errors of omission. Order sets
give implicit advice by showing a selection of the most appropriate
interventions for specific therapeutic, diagnostic or procedural
tasks. Medical knowledge embedded in the design tends to recommend actions without overly prescriptive directives and allows clinicians to freely consider relevant information during medical
reasoning.
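The idea of knowledge embedded in order sets and templates can be illustrated with a minimal data-structure sketch. The indication, item names and doses in the example are hypothetical illustrations only, not clinical guidance.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class OrderItem:
    name: str
    default_dose: Optional[str] = None  # a sensible default acts as implicit advice
    preselected: bool = False           # checked by default in the order set

@dataclass
class OrderSet:
    """Minimal sketch: the grouping and defaults themselves encode knowledge."""
    indication: str
    items: List[OrderItem] = field(default_factory=list)

# Hypothetical admission order set; all contents are illustrative only.
cap_admission = OrderSet(
    indication="Community-acquired pneumonia (admission)",
    items=[
        OrderItem("Blood cultures x2", preselected=True),
        OrderItem("Chest X-ray, PA and lateral", preselected=True),
        OrderItem("Ceftriaxone IV", default_dose="1 g every 24 h"),
        OrderItem("Azithromycin IV", default_dose="500 mg every 24 h"),
    ],
)

def preselected_orders(order_set: OrderSet) -> List[str]:
    """Items the clinician sees already checked, serving as implicit recommendations."""
    return [item.name for item in order_set.items if item.preselected]
```

The clinician remains free to uncheck, edit or add items, so the advice recommends without commanding, as described above.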
The quality of HIT design and human–computer interaction
characteristics of its interfaces are among the most decisive factors
determining the effect of CDS on care and patient safety by influencing the adoption rate and routine use by clinicians [28]. It is
essential to engage clinicians in the design process from the earliest stages and frequently evaluate functional, interactive, cognitive
and perceptual characteristics of the system so it can optimally
support clinical work [29,30]. However, there is scant distilled,
up-to-date CDS design guidance that can be referenced and readily
used today by commercial and academic development institutions.
Specific best practices and standards of design and usability testing
of HIT products are not readily available [31] and many vendors
therefore use their own, proprietary guidelines that vary widely
in recommendations and the quality of research evidence.
2.1. Retrieval and selection of articles considered in this review
Design lessons and best practices discussed in this report were
derived from peer-reviewed and trade literature that had the
implementation of CDS rather than its design as the primary topic
because of the paucity of published reports directly describing system interface design. We conducted a targeted literature review to
identify and evaluate statements about design by clinicians and
implementers. Those judged to be consistent with widely accepted
HCI and usability principles were then reformulated as recommended practices. The search scope was limited to the last
15 years to focus on issues relevant to contemporary technology.
We searched PubMed and Web of Science databases for articles
containing keywords and free-text terms that we categorized into
four thematically related groups described below (Systems, Activity, Usability, Cognition). We then combined the sets of references
resulting from searches in each category in the final search with
the AND operator. Search terms were entered as keywords in all
PubMed fields joined with the OR operator although a subset related to types of HIT in the Systems category was used to search
Medical Subject Headings exclusively. In addition, we ran similar
Table 1
Common types of decision support interventions.
Intervention Description
Alerts, reminders Provide real-time notification of errors, potential hazards or omissions related to interactive events (e.g., the submission of a new order), in
response to new data (e.g., laboratory results) or to the passage of time (e.g., reminders about due care)
Ordering support Real-time critique of medication and procedure orders may suggest drug alternatives or appropriate dosing based on patient characteristics
(kidney and liver function, gender, age, weight), alert to drug, allergy and food interactions, and to duplicate therapy or formulary adherence.
Order sets for a specific purpose such as a hospital admission, problem-oriented ambulatory visit or a medical condition may aggregate orders for
procedures, radiology, laboratories and medications. Complex ordering tools may include forms with built-in calculators and guided dosing (e.g.,
for total parenteral nutrition, weight-based heparin, etc.)
Guidelines Clinicians may browse electronic documents or obtain patient-specific recommendations with infobuttons that use the patient record to perform
context-based searches in one or more knowledge bases and summarize the retrieved information for automatic display. Automatic critique of
medication orders can also be derived from institutional or national guidelines
Forms, templates Guide documentation, care planning, the ordering of tests and procedures, etc. They can be embedded in screens with contextual patient
information, activated on demand or bypassed when not needed
Clinical context Clinical reasoning is less cognitively demanding when data are aggregated and presented in formats that visually emphasize relationships and
dependencies, allowing fast perceptual judgments. Complete sets of relevant information on one screen also reduce the likelihood of omission
errors. Medication order screens may include allergies, relevant lab results with trends, corollary orders, formulary status and costs. The context is
embedded in layout and therefore is minimally intrusive
Clinical pathways Adherence to best practices can be supported by allowing step-wise processing of complex protocols over time and multiple patient encounters
such as pneumonia admissions or multiday and multi-cycle chemotherapy treatments. These interventions may be more or less intrusive
depending on clinical task complexity
searches in PsycINFO, Books@Ovid and ACM Digital Library
databases and added the results to the final set of references.
Systems – Medical Subject Headings: Medical Records Systems,
Computerized; Decision Support Systems; Clinical Hospital Information Systems; Reminder Systems; Ambulatory Care Information
Systems; Clinical Laboratory Information Systems; Clinical Pharmacy Information Systems; Decision Making, Computer-Assisted;
Electronic Health Records; Electronic prescribing.
Keywords: EHR; electronic health record; electronic patient record; electronic medical record; electronic prescribing; clinical
computing; CPOE; computerized prescriber order entry; computerized physician order entry; provider order
entry; electronic ordering; computerized ordering; CCDS; CDS;
decision support; clinical decision support; computerized clinical
decision support; alert system.
Activity: design; development; implementation.
Usability: usability; usability principles; HCI; human–computer
interaction; CHI; computer–human interaction; information design; cognitive engineering; adaptive display; cognitive workload;
cognitive effort; UI; user interface; human interface; user-centered;
human-centered; cognitive analysis; cognitive task analysis.
Cognition: Clinical decision-making; clinician decision making;
medical decision making; medical reasoning; physician decision
making; provider decision making; physician reasoning; provider
reasoning.
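The combination logic described in this section (OR within each thematic group, AND across the four groups) can be sketched as straightforward query-string construction. The abbreviated term lists are small samples drawn from the categories above, not the complete search strategy.

```python
def or_group(terms):
    """Quote each term and join the group with OR, as in a PubMed query box."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Abbreviated samples of the four thematic categories described in the text.
systems = ["electronic health record", "CPOE", "clinical decision support"]
activity = ["design", "development", "implementation"]
usability = ["usability", "human-computer interaction", "user interface"]
cognition = ["clinical decision-making", "medical reasoning"]

# OR within each category, then AND across the four resulting sets.
query = " AND ".join(or_group(g) for g in (systems, activity, usability, cognition))
print(query)
```

Each parenthesized group broadens recall within its theme, while the AND operators restrict the final set to articles matching all four themes at once.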
Results from all searches were aggregated into a final set of
1544 references. Three researchers (JH, DJ, LM) reviewed the abstracts of these articles for comments about design or implementation of CDS and 421 relevant articles were entered into a database.
Two researchers (JH, DJ) further reviewed these to identify content
that could be extracted as potential recommendations and selected
112 articles for analysis and inclusion in this report. As the primary
focus of most reviewed articles was not a direct description of system design, our inclusion criteria were limited to the identification
of concepts related to human interfaces, usability, safety and positive and negative findings from implementation efforts that referred to system design. Recommendations were discussed in
conference calls and revised by manuscript reviews until consensus was reached. Several articles by authors identified by the
search who published further work on relevant topics in 2011 were
also reviewed and added.
2.2. Document organization
The report is divided into three parts describing different aspects of CDS design and organized approximately from general to
more specific. Section 3 contains advice related to interface
usability, visual attributes of data presentation, maintenance and
interoperability of entire EHR systems with decision support. Section 4 is focused on the design of medication prescribing interventions and Section 5 provides design specifics for alerts and
reminders. Recommendations derived from general usability concepts and heuristics may be applicable to more than one system
and individual sections may therefore have some content overlap.
3. Recommended attributes of clinical decision support in EHR
systems
Clinical decision support is typically functionally embedded in
electronic health records and medication prescribing modules. A
conceptual division between CDS and the application that invokes
it is not always clearly defined [32]. The whole system has generally consistent visual and functional characteristics such as screen
layout conventions, buttons, dialog boxes, entry modules and other
interface artifacts across all component modules. The discussion of
optimal CDS design attributes in the following sections emphasizes
decision support interventions but is applicable to clinical information systems in their entirety.
The goal of having vendors and institutional developers adhere
to common design approaches is to produce systems with analogous sets of basic characteristics that are derived from human–
computer interaction research and are based on proven usability
principles routinely followed in other domains [33]. Clinicians
could then expect reasonable continuity in the visual presentation
of identical or similar medical data and consistency of interactive
behavior across the systems they may be using concurrently, even
if developed by different vendors. Implementers and system architects could then rely, in integration planning, on the assumption that modifications can be made to conform the system to the constraints and demands of specific clinical environments. For example, systems may always allow administrators to control alert salience according to message importance and the implementation environment, or to adjust the rules for alert activation over time to achieve and maintain an optimal
level of performance.
Usability measures such as effectiveness, efficiency and subjective satisfaction characteristics should be included as criteria in the
procurement of EHR systems to help in the selection process [34].
Table 2 lists questions clinicians may pose to vendors when considering a purchase, with further details elaborated in the sections
below.
The largest effort to date at setting design standards, guidelines
and providing toolkits for developers is the Common User Interface
(CUI), a joint project of the National Health Service (NHS) in the
Table 2
Questions for vendors about general system attributes related to usability and safety.
Attribute Question to a vendor
Clinical setting Intended for ambulatory, general inpatient, emergency or critical care?
Are intervention types matched to each setting?
Can modifications be made to accommodate local variations?
Clinician roles Are physicians, nurses, administrative staff affected differently?
How extensive is required training for each role?
Workflow changes Are established professional role responsibilities redistributed?
What may be the unintended consequences of a new process?
Intrusiveness Can saliency level be adjusted (tiered) according to severity?
Missing, erroneous EHR data Does the system ‘‘degrade gracefully?’’
How is alert frequency and advice accuracy affected by gaps in patient records?
Activity logs Can system and user actions be accessed and analyzed?
Active, passive interventions Are they appropriate for the intended clinical environment and tasks?
Latency of advice Is the expected time lag acceptable in given clinical context?
Access to data services Can third-party additions be integrated and connected to services?
Consistency Are nomenclatures, terminologies, controls and design concepts consistent?
United Kingdom and Microsoft [35]. It provides guidance on clinical documentation, the entry and display of medical concepts in
forms and their matching to SNOMED CT terms, consistent navigation, icons and semiology, formatting and layout of medication
items, safe identification of patients and on other interface design
areas. Similar initiatives are under way in the United States at the
National Institute for Standards and Technology (NIST) [36,37],
Agency for Health care Research and Quality (AHRQ) [38–40],
Health Information and Management Systems Society (HIMSS)
[12,28,41,42] and elsewhere.
System deficiencies that derive from inadequate interface design, and the unintended consequences and errors that propagate through networked systems built according to different or discordant design principles, represent a serious and
unresolved set of problems [43]. These deficiencies may lead to
the weakening of the safeguards that HIT provides against medical
error and may even increase risk for patients under certain conditions [44]. An analysis of patient safety incidents reported over two
years by clinicians in one hospital found that human factors accounted for half of the problems in which information technology
was a significant factor [45]. Evidence from published reports on
the incidence of HIT-related medical errors suggests that deficient
interface design may cause or contribute to decreased cognitive
performance of clinicians [46,47], complicate medication dosing
(for example, of potassium chloride) [48–50], engender unsafe
workarounds [51,52], fail to reduce the number of adverse drug
events in an EHR lacking appropriate dosing decision support
[53–55], facilitate medication error risk [56], exacerbate poor responses to medication safety alerts (in a simulated study) [57]
and increase duplicate medication order errors [58].
Evidence from over three decades of research and design of devices and software in safety-critical industries such as nuclear
power, military and commercial aviation is extremely encouraging
in showing that significant advances in safety can be achieved
through human factors methods, cognitive engineering and strong
usability practices in design. The United States government now requires human factors analysis, evaluation and testing for all procured systems and issues directives and guidelines for best
practices applicable to defense (US Army Manpower and Personnel
Integration Best Practices) [59], nuclear power (HFE Program Review Model by the Nuclear Regulatory Commission) [60], commercial aviation (Human Factors Design Standard by the Federal
Aviation Administration) [61] and medical devices (Pre-Market Approval of Medical Devices Best Practices by the Food and Drug
Administration). A summary of recommended design principles
and desirable EHR system characteristics discussed in this section
is in Table 3.
3.1. Consistency of design concepts, visual formats, and terminology
An effective design convention to minimize errors of commission (e.g., the confusion of screen objects that look similar) and
to increase the speed of target recognition in visual searches by
allowing quick perceptual judgments is to make conceptually similar items share salient attributes and dissimilar items appear
clearly distinct for easy differentiation [62]. For example, order entry screens and modules for all fluids should have the same background color and layout of fields that are distinctively different
from orders for intravenous medication and medicated drips.
Visual cues (e.g., font, color, placement) indicating abnormal test
results need to be identical across all interconnected systems so
that clinicians do not have to interpret their meaning but can use
faster perceptual judgment. Similarly, the method of responding
to interventions needs to be uniform with contextual information
and navigational controls presented consistently in all connected
systems [63]. Text entries in forms and tables should always be aligned into basic geometric forms that visually connect items belonging to the same group and create easy-to-follow hierarchies of related entries [64].
All terms, including the names of laboratory tests, procedures
and order sets, need to be used consistently across menus, lookup
tables and in advisory messages generated by decision support
interventions to minimize errors of misidentification and delays
in interpretation and visual lookups. A nomenclature of conceptual
categories should also be consistent and unambiguous. For example, if alerts are classified into three severity tiers such as ‘‘critical,’’
‘‘significant’’ and ‘‘caution,’’ the same terms need to be used in all
alerts, messages and textual references. Similarly, consistent color
coding, use of highlights and fonts and visual hierarchies needs to
be maintained within all modules of a system and preferably
across all interoperable systems [65,66].
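Consistent severity nomenclature is easiest to maintain when the controlled terms are defined once in code and every label is derived from that single definition. A minimal Python sketch under that assumption (the tier names follow the example above; the `SEVERITY_STYLE` mapping and its colors are purely illustrative):

```python
from enum import Enum

class AlertSeverity(Enum):
    """Single controlled vocabulary for alert tiers, shared by every module."""
    CRITICAL = "critical"
    SIGNIFICANT = "significant"
    CAUTION = "caution"

# Hypothetical display properties keyed by tier, so that color coding and
# wording stay identical in every alert, message and textual reference.
SEVERITY_STYLE = {
    AlertSeverity.CRITICAL: {"label": "critical", "color": "#C0392B"},
    AlertSeverity.SIGNIFICANT: {"label": "significant", "color": "#E67E22"},
    AlertSeverity.CAUTION: {"label": "caution", "color": "#F1C40F"},
}

def render_alert_banner(severity: AlertSeverity, message: str) -> str:
    """Format an alert header using only the controlled severity terms."""
    style = SEVERITY_STYLE[severity]
    return f"[{style['label'].upper()}] {message}"
```

Because every module renders banners through one function, a renamed tier changes consistently everywhere rather than drifting across screens.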
Prompts and instructions should employ a consistent wording
style. The infinitive construction, which starts with the expression ‘‘To [accomplish the stated goal], [do this]’’ is effective in
preventing errors associated with acting in response to a prompt
before reading the consequence of the action. For example,
rather than stating ‘‘Press the Continue button to override,’’ the
explanatory message should read ‘‘To override, press Continue’’
[65].
Consistent use of identical design concepts within and across
systems is essential for shortening the time required to gain
interaction proficiency, and for lowering cognitive effort and
mental fatigue that contribute to eventual misses of significant
warnings or to overlooking data indicating abnormal and critical
patient states.
Table 3
Summary of desirable system attributes.

Attribute | Recommendation | References
Perceptual grouping and data relationships | Same visual attributes for related items; distinct appearance of dissimilar objects | [62–64,66]
Consistent terminology | Tests, procedures, orders and sets, alerts and menus should use consistent language | [65,91,98]
Consistent wording "To [goal], [do this]" | Desired outcome comes first, followed by the action | [65,67]
Acceptable density of information on screen | Segment long lists and tables into short groups; use blank space to aggregate and separate | [11,67,98]
Workflow integration | Appropriate sequence of screens; context, type and timing of advice matched to the clinical task | [6,11,12,19,64,67,68,73]
Cultivation of trust | Avoid "black box" advice; maintain high specificity, context and justification | [11,69–73]
Advice rather than commands | Highlight potential problems and safety hazards; suggest actions, not directives | [74–77,79]
Alternatives rather than stops | Suggest alternatives to audited actions; provide direct links to carry them out | [6,11,78]
Intermediate states | Maintain clinical state variables derived from aggregated data; use them to refine decision logic | [73,80]
System logs | Allow access to logs; analyze periodically to increase specificity and sensitivity of alert rules | [19,49,81]
Interoperability and data standards | Normalize "source of truth" data to common representational formats; reconcile multiple medication, problem and allergy lists | [82–86]
Patient record maintenance | Facilitate manual corrections or additions of data in the EHR as part of response actions to alerts | [6,17,19,74,84]
Graceful degradation | Missing, outdated, erroneous or contradictory data must not result in incorrect advice or lower safety | [89]
Third-party access | Allow certified companies access to data services and interface development; separate code and content | [90]
J. Horsky et al. / Journal of Biomedical Informatics 45 (2012) 1202–1216

3.2. Flexibility of interaction, workflow integration, and rapid alert-response action
A primary concern of many clinicians is the speed of completing
tasks. Thus, they need to rapidly receive advice and take an appropriate action at a convenient point in the workflow without extraneous effort or delay [11]. While the acceptable screen transition
time is well under a second, the overall perceived difficulty of
interaction is directly related to the number of clicks required to
respond, the complexity of menus to navigate when entering
coded data and the time and cognitive effort required to find items
on the screen (e.g., finding the most recent lab values in a long entry list).
Interaction complexity can be alleviated by carefully managing
the density of screen information, segmenting long tables and lists
into sections separated by headers that can guide visual searches
more directly to the target item (i.e., allowing users to skip entire groups without reading each item) [67] and anticipating follow-up steps
in common workflows by providing shortcuts or prioritizing the
placement of screens and modules in expected sequences. For
example, a reminder to administer a scheduled immunization
should be followed, after acceptance, by an ordering screen that
contains a review of other immunizations that may be due, allergies, medications or other contextual patient data, or a link to schedule an appointment if the vaccine is not given at the present time.
Tight integration with clinical workflows [6] and presentation of
relevant advice at the time and place of decision making are key
to effective performance [19,68]. Clinicians are more likely to accept and routinely use an electronic system that meets their expectations of flexibility, individuality of advice, and reliability [69].
Optimal CDS performance is thus determined by appropriate
matching of interventions to specific clinical goals, workflow contexts and clinical environments. Matching criteria and further details on aligning decision support with clinical goals can be found
in the HIMSS CDS Implementation Guide [12]. The authors describe
a six-stage model that consists of identifying stakeholders and
determining CDS goals, cataloging existing HIT infrastructure,
selecting interventions for workflow objectives, validating the proposed plan, testing interventions and finally evaluating the effects
of interventions on care.
3.3. Presentation of advice in a way that cultivates trust over time
Clinicians must be convinced that the advice is accurate and relevant and will contribute to improving prescribing safety, quality
of documentation and efficiency [70]. Visual, cognitive and interaction characteristics of CDS interventions can determine whether
clinicians will regard CDS as effective and useful clinical tools or
rather as unwelcome and irrelevant intrusions [71]. For example,
high specificity and relevance of alerts is crucial for developing
confidence in the ability of the system to make accurate and comprehensive inferences about a patient’s medical state, take into account appropriate clinical context, present the advice at the
appropriate level of urgency and suppress irrelevant messages
[72]. Interventions should unobtrusively, but effectively, remind
clinicians of things they have truly overlooked, support corrections,
and present key data and knowledge in appropriate context so the
right decisions are made in the first place [73].
Systems need to avoid the impression of a ‘‘black box’’ giving
advice that cannot be subjectively evaluated. An explanation of
medical logic, including formulas for calculating values, should
be accessible on demand so that the justification for alerting is
transparent and verifiable. A drug alert suggesting a dosing change,
for example, may include a link to additional details on why the
advisory was shown, further supporting evidence from academic
literature and guidelines, and a contact for the local authorities responsible for explaining the CDS rationale [11].
Frequent selective overrides of specific alerts by many clinicians
may indicate several kinds of design problems: a poorly designed response mechanism (e.g., an inconvenient way to accept the advice), flaws in the medical logic or inaccuracies in the patient record. The assumption that the problem of frequent overrides lies with ‘‘noncompliant clinicians’’ should be resisted.
A periodic review of underutilized or ignored advisory interventions may point to gaps in rules logic or to inadequately updated
patient data that trigger irrelevant alerts. For example, there may
be medications in the patient record that have been discontinued
or those with their course already completed [74]. A clinician
may be aware of that fact and therefore correctly consider the advice inappropriate. Such events work against cultivating trust in
CDS.
A broader strategy to avoid distrust in the relevance of decision
support is to avoid recommendations that are controversial.
Rather, advice should be given only for aspects of care in which
there is little disagreement on appropriate management [75].
3.4. Advice: assessment, suggestion and recommendation – not
commands
Health information technology, to a certain degree, codifies the expert knowledge and problem solving that are core competencies of healthcare professionals. Some clinicians may perceive computer-generated, overly prescriptive advice as infringing on their sense of professional autonomy [76]. Systems should therefore formulate advisory messages so that they highlight potential or actual problems that require attention and suggest therapeutic opportunities rather than impose strict, inflexible and unsolicited dictates [77]. However, merely giving an assessment without
recommending an action and providing a convenient way to either
carry out or disregard it is generally not an effective way to change
behavior [6,78]. For example, an intervention may recommend that
a clinician prescribe an antidepressant and include a justification
with supporting research evidence rather than simply suggest that
the patient is depressed.
Physicians may strongly resist a suggestion not to carry out an
action when an acceptable alternative is not offered [11]. However,
alerts are also often perceived only as cues for considering other
options rather than as highly reliable information sources whose
recommendations should always be followed [77].
Suggesting equally effective and appropriate alternative actions
is complex and not always possible. In medication prescribing, for
example, the recommended options need to include the alternative
drug and its dose and frequency appropriate for the patient’s clinical context. As the actual indications may differ from those considered by the decision logic, contextual information from the record
needs to be evident to support the relevance of the advice [79]. For
example, the last measured serum creatinine levels, presence of
other interacting drugs and the standard dose may be included
on the alert as contextual information.
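The principle that advice should pair an assessment with an actionable alternative and supporting context can be made concrete in the alert's data model. A hypothetical sketch (the class and field names are our own, not drawn from any cited system):

```python
from dataclasses import dataclass, field

@dataclass
class AlternativeAction:
    """A concrete substitute order the clinician can carry out directly."""
    drug: str
    dose: str
    frequency: str

@dataclass
class PrescribingAlert:
    """An advisory that pairs its recommendation with actionable context."""
    message: str
    # Substitute orders appropriate for the patient's clinical context.
    alternatives: list = field(default_factory=list)
    # Supporting record data, e.g., last creatinine, interacting drugs.
    context: dict = field(default_factory=dict)

    def is_actionable(self) -> bool:
        # Advice without a carry-out option rarely changes behavior, so
        # require at least one alternative before a "stop"-style alert.
        return bool(self.alternatives)
```

A hard-stop alert could then be withheld, or downgraded to an informational notice, whenever `is_actionable()` is false.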
3.5. Maintenance and re-use of intermediate variables
Electronic representations of a patient’s physiological and clinical state can be algorithmically derived from data stored in the record. These ‘‘intermediate states’’ (or state variables) are inferences
from primary data that can be conceptualized in clinically relevant
terms and further used in decision logic. Clinicians often find these
state variables to be more intuitive and convenient for reasoning
than single data points [80]. They may be monitored and automatically updated over time to reflect changes in laboratory results,
medications, problems, procedures, the passage of time and other data. For example, a clinical rule may trigger reminders for a Pap
screening test for female patients, with varying frequency depending on age. An entry of a hysterectomy procedure, however, may
create a ‘‘post-hysterectomy state’’ variable that suppresses further
reminders. More complex patient states (e.g., ‘‘patient is on anticoagulation therapy’’) can be created with sophisticated data-driven
derivations that trigger more extensive and more specific interventions [73].
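The Pap reminder example can be sketched as a derivation step that turns primary record data into named states consumed by trigger rules. The record layout, state names and triggering drugs below are illustrative assumptions:

```python
def derive_states(record: dict) -> set:
    """Derive intermediate state variables from primary EHR data."""
    states = set()
    procedures = {p.lower() for p in record.get("procedures", [])}
    medications = {m.lower() for m in record.get("medications", [])}
    if "hysterectomy" in procedures:
        states.add("post-hysterectomy")
    if {"warfarin", "heparin"} & medications:
        states.add("on-anticoagulation-therapy")
    return states

def pap_reminder_due(record: dict) -> bool:
    """Trigger a Pap screening reminder unless a derived state suppresses it."""
    if record.get("sex") != "F":
        return False
    return "post-hysterectomy" not in derive_states(record)
```

The decision logic reasons over the clinically meaningful state ("post-hysterectomy") rather than re-querying raw procedure entries in every rule.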
3.6. Periodic review of system inferences and human actions
Managing the specificity, sensitivity and other performance
characteristics of alerts is necessary to help prevent excessive
alerting and is often possible by analyzing detailed logs generated
by automatic system audits [49]. System logs should maintain
records of automated inferences and recommendations and of response actions taken by clinicians (e.g., overrides) that are timestamped, identifiable and with sufficient detail to determine which
data, actions and patient or system states triggered each intervention [19]. Automatic monitoring of overrides should notify administrators when a preset threshold for the number of alerts that are
not accepted is reached in any given time period. Such performance data can also provide critical feedback to knowledge engineers developing the clinical logic for the CDS interventions [81].
System performance should be periodically reviewed by analyzing the logs. Descriptive statistics can show the proportion and distribution of alert overrides across clinical services, locations,
individual clinicians and patients. Trends and systematic patterns
in overriding certain alerts may serve as a proxy measure of their
precision and practicality. For example, frequent and consistent
overrides may point to outdated or incorrect criteria in decision logic, irrelevancy for particular clinical context (e.g., in case of medication refills or health maintenance reminders for hospitalized
patients), erroneous data in the patient record (e.g., medication
and allergy lists that are not well maintained) or to inconvenient
design of alerts that are either too intrusive or are activated at a
point in the clinical workflow when the suggested response cannot
be made.
Records showing minimal use of specific order sets may indicate their poor fit to the intended task, inconvenient access (e.g.,
placed in an unexpected menu group or hierarchy level) or that clinicians simply may not know about their existence. Overuse of entry fields such as free-text comments for data intended as coded
entries may be a sign of workarounds employed to bypass difficult
screen controls or poorly designed workflows. Patterns of use that
significantly differ from expectations may be further investigated
by help desk logs and complaints as well as by informal interviews
or discussions with clinicians or by observing their work directly.
Systems should allow easy access to logs and provide tools for
their analysis or let analysts import log data in common formats
into third-party software.
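A log analysis of the kind described above can start from simple per-rule override rates. The log entry shape and the 90% review threshold below are assumptions a site would tune:

```python
from collections import Counter

OVERRIDE_THRESHOLD = 0.90  # assumed review trigger; each site would tune this

def override_report(log_entries):
    """Compute per-rule override rates from alert log entries.

    Each entry is a dict such as {"rule": "ddi-warfarin-nsaid", "action": "override"}.
    Returns rules whose override rate exceeds the threshold, a possible sign of
    outdated logic, irrelevant context or inconvenient alert design.
    """
    fired = Counter(e["rule"] for e in log_entries)
    overridden = Counter(e["rule"] for e in log_entries if e["action"] == "override")
    return {
        rule: overridden[rule] / fired[rule]
        for rule in fired
        if overridden[rule] / fired[rule] > OVERRIDE_THRESHOLD
    }
```

The flagged rules are the starting point for the manual review described in the text, not an automatic verdict on the rule's validity.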
3.7. Clinical data standards, interoperability, integrity and robust
architecture
Decision support algorithms may need to evaluate data from
several sources, such as the patient record, laboratory result repository, pharmacy and other ancillary or legacy systems. Syntactic
and semantic interoperability assures that services can retrieve remote data and that their meaning is preserved for decision support
rules to process them correctly. For example, amylase and lipase
tests are method-dependent but may be stored under the same
name on different systems; white blood count (WBC) designation
can contain results on one system and an order (intent) on another.
Further, when clinical data are aggregated from multiple sources
they may need to be ‘‘normalized’’ into a common representational
format (common or converted units of measurement, reference
range, etc.) and analogous data reconciled by identifying their
‘‘source of truth’’ [82].
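Normalization to a common representational format can be illustrated with a single analyte. The sketch below converts creatinine results to mg/dL (the conversion factor is standard chemistry, but the choice of canonical unit and the refusal to guess unknown units are design assumptions):

```python
# Conversion factors to the chosen canonical unit (mg/dL for creatinine).
CREATININE_TO_MG_DL = {
    "mg/dL": 1.0,
    "umol/L": 1 / 88.4,  # 1 mg/dL creatinine = 88.4 umol/L
}

def normalize_creatinine(value: float, unit: str) -> float:
    """Normalize a creatinine result from any source system to mg/dL."""
    try:
        return value * CREATININE_TO_MG_DL[unit]
    except KeyError:
        # Graceful failure: an unrecognized unit must never be silently
        # passed through to decision logic as if it were canonical.
        raise ValueError(f"Unrecognized unit {unit!r}; refusing to guess")
```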
Timely maintenance of patient allergy lists and trigger rules can
reduce the number of ‘‘false-positive’’ alerts that have low clinical
value over time. Institutions with several EHRs need to integrate
multiple instances of allergy lists stored on different networked
systems that may contain conflicting or unreconciled information
[83]. Shared lists of allergies, medications and problems for each
patient should serve as singular reliable sources on which the decision support rules operate across the network [84,85].
Pharmacy systems receiving electronic prescriptions should
also have their own automatic drug–drug and drug–allergy checking in addition to decision support built into the ordering system
[86]. The networked systems, however, should share the same clinical context so that pharmacists can better evaluate the appropriateness of each prescription. A summary screen containing key
patient information, for example, may include flags for all alerts
that physicians have overridden, including the rationale and relevant details (e.g., the patient is taking high-risk medications such
as warfarin or monoamine oxidase inhibitors) to minimize the
number of unnecessary callbacks from pharmacists to clinicians
for clarification.
The precision of the clinical logic rules that activate decision support interventions depends on the quality of information stored in the electronic record. These data, however, may be missing, incorrect, imprecise, dated or otherwise unreliable. The inference engine (i.e., the set of rules and algorithms that generate advice) needs to function safely in such conditions and prevent the system from giving inappropriate advice. Designers need to consider how systems detect and respond to erroneous or contradictory data, the effects of missing or incorrect information on the appropriateness of the advice, and how to facilitate convenient manual corrections of patient data. For example, a drug dosing calculator that requires the patient's weight to compute a dose adjustment may prompt for the weight to be entered if it is missing (e.g., once per session, to avoid over-alerting) or ask the clinician to acknowledge a possibly erroneous entry of ‘‘10 lb’’ [87,88]. However, the rules need to reflect specific conditions of use, such as patients in neonatal units, where a weight of 10 lb would not be considered improbable. Frequently overridden allergy alerts may include a link to remove the allergy from the list in the patient's record.
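The weight-handling behavior described above, prompting when the value is missing and judging plausibility by care context, might be sketched as follows; the plausible ranges are illustrative assumptions, not clinical reference values:

```python
# Assumed plausibility ranges in kg, keyed by care context.
PLAUSIBLE_WEIGHT_KG = {"neonatal": (0.4, 6.0), "adult": (30.0, 300.0)}

def check_weight(weight_kg, care_unit="adult"):
    """Return (ok, message) for a weight used in dose calculation.

    A missing weight produces a prompt rather than silent advice; an
    implausible value asks for acknowledgment instead of blocking, and
    plausibility depends on the care context (e.g., neonatal units).
    """
    if weight_kg is None:
        return False, "Weight missing: please enter patient weight"
    low, high = PLAUSIBLE_WEIGHT_KG.get(care_unit, PLAUSIBLE_WEIGHT_KG["adult"])
    if not (low <= weight_kg <= high):
        return False, f"Weight {weight_kg} kg unusual for {care_unit} unit: please confirm"
    return True, "ok"
```

Note that 4.5 kg (about 10 lb) passes in a neonatal context but is flagged for an adult, mirroring the condition-of-use point in the text.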
Systems should follow a pattern of ‘‘graceful degradation’’ and
continue to function at a reduced level of performance when components fail or data are not available [89]. Robustness of large networked systems may be reduced by unanticipated interaction of
safety devices built into each sub-system that were designed to
be effective in stand-alone installations but may introduce emergent vulnerability into complex networks without adequate testing. A significant near-miss incident, for example, was attributed
to the interaction of major and minor faults which were individually insufficient to have produced an incident; a drug-dispensing
unit at an emergency department became locked and unresponsive
after a failure in a connected HIS generated a continuous stream of
warning messages, causing, in turn, a significant delay in obtaining
medications for a critically ill patient [44]. An optimal degree of fault tolerance and an acceptance of idiosyncratic human interaction are also required to maintain safe operation over time.
3.8. Innovation and third-party developers
Institutions and practices depend on the agility of vendors in making cyclical updates, developing new additions and providing customized components. This dependency makes the process of adapting CDS to different clinical environments difficult and protracted. A recent decision by a prominent HIT company to allow certified third-party developers to write native applications for its EHR platform, thus eliminating the need for specialized interfaces to connect legacy systems, may signal an important trend toward opening previously proprietary, monolithic platforms to
innovation and customization [90]. Offering software development
kits and access to common services is essential for data integration
and allowing their consistent visual representation across all networked systems. The emergence of service-oriented architecture
and the separation of clinical content from the underlying software
code will also allow more customization and sharing of CDS rules
and clinical logic across different systems.
4. Decision support for electronic ordering and medication
prescribing
The most common form of CDS is medication prescribing support during electronic order entry, known widely as computerized physician order entry (CPOE). Clinicians may encounter an
interruptive alert (a modal dialog box) that requires a response
either by acknowledgment and justification of the order as written or by changing to the suggested course before interaction
with the system can resume. A less intrusive approach is to
embed decision support messages in the clinician’s visual field
on the screen for non-urgent notifications so that explicit
acknowledgment and work interruption is avoided. Appropriate
forms of intervention can increase the speed, accuracy and safety
of ordering by guiding clinicians through the interactive process
toward selecting the correct medication or a test in the correct
manner [20].
Order sets (groups of associated orders for a specific clinical
purpose such as an admission) are also considered a form of decision support because they function both as checklists for the completeness of a clinical intervention and as reminders to follow up
and monitor the effects of interventions (e.g., corollary orders for
appropriate laboratory tests), as well as pre-structured guidelines
for dosing regimens.
Recommended design approaches for decision support used in
clinical ordering are described in the following sections and summarized in Table 4.
4.1. Consistency of terms and text formats
Accurate representation of rich clinical discourse in decision
support requires well-defined and consistently applied terminology for observations, assessments and medical concepts. Terms
for adverse reactions, problems, procedures and other activities
or items as well as conceptual categories describing groups of related data should be standardized across all networked systems
and combined with spell-checking and automatic suggestions to
make their entry and lookup faster [91].
Although standard terminology promotes semantic clarity, the
speed of finding specific words or data on the screen by visual
scanning is increased by their consistent appearance. Lists of available medication orders, for example, are often extensive and displayed in long, scrolling tables or drop-down menus that make
the task of finding a specific order strenuous, cognitively demanding and error-prone. Their quick identification can be reinforced by
visual cues in the text formats of drug names that belong to defined categories. For example, generic drug names can be printed
in lowercase while brand name equivalents may have the first letter capitalized, giving each a distinctive visual form [20]. An entire
group of names can then be skipped without reading during lookups. However, printing text in all caps in structured lists, tables or
in continuous sentences and paragraphs should be avoided as all
words become visually more similar and therefore harder to differentiate and read [92].
Items in lists, especially medications, need to be clearly distinguishable from each other to minimize the misreading of lookalike and sound-alike drug names. The Joint Commission has published a register of drugs with similar names that should not be
adjacent in pick lists [93]. They also recommend not using certain
abbreviations, such as IU (International Unit) and IV (intravenous)
that can be confused with one another [94], either in longhand
writing or in screen fonts that are small in size or uncommon. Another useful practice is writing a part of a drug’s name in upper
case letters (‘‘Tall man’’ lettering) to make similar items more visually distinct [95,96]. For example, ‘‘prednisone’’ and ‘‘prednisolone’’
may be written as ‘‘predniSONE’’ and ‘‘prednisoLONE,’’ respectively. A list of drug name pairs with suggested Tall man letters
is issued and maintained by the Institute for Safe Medication
Practices (ISMP) [97]. Research evidence of their effectiveness,
however, is scant.
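Tall man lettering is straightforward to apply at display time from a lookup table. A minimal sketch seeded with a few pairs from the ISMP list (a real implementation would load the maintained list rather than hard-code entries):

```python
# A few ISMP-style pairs; a production system should load the current
# ISMP list instead of hard-coding entries.
TALL_MAN = {
    "prednisone": "predniSONE",
    "prednisolone": "prednisoLONE",
    "hydroxyzine": "hydrOXYzine",
    "hydralazine": "hydrALAZINE",
}

def display_name(drug: str) -> str:
    """Render a drug name with Tall man lettering when a mapping exists."""
    return TALL_MAN.get(drug.lower(), drug.lower())
```

Keeping the mapping separate from the code means the drug catalog itself stores plain names and the emphasis is applied consistently at every display site.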
Legibility of numbers and visibility of decimal points are important for preventing errors of commission. For example, some
screen fonts (especially serif) have similar appearance for the digits
1 and 7 that can easily be confused in time-constrained work conditions or on poorly adjusted monitors [65]. The use of sans serif
font families (Arial, Helvetica and others) is recommended, at a minimum size of 10 or 11 points (although exceptions are possible with caution).
Table 4
Summary of design recommendations for medication ordering.

Design recommendation | Example | References
Consistent terminology | Adverse reactions, problems, procedures, medical concepts, assessments, drug and drug class names | [91]
Format text to visually associate drug categories | Lowercase for generic drugs, capitalized first letter for brands; avoid lists of drug names in all caps | [20]
Emphasize differences in similar drug names | Avoid adjacent look- or sound-alike names in lists and excessive abbreviation; use Tall man lettering | [93–96]
Clearly legible font | Use sans serif fonts, size 10 or 11 if possible | [65,67]
Unambiguous units | Use only standard abbreviations placed closely adjacent to values; include rates for infusions | [99]
Manageable pick lists | Break long lists into sections separated by space or headers for fast lookup; avoid excessive variation | [65,67]
Multiple entry options | Limit orders by start and stop times or by duration | [100]
Concise language | Place important words first, details later in the sentence; display ten words or less and provide a link to the full text | [73,102]
Representational formats | Show trends in graphs rather than tables for contextual data; facilitate temporal associations | [14,25,26,49,103,104]
Visually distinct screens for confusable items | Layout and color of IV fluid orders should differ from medicated drips; visually distinguish ‘‘last’’ and ‘‘dated’’ lab results | [49]
Custom order sets | Sets by procedure, clinical task or problem should be customizable by institutions and clinicians | [105,106]
Clinical context | Show relevant patient information on ordering screens; use contextual information to refine rules | [11,20,49,50,107–110]
Active order forms | Content, layout and instructions can be automatically modified by patient-specific EHR data | [105,106,111]
Log analysis | Periodically analyze overused and underused orders, consistent alert overrides, integrity of entered data | [20]
Some systems include dose variants of one medication in lookup lists as a shortcut that allows ordering with a single click,
without the need to further select a dose, frequency and form in
a subsequent step. However, if the lists are not well maintained
and allowed to grow without limits, this approach may inadvertently increase search time by making visual identification of the
target dose more cognitively demanding and the overall ordering
task longer. For example, there may be a dozen variations on
Amoxicillin orders, such as ‘‘Amoxicillin/CLAV. SUSP 125 MG/
31.25 MG (5 ML).’’ A heuristic recommendation for the display of
search results as lists that have to be further visually scanned for
a target item is to limit them to about 8 to 12 by the time five characters are typed into the search box [67,98]. Restricting drug name
entries to only the most frequent dosing variations is essential for
fast visual lookups.
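The 8-to-12-item heuristic can double as a maintenance signal: if a five-character query still matches more results than fit a scannable list, the catalog likely carries excess dose variants. A sketch under that assumption (the function and catalog shape are illustrative):

```python
MAX_VISIBLE_RESULTS = 12  # upper end of the 8-12 heuristic

def filter_orders(catalog, query):
    """Prefix-match drug orders; flag when the list is too long to scan.

    Returns (shown, truncated). A well-curated catalog should be down
    to a scannable list by the time five characters are typed; a True
    flag helps maintainers find entries to prune.
    """
    query = query.lower()
    matches = [name for name in catalog if name.lower().startswith(query)]
    if len(query) >= 5 and len(matches) > MAX_VISIBLE_RESULTS:
        return matches[:MAX_VISIBLE_RESULTS], True
    return matches, False
```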
Units should always be unambiguous, even for automatically
calculated measures, and adjacent to the respective numerical value rather than in a separate table column. For example, entries
that contain multiple descriptive values, such as concentrated electrolytes orders (potassium chloride) always need to include the
rate of infusion in close proximity to volume and concentration
[99].
Complex ordering can be made cognitively easier by allowing
the entry of information in convenient and usual formats that are
automatically parsed into required coded segments. For example,
intravenous drips, medications and time-limited orders for which
duration needs to be specified should allow entering time in hours
or days in addition to start and stop times to prevent errors in time
calculation [100]. The respective date and time entries are then calculated automatically and entered in the appropriate fields.
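Automatic calculation of stop times from a duration entry can be sketched as a small parser; the "48h"/"3d" shorthand is an assumed entry convention, not a format prescribed by the cited work:

```python
from datetime import datetime, timedelta
import re

def stop_time_from_duration(start: datetime, duration: str) -> datetime:
    """Compute an order's stop time from shorthand like '48h' or '3d'.

    Lets clinicians enter a duration in hours or days instead of
    computing stop times by hand, which the text identifies as a
    source of time-calculation errors.
    """
    match = re.fullmatch(r"\s*(\d+)\s*([hd])\s*", duration.lower())
    if not match:
        raise ValueError(f"Unrecognized duration {duration!r}")
    amount, unit = int(match.group(1)), match.group(2)
    return start + (timedelta(hours=amount) if unit == "h" else timedelta(days=amount))
```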
4.2. Concise and unambiguous language
Brief and clear recommendations are the most effective [11].
Language used in messages needs to be succinct and use instantly
recognizable terms that can be accurately interpreted in a single
reading [67,98]. For example, if the value of a weight-based heparin dose calculated specifically for a given patient is inserted in the
middle of a boilerplate explanatory paragraph, clinicians may dismiss the entire dialog box without reading its content. They may
thus miss a patient-specific recommendation that at first glance looks like a generic reminder [101]. The most salient part
of a message (e.g., dose, patient weight) should be shown before
any other supporting information is given.
Messages within alerts should be generally shorter than ten
words and accompanied by an immediately actionable item while
a guideline needs to fit on a single screen to be effective [73]. For
example, a recommendation about aspirin use may be worded as
‘‘use aspirin in patients status-post myocardial infarction unless
otherwise contraindicated’’ [11]. A link to further evidence may
also be included as clinicians often contend that more information
should be accessible [102].
4.3. Appropriate representational formats of data and distinctive entry
screens
The representational format of clinical data (e.g., HbA1c values
over several months as a table or a trend graph) is important for
supporting quick perceptual judgments and accurate decisions
[103]. Anticoagulation orders, for example, should display clotting
times (INR) as trends, not as isolated numbers, with medication
dosage displayed in a way that facilitates temporal associations
and trends [14,104].
Screens, modules and dialogs for ordering easily confused interventions should be clearly visually distinguishable by layout, color
or shape and have unambiguous labels. For example, continuous IV
fluid infusions limited by time and medicated IV drips limited by
volume, especially those that include different concentrations,
need to be clearly distinct so the ordering clinician is always aware
of which type of medication delivery is being ordered. Similarly,
when displaying time-sensitive inpatient laboratory results, visual
indicators such as color, font or conspicuous spatial arrangement
should indicate the difference between the most recent available
results and those that may already be ‘‘dated’’ (e.g., the values
may have already changed in response to drug therapy) [49].
4.4. Organization of orders by problem and clinical goal
Individual orders and order sets should be aggregated into
groups and hierarchies according to criteria such as clinical departments, organ systems, clinical diagnosis, clinical goal, condition or
a procedure [105]. They can also be nested in menus corresponding
to clinical problems and common scenarios, such as the admission
of a patient with upper gastrointestinal hemorrhage to the intensive care unit [106]. Individual orders within order sets can also
have some fields pre-populated with common standard doses
and frequencies or provide calculated weight-based doses. This
organization reflects cognitive models corresponding to clinical
states, tasks or problems. Instructions on modifying a set for patients with specific diagnoses or in a particular service should be
accessible via a link.
System designers should provide convenient tools for editing
and structuring selection menus for orders and sets so that ordering modules can be customized to better support common clinical
tasks without the need for programming or relying on vendor services. Organizations should create core sets of orders and maintain
them via periodic reviews by a steering committee while smaller
practices can develop over time a collection of most frequently
used sets. Some practitioners advocate allowing individual clinicians to save and use their own modified versions of order sets.
Such customization was initially considered valuable for promoting clinician buy-in but has more recently been recognized as a potential source
of uncontrolled modifications that can defeat the goal of care standardization and make systematic updates more challenging.
4.5. Clinical context
Ordering screens should display appropriate contextual information from the electronic record such as laboratory values required for dosing adjustments so that clinicians do not have to
navigate away from the ordering screen to see or recall from memory key data. For example, relevant laboratory results may be
shown on aminoglycosides, warfarin and other medication orders
dosed with respect to serum drug levels, physiological markers,
when renal or liver function is affected and when specific values
(e.g., BMI, or absolute neutrophil count) need to be considered [20].
Contextual information should account for the relationships
and correlates between clinically dependent data. For example,
an intervention may suggest lowering a drug dose when kidney
function worsens and prompt for corollary lab or other orders
[11], show allergies, renal function, microbiology results, sensitivities and the unit in which the patient is located when ordering
antibiotics and suggest the best and least expensive brand or a
generic and its dose [107].
Although identical or same-class medications usually should
not be prescribed for a patient, rules that check for multiple orders
must accommodate cases in which such orders are appropriate
while still ensuring safety [108], such as when more than one analgesic is ordered on an as-needed basis or when a patient may require two different antibiotics or more than one type of
anticoagulant. Complex rules may also be needed to inform clinicians ordering potassium chloride (drip or bolus) when the patient
already has another active order for potassium and when there has
not been a serum potassium value recorded in the past 12 hours, or
the most recent value is greater than 4.0 [49,50].
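A trigger rule of this kind might be sketched as a boolean function over the patient's active orders and most recent serum potassium. The thresholds mirror the rule described above; the order-name matching and function signature are illustrative assumptions, not the cited implementation.

```python
from datetime import datetime, timedelta

def potassium_alert_needed(active_orders, last_k_value, last_k_time, now):
    """Illustrative trigger rule for a new potassium order: fire when the
    patient already has an active potassium order, when no serum potassium
    was recorded in the past 12 hours, or when the most recent value is
    greater than 4.0 mEq/L."""
    duplicate = any("potassium" in o.lower() or "kcl" in o.lower()
                    for o in active_orders)
    stale = last_k_time is None or (now - last_k_time) > timedelta(hours=12)
    high = last_k_value is not None and last_k_value > 4.0
    return duplicate or stale or high
```

In practice such a rule would read from the EHR's active order list and laboratory results rather than take plain arguments, and the duplicate check would use coded drug identifiers, not string matching.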
Increasing the variety of information sources available to
decision rule engines (medical and laboratory claims, test results,
feedback from physicians, and self-reported data from patients who
are enrolled in disease management or complete health risk
assessments) is likely to greatly increase the specificity and
credibility of clinical alerts in the future and improve the response
of clinicians to potentially risky medications [109,110].
4.6. Interventions embedded in forms
Decision support can be structurally embedded into order forms
that either adjust the content according to patient-specific criteria
or prompt for actions and additional information entry. For example, forms may substitute or add gender-specific fields, display or
prompt for renal function or pregnancy status, perform automatic
clinical calculations, enforce policies via field restrictions and audit
for permitted range of entry values or incompatible medication
route selections. Clinicians can be guided through complex orders
with dialogs and forms with selection lists and suggested values
that will populate appropriate fields in the resulting order. Order
dialog forms help create accurate orders and are complementary
to decision support tools such as automated order checks that
are activated only after an order is completed [106]. A key issue
is avoiding errors where such automaticity leads to inattention
and complacency (i.e., failure to review an automatically generated
order).
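The field restrictions and entry-value audits described above might be sketched as a pre-signature check; the drug, its dose limits and permitted routes below are hypothetical examples for illustration, not clinical guidance.

```python
# Illustrative field-level audit for an order form. The drug name,
# dose limits and allowed routes are hypothetical values.
DOSE_LIMITS = {"gentamicin": (1.0, 7.0)}          # mg/kg/day, illustrative
ALLOWED_ROUTES = {"gentamicin": {"IV", "IM"}}

def audit_order(drug, dose_mg_per_kg, route):
    """Return a list of problems to surface before the order is signed."""
    problems = []
    low, high = DOSE_LIMITS.get(drug, (None, None))
    if low is not None and not (low <= dose_mg_per_kg <= high):
        problems.append(
            f"{drug}: dose {dose_mg_per_kg} outside {low}-{high} mg/kg/day")
    if route not in ALLOWED_ROUTES.get(drug, {route}):
        problems.append(f"{drug}: route {route} not permitted")
    return problems
```

Checks of this kind complement post-completion order checks because they catch problems while the clinician is still in the form, where a correction is cheapest.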
4.7. Complex order forms and calculators
Some decision support interventions may require more attention and interaction as the complexity of orders increases. Contextual, stepwise instructions and prompts with questions requiring a
response may be presented to clinicians to guide them through the
ordering process of multiple and interdependent medications, procedures and associated tests. For example, a list of indications for
blood test orders may be displayed, followed by a menu of relevant
tests for each indication.
The most complex forms of drug dosing decision support (e.g.,
dosing calculators) should integrate patient-specific laboratory results, active orders, weight, and allergies with complex guidelines
or protocols and present calculated values with aggregated information derived from intermediate variables for decision making
[105]. Wherever possible, the system should perform drug-dosing
calculations on behalf of the clinician based upon accepted algorithms or nomograms to minimize computational errors and to
speed up the calculation step [111]. The algorithm used by the system should be made accessible on demand by a link to promote
trust in the reasoning process.
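As an example of a calculation a system can perform on the clinician's behalf, a minimal sketch of the widely used Cockcroft-Gault estimate of creatinine clearance, a common input to renal dose-adjustment rules (the surrounding dosing logic and unit handling are not shown):

```python
def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl,
                         is_female):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault
    equation: ((140 - age) x weight) / (72 x serum creatinine),
    multiplied by 0.85 for female patients."""
    crcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if is_female else crcl
```

Exposing the formula itself, as this docstring does, is one way to satisfy the recommendation that the algorithm be accessible on demand.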
5. Design specifics for alerts and reminders
Alerts and reminders are triggered by clinical rule engines that
monitor data stored in the EHR, user actions and the passage of
time. Research evidence suggests that clinicians may be exposed
to so many electronic messages that, rather than providing assistance, alerts may paradoxically add to cognitive effort and gradually lead to their near-automatic dismissal (override) without
reading the message [74,102,112]. This learned behavior is largely
the result of poor alert specificity and a low perceived signal-to-noise ratio that limits the ability to differentiate true positive stimuli from false positives (noise) according to the Signal Detection
Theory [113,114]. The behavioral change is sometimes referred
to as ‘‘alert fatigue’’ and also applies to auditory signals coming
from inpatient monitoring devices [115].
There are several design approaches to limit the number of
alerts perceived as having low utility. Rules that trigger specific
alerts can be filtered to suppress low-severity drug–drug interactions, or be prioritized and made more specific by combining patient and provider-specific data [116]. Alerts can also be tiered
into two or three severity levels and presented in more and less
intrusive forms according to importance [13]. Another way of
reducing the total number of messages presented to a single physician is redirecting them instead to pharmacists, nurses or other
staff when appropriate and desirable [20].
Alerts should be sensitive to clinical context by incorporating
more patient-specific data into trigger rules, should provide a clear,
unambiguous information display and should have their intrusiveness
carefully calibrated to be proportional to their level of importance.
A summary of these
design recommendations for alerts and reminders is in Table 5.
5.1. Interruptive vs. non-intrusive alerts
Alerts (active CDS) are typically designed as modal dialog boxes
(requiring an action to dismiss) and have one or more buttons and
advisory content. They are inherently and purposely disruptive as
their explicit acknowledgment by a mouse click or a keystroke is
necessary to continue. The interruption of an ongoing activity is
appropriate to get the attention of clinicians when warning about
potentially adverse consequences, such as the probability of severe
drug or allergy interaction with the medication that is being prescribed. However, this option should be reserved only for high-severity warnings and used judiciously [20]. One of the primary
objectives of decision support is to unobtrusively but effectively
remind clinicians of things they have truly overlooked and support
corrections [105]. Two or three severity levels are generally sufficient to assign advisories into appropriate categories of visual saliency and intrusiveness and improve the overall rate of compliance
[13]. They can be designated as ‘‘high,’’ ‘‘moderate’’ and ‘‘low’’ or
simply as ‘‘critical’’ and ‘‘significant’’ and appropriately color-coded.
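A tiering scheme of this kind might be encoded as a simple mapping from severity level to presentation mode; the tier names, colors and display styles below are illustrative assumptions, not prescriptions from the cited studies.

```python
# Illustrative mapping from alert severity tier to presentation mode.
# Tier names, colors and display styles are assumptions for the sketch.
PRESENTATION = {
    "high":     {"modal": True,  "color": "red"},     # interruptive dialog
    "moderate": {"modal": False, "color": "orange"},  # non-blocking banner
    "low":      {"modal": False, "color": "yellow"},  # passive sidebar note
}

def present(severity):
    """Return how an alert of the given severity tier should be shown."""
    return PRESENTATION[severity]
```

Centralizing the mapping also makes it easy for a review committee to retune intrusiveness without touching individual rules.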
Table 5
Summary of design recommendations for alerts and reminders.
Design recommendation Example References
Tiered severity level Synchronous, interruptive alerts should be reserved only for high severity warnings (of 2–3 levels) [13,20]
Concise text, justification Content should be limited to 1–2 lines, with a justification separated by white space [11,64,79]
Clear response options Buttons (order or cancel) with simple labels, action links to additional options (alternatives) [64,98]
Concurrent alert priority Multiple alerts for a single order should be prioritized, deemphasizing low-severity alerts [58,73,117]
Unobtrusive reminders May be designed as flags in names lists; prioritized and color-coded messages in reserved screen areas [17,73,79]
Meaningful color sets 5–6 colors to maintain emphasis effect, matched across all systems. Use color shades for gradients [61,66,67,119]
Text luminosity Dark text on light background and high contrast ratio aid reading – match appropriate color pairs [67,121]
Filtering rules Increase specificity by evaluating more EHR data in trigger rules and suppress ‘‘false positives’’ [74,79,105,123,125]
Curate, revise trigger rules Periodic reviews of frequently overridden alerts by a committee that includes pharmacists [104,126–129]
Prompt for EHR edits Include a link to edit allergy and medication lists in alerts that are frequently overridden [6,17,84]
Interruptive dialogs should have simple and clearly defined response options, such as [Order] and [Cancel] buttons and a very
concise justification. Multi-word button labels and verbose messages make the selection of the intended response by perceptual
judgment difficult and generally do not add clarity [11,64,79,98].
The default action (i.e., completed by pressing the Return key)
should be the desired action [7]. In some instances, when an alternative medication or test is offered, a link (or a button) may be
added that closes the dialog and populates appropriate fields in
an open order form with the suggested values. Reasons for override
may also be prompted routinely so that knowledge engineers trying to determine why some alerts are consistently ignored can review override reasons and analyze them in conjunction with
activity logs [20].
Multiple simultaneous alerts related to the same order (e.g.,
drug–allergy and therapeutic duplication and a dose alert) may
have a diminished effect if perceived as excessive and distractive.
The usefulness of concurrent alerts needs to be evaluated; those
that do not clearly contribute to improving the prescribing
process should be suppressed or deemphasized [117]. Those that
remain need to be prioritized by severity so that, for example, a
low-importance allergy interaction does not conceal a therapy
duplication warning [58,73].
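The prioritization of concurrent alerts described above can be sketched as a sort by severity rank; the ranks and the cutoff for deemphasis are illustrative assumptions.

```python
# Sketch: order concurrent alerts for a single order by severity so a
# low-importance warning cannot conceal a more serious one. Severity
# ranks and the deemphasis cutoff are illustrative assumptions.
RANK = {"high": 0, "moderate": 1, "low": 2}

def prioritize(alerts, max_emphasized=2):
    """alerts: list of (severity, message) tuples. Returns the most
    severe alerts first; anything past max_emphasized would be shown
    in a deemphasized form (omitted here for brevity)."""
    ordered = sorted(alerts, key=lambda a: RANK[a[0]])
    return ordered[:max_emphasized]
```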
Advisory messages of lower importance should be displayed in
a more subtle format to avoid excessive alerting. For example,
interactions between systemic and topical drugs, alerts for drug allergy in case of medication intolerance, and alerts for events to
which an active response is not urgent or possible should be displayed in non-intrusive, asynchronous presentation formats [79].
These can be text messages in sidebars that can be read without
explicit acknowledgment or ignored for the moment [17]. Other
examples of embedded CDS include guidelines for vaccine administration displayed alongside an order set menu for pneumonia and
flu vaccines, listing the most recent results of serum electrolyte
tests displayed with orders for intravenous fluid therapy medications, or dosing weight and pharmacy recommended dosing guidelines for weight-dependent medications [105].
5.2. Display and organization of reminders
Reminders to take certain actions at the present time or in the
near future (e.g., schedule a mammogram) can take several forms.
One approach is to simply add flags to patient names in lists. Short
descriptive messages may also be placed in designated locations on
the screen when an individual patient record is opened. Messages
that prompt for routine actions (e.g., periodic lab tests for patients
with chronic conditions) should be offloaded from physician workflows entirely and redirected to support staff. Reminders can be
prioritized by sort order or by color-coding. For example, red with
progressively more saturated shades may indicate importance or
priority and a color set such as red–orange–yellow, employed consistently throughout the application, may indicate levels of
importance.
The use of color should be consistent and controlled to convey
meaningful concepts such as importance level, hierarchy, category
or to differentiate data with different attributes (e.g., real-time and
historical [65,66,118]). The entire palette should ideally be
matched across all networked applications and not exceed five or
six colors in total, as competing color combinations distract from the saliency (or pop-out effect) of screen items that are intended to draw
attention [37,66,67]. Sufficient contrast (luminosity) needs to be
maintained between the foreground (text) and background colors
which can often be accomplished by using non-saturated hues
for any larger areas of the screen [61,67,119]. Usual and widely accepted cultural and professional conventions should be followed in
the selection of colors to emphasize specific meanings. For
example, red for high priority, danger, stop, error and abnormal
conditions (e.g., laboratory values outside of the normal range),
orange for medium priority, yellow for low priority or minimal
caution, green for completed items or safe conditions, blue for
informational messages, gray and half-tone saturation for unavailable selections or de-emphasized text and black on white
background for primary, standard information. Recognizing that
7–11% of male health care practitioners have some degree of color
vision deficiency, particularly in distinguishing reds and greens
[120], designers should consider avoiding distinctions that use
similar hues of red vs. green [121].
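The foreground-background luminosity contrast recommended above can be checked computationally. The sketch below uses the WCAG 2.0 relative-luminance and contrast-ratio formulas; note that WCAG's 4.5:1 threshold for body text is that standard's convention, not a figure from the sources cited here.

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance for an sRGB color (0-255 channels)."""
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) to 21:1 (black on
    white); 4.5:1 is WCAG's usual minimum for body text."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

A check like this can be run over a proposed palette at design time to flag text-background pairs that fall below the chosen threshold.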
5.3. Filtering of frequent interruptive alerts
Filtering alerts by increasing the specificity of trigger rules may
help to decrease the number of interruptive messages with little
evidentiary basis or clinical relevance or those that are redundant
[74,122]. To date, drug–drug interactions have represented a major
challenge for informatics and pharmacology, both to define evidence-based standardized alerts and to deliver significance-prioritized warnings effectively. Drug interaction alerts should be
primarily patient-specific by taking into account age, gender, body
weight, allergies, mitigating circumstances, drug serum levels, renal function and comorbidity [123]. For example, standard alerts
related to abnormal renal function should be suppressed for patients on dialysis, although it is a nontrivial task to ensure that relevant EHR data are complete and updated. Time intervals between
interacting drugs should also be considered as earlier drugs might
have been completely metabolized. If a potential interaction did
not result in problems for a specific patient in the past, physicians
should be able to suppress the alert for subsequent dose adjustments to avoid redundant messages [79].
Systems should be able to, where appropriate, suppress alerts at
the time of renewal of previously tolerated medication combinations for the same patient [17,74]. However, suppressing a drug–
drug interaction alert after it has been overridden only once per
patient was not favored by prescribers and even less by pharmacists according to one study [117].
There is an ongoing debate whether individual clinicians should
be allowed to turn off specific alerts they consider uninformative
or whether entire classes of alerts could be safely suppressed for
particular specialists, who may not need the same level of support
as generalists. Evidence shows that rather than knowledge deficit,
the cause of most errors is forgetfulness, oversight, and the distracting influence of interruptions, all of which are as prevalent
in the work of most specialists as they are for generalists and residents [105,124]. If specificity is high and alerts are only presented
in potentially unsafe situations, specialists who already know them
do not seem to object to receiving these alerts [125].
5.4. Revision of alert trigger rules
Maintaining high specificity of alerts is dependent on the quality of the knowledge base for triggering rules and the completeness
and accuracy of the electronic health records on which the rules operate. Error-producing conditions may exist in commercially available and institutionally developed databases, and customization
or periodic reviews are necessary [126]. A committee of physicians
that includes domain experts and pharmacists for drug-related
alerts should periodically revise rules with a focus on frequently
overridden alerts and suggest safe and effective ways for either
suppressing alerts of low value or changing their presentation format [127]. Such reviews may take into account the fact that patients with long-term use of certain medications have
demonstrated their capacity to tolerate them and consider the suppression of alerts for refills [128].
Combining pharmacology and laboratory data into decision
rules provides a powerful tool to guide initial drug choice (i.e.,
drugs where there are laboratory-based indications and contraindications), drug dosing (renal or hepatic, blood level–guided
adjustments), laboratory monitoring (laboratory signals of toxicity,
baseline and ongoing monitoring), laboratory result interpretation
(drug interfering with test) and for broader quality improvement
(surveillance for unrecognized toxicity, monitoring clinician response delays) [104]. For example, when laboratory tests signal
drug toxicity, alerts can be used to limit the extent of an adverse
drug event and enhance the timeliness of interventions to minimize its harm [104,129].
5.5. Prompts for patient record maintenance
Clinicians keep patient records up to date by periodically
reviewing problem lists, medication and allergy lists and other
important clinical information. If a record does not accurately represent what the clinician knows to be true about the patient, many
alerts will appear irrelevant and be overridden [74]. Corrections can be facilitated by prompting clinicians to update the record when a
specific alert is being consistently overridden. For example, when
a physician overrides a drug–allergy alert and selects from the
‘‘reason for override’’ menu ‘‘Patient does not have this allergy/Tolerates’’ or ‘‘Patient taking already,’’ a link to edit the allergy list
should be shown in the alert [17]. When reasons are not required
to be entered, an algorithm may decide after repeated overrides to
prompt the clinician and provide a link to open the record for
editing.
Conversely, entering a newly captured allergy should also be
prompted if it can be inferred that an allergic reaction may have
occurred, for example in certain instances when a new order for
diphenhydramine has been written and a drug is discontinued
[84]. Allergy documentation should capture UMLS-coded allergen
and reaction, and discriminate between true allergies and sensitivities or intolerances.
Automatic prompts and requests for entering additional data
need to be used sparingly, however. Clinicians generally resent systems that require extensive data entry at times that conflict with
workflow priorities [6], especially when the data entries may be
perceived as redundant [19].
6. Limitations
This review and the resulting recommendations address mostly
medication prescribing, the most common type of clinical work for
which decision support is available. The design of alerts and
reminders is discussed in detail along with recommendations for
other interventions related to prescribing such as order sets, dose
calculators and also pharmacy systems that may be linked to prescribing systems at clinical sites. Our description of preferred
usability characteristics and interface design extends to EHRs because CDS for prescribing is often functionally embedded within
those systems and the visual and functional conventions used for
both should be consistent. However, the list of design recommendations is not exhaustive and may not cover all aspects of electronic records. Likewise, other HIT technologies that use CDS
such as diagnostic expert systems, radiology ordering, emergency
and critical care applications and others were out of scope for this
review. Available literature is most often focused on alerts; other
types of decision support intervention may therefore be underrepresented in design recommendations. Clinicians also sometimes
disagree on what is a preferred attribute of CDS. In such cases,
we report the opinions and refrain from making a recommendation. The literature review process itself may have missed some
studies or technical reports that are not indexed in the databases
we searched. Our survey of information contained in conference
proceedings and materials from workshops was limited to those
referenced by healthcare sources online such as the sites maintained by AHRQ, ONC and NIST.
7. Conclusion
Decision support systems are potentially powerful and effective
information tools, particularly when providing well-founded,
unambiguous and actionable advice tightly integrated into clinical
workflows [130]. Their performance level and ensuing tangible
benefit to clinicians, however, can be significantly reduced by poor
or outmoded design, incorrect implementation and inadequate
data maintenance. Rather than increasing the quality and safety
of care, inadequately designed systems may become disruptive
and give only marginally relevant guidance that is largely ignored
or, at worst, irritates and impedes the flow of cognitive and clinical
work [74]. Usability is currently one of the primary concerns of clinicians and a key factor in their decisions about adopting HIT for
routine use.
Designing information systems for a domain as complex as
health care is challenging and few guidelines exist for developers
to follow common, effective and safe practices. Several decades
of development and use of IT in other safety-critical industries
show, however, that significant advances can be achieved by a systematic focus on human factors and user-centered design.
Published findings reviewed in this report suggest, for example,
that consistent design concepts and appropriate forms of visual
representation of clinical data, the use of controlled terminology
and the delivery of advice at the time and place of decision making
collectively lower the necessary cognitive effort for effective interaction and shorten the time needed to become proficient in their use. Evidence-based matching of interventions to clinical goals and their
tight integration with workflows increase the likelihood that the
provided advice will be followed. High specificity and relevance
of alerts and their parsimonious use have an outsize effect on the
perception of CDS as a valuable and trustworthy reference source
that clinicians may develop over time. Their prescribing behavior
is also more likely to be altered by advice formulated as suggestions highlighting potential problems rather than as rigid commands. Excessive alerting – the most frequently expressed
reason for dissatisfaction with current forms of decision support
– can be mitigated by periodic review of trigger rules, analysis of
performance logs and good maintenance of health records such
as allergy, problem and medication lists on which decision engines
partly rely.
Further studies that directly describe and analyze the relationship between more or less optimal interface design characteristics
and criteria such as the rate and severity of errors or the effect on
prescribing behavior are needed to understand what forms of CDS
are the most effective in specific environments and what processes
designers can follow to appropriately match interventions to clinical tasks. Future trends will move toward collecting empirical data
on human performance that could guide the development of adaptive systems that anticipate errors, respond to them, or substitute
less serious errors that allow clinicians to intervene before adverse
events occur [131].
The competitive advantage of highly usable and reliable HIT
systems has long been recognized by many in the industry. However, several large vendors in a recent AHRQ survey acknowledged
that practices such as formal usability testing, user-centered design process and routine involvement of cognitive and usability
engineering experts are not common [31]. Institutional and private
purchasers also place high value on usability characteristics but
often are not sufficiently informed to recognize even substantial
design flaws until they become apparent well into the implementation process [34].
Academic and commercial HIT developers need to adapt their
current design practices to include formal user-centered, iterative
design processes that share common standards and recommendations. Their innovation efforts should reflect HCI research methods
rooted in ethnography and cognitive science that rely on models
derived from evidence gathered in authentic clinical environments
and from insights gained by observing clinicians doing their work
rather than on purely laboratory-based design ideas divorced from
daily practice. Suggestions outlined in this paper may help clarify
the goals of optimal CDS design in particular but larger national
initiatives like the nascent NIST guidelines for the systematic application of human factors in the system development process [36,37,132]
are needed to advance the HIT design field in general.
Employing appropriate design strategies and adhering closely
to principles of optimal human–computer interaction is crucial
for creating systems that are responsive to the dynamic character
of clinical work. It is therefore essential to understand the cognitive mechanisms of errors in order to develop effective HIT interventions [133]. Advanced design, careful implementation and
continuous monitoring of performance are key requirements for
delivering meaningful decision support that meets the grand challenges of contemporary, high-quality healthcare [73].
Acknowledgments
This work was funded by the U.S. Office of the National Coordinator (ONC) for Health Information Technology, through contract
HHSP23320095649WC, task order HHSP23337009T. We also want
to thank Dr. Jonathan Teich for his insightful comments on an earlier draft.
References
[1] Blumenthal D, Glaser J. Information technology comes to medicine. N Engl J
Med 2007;356(24):2527–34.
[2] Osheroff JA, Teich JM, Middleton B, Steen EB, Wright A, Detmer DE. A roadmap
for national action on clinical decision support. J Am Med Inform Assoc
2007;14(2):141–5.
[3] Schiff GD, Bates DW. Can electronic clinical documentation help prevent
diagnostic errors? N Engl J Med 2010;362(12):1066–9.
[4] Goldzweig CL, Towfigh A, Maglione M, Shekelle PG. Costs and benefits of
health information technology: new trends from the literature. Health Aff
(Millwood) 2009;28(2):w282–93.
[5] Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, Roth E, et al. Systematic
review: impact of health information technology on quality, efficiency, and
costs of medical care. Ann Intern Med 2006;144(10):742–52.
[6] Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice
using clinical decision support systems: a systematic review of trials to
identify features critical to success. Br Med J 2005;330(7494):765.
[7] Bates DW, Gawande AA. Improving safety with information technology. N
Engl J Med 2003;348(25):2526–34.
[8] Hunt D, Haynes R, Hanna S, Smith K. Effects of computer-based clinical
decision support systems on physician performance and patient outcomes: a
systematic review. J Am Med Assoc 1998;280:1339–46.
[9] Blumenthal D. Launching HITECH. N Engl J Med 2010;362(5):382–5.
[10] Office of the Secretary. Health information technology: initial set of
standards, implementation specifications, and certification criteria for
electronic health record technology. Washington (DC): Department of
Health and Human Services; 2010 [45 CFR Part 170].
[11] Bates DW, Kuperman GJ, Wang S, Gandhi TK, Kittler A, Volk LA, et al. Ten
commandments for effective clinical decision support: making the practice of
evidence-based medicine a reality. J Am Med Inform Assoc
2003;10(6):523–30.
[12] Osheroff JA, Pifer EA, Teich JM, Sittig DF, Jenders RA. Improving outcomes
with clinical decision support: an implementer’s guide. 2nd ed. Chicago
(IL): Healthcare Information and Management Systems Society; 2011.
[13] Paterno M, Maviglia SM, Gorman PN, Seger DL, Yoshida E, Seger AC, et al.
Tiering drug–drug interaction alerts by severity increases compliance rates. J
Am Med Inform Assoc 2009;16(1):40–6.
[14] Koppel R, Kreda DA. Healthcare IT usability and suitability for clinical needs:
challenges of design, workflow, and contractual relations. Stud Health
Technol Inform 2010;157:7–14.
[15] Smelcer JB, Miller-Jacobs H, Kantrovich L. Usability of electronic medical
records. J Usabil Stud 2009;4(2):70–84.
[16] Greenes RA. Features of computer-based clinical decision support. In:
Greenes RA, editor. Clinical decision support: the road
ahead. Boston: Elsevier Academic Press; 2007. p. 79–108.
[17] Kuperman GJ, Bobb A, Payne TH, Avery AJ, Gandhi TK, Burns G, et al.
Medication-related clinical decision support in computerized provider order
entry systems: a review. J Am Med Inform Assoc 2007;14(1):29–40.
[18] Perreault LE, Metzger JB. A pragmatic framework understanding clinical
decision support. J Healthc Inf Manag 1999;13(2):5–21.
[19] Liaw ST, Pradhan M. Clinical decision support implementations. In: Hovenga
EJS, Kidd MR, Garde S, editors. Health informatics: an overview. Washington
(DC): IOS Press; 2010. p. 296–311.
[20] Chaffee BW, Zimmerman CR. Developing and implementing clinical decision
support for use in a computerized prescriber-order-entry system. Am J Health
Syst Pharm 2010;67(5):391–400.
[21] Garg A, Adhikari N, McDonald H, Rosas-Arellano M, Devereaux P, Beyene J,
et al. Effects of computerized clinical decision support systems on
practitioner performance and patient outcomes: a systematic review. J Am
Med Assoc 2005;293:1223–38.
[22] Li SYW, Magrabi F, Coiera E. A systematic review of the psychological
literature on interruption and its patient safety implications. J Am Med
Inform Assoc 2011;19(1):6–12.
[23] Glassman PA, Simon B, Belperio P, Lanto A. Improving recognition of drug
interactions: benefits and barriers to using automated drug alerts. Med Care
2002;40(12):1161–71.
[24] Westbrook JI, Coiera EW, Dunsmuir WT, Brown BM, Kelk N, Paoloni R, et al.
The impact of interruptions on clinical task completion. Qual Saf Health Care
2010;19(4):284–9.
[25] Zhang J. Representations of health concepts: a cognitive perspective. J Biomed
Inform 2002;35(1):17–24.
[26] Zhang J. A representational analysis of relational information displays. Int J
Hum–Comput Stud 1996;45(1):59–74.
[27] Saleem JJ, Russ AL, Sanderson P, Johnson TR, Zhang J, Sittig DF. Current
challenges and opportunities for better integration of human factors research
with development of clinical information systems. Yearb Med Inform
2009:48–58.
[28] HIMSS Usability Task Force. Defining and testing EMR usability: principles
and proposed methods of EMR usability evaluation and rating; 2009.
[29] Schulman J, Kuperman GJ, Kharbanda A, Kaushal R. Discovering how to think
about a hospital patient information system by struggling to evaluate it: a
committee’s journal. J Am Med Inform Assoc 2007;14(5):537–41.
[30] Berner ES. Clinical decision support systems: state of the art. Rockville
(MD): Agency for Healthcare Research and Quality; 2009.
[31] McDonnell C, Werner K, Wendel L. Electronic health record usability: vendor
practices and perspectives. Rockville (MD): Agency for Healthcare Research
and Quality; 2010 [AHRQ Publication No. 09(10)-0091-3-EF].
[32] Greenes RA. Clinical decision support: the road ahead. Boston: Elsevier
Academic Press; 2007.
[33] Shneiderman B. Designing the user interface: strategies for effective
human–computer interaction. 4th ed. Reading (MA): Addison Wesley Longman;
2004.
[34] Schumacher RM, Webb JM, Johnson KR. How to select an electronic health
record system that healthcare professionals can use. Oakbrook Terrace (IL);
2009.
[35] Microsoft Health. Common User Interface (CUI).
[36] Schumacher RM, Lowry SZ. NIST Guide to the processes approach for
improving the usability of electronic health records. Washington
(DC): National Institute of Standards and Technology; 2010 [NISTIR 7741].
[37] Lowry SZ, Quinn MT, Ramaiah M, Schumacher RM, Patterson ES, North R,
et al. Technical evaluation, testing and validation of the usability of electronic
health records. National Institute of Standards and Technology; 2012 [NISTIR
7804].
[38] Agarwal R, Anderson C, Crowley K, Kannan P. Understanding development
methods from other industries to improve the design of consumer health IT:
background report. Rockville (MD): Agency for Healthcare Research and
Quality; 2011 [AHRQ Publication No. 11-0065-EF].
[39] Armijo D, McDonnell C, Werner K. Electronic health record usability:
evaluation and use case framework. Rockville (MD): Agency for Healthcare
Research and Quality; 2009 [AHRQ Publication No. 09(10)-0091-1-EF].
[40] Armijo D, McDonnell C, Werner K. Electronic health record usability:
interface design considerations. Rockville (MD): Agency for Healthcare
Research and Quality; 2009 [AHRQ Publication No. 09(10)-0091-2-EF].
[41] Osheroff JA. Improving medication use and outcomes with clinical decision
support: a step-by-step guide. Chicago (IL): Healthcare Information and
Management Systems Society; 2009.
[42] HIMSS Usability Task Force. Promoting usability in health organizations:
initial steps and progress toward a healthcare usability maturity model;
2011.
[43] Committee on Patient Safety and Health Information Technology; Institute of
Medicine. Health IT and patient safety: building safer systems for better
care. Washington (DC): The National Academies Press; 2011.
[44] Wears RL, Cook RI, Perry SJ. Automation, interaction, complexity, and failure:
a case study. Reliab Eng Syst Saf 2006;91(12):1494–501.
1214 J. Horsky et al. / Journal of Biomedical Informatics 45 (2012) 1202–1216
[45] Magrabi F, Ong MS, Runciman W, Coiera E. An analysis of computer-related
patient safety incidents to inform the development of a classification. J Am
Med Inform Assoc 2010;17(6):663–70.
[46] Zhang J, Patel VL, Johnson TR, Shortliffe EH. A cognitive taxonomy of medical
errors. J Biomed Inform 2004;37(3):193–204.
[47] Holden RJ. Cognitive performance-altering effects of electronic medical
records: an application of the human factors paradigm for patient safety.
Cogn Technol Work 2011;13(1):11–29.
[48] Bates DW, Teich JM, Lee J, Seger DL, Kuperman GJ, Ma’Luf N, et al. The impact
of computerized physician order entry on medication error prevention. J Am
Med Inform Assoc 1999;6(4):313–21.
[49] Horsky J, Kuperman GJ, Patel VL. Comprehensive analysis of a medication
dosing error related to CPOE. J Am Med Inform Assoc 2005;12(4):377–82.
[50] FitzHenry F, Peterson JF, Arrieta M, Waitman LR, Schildcrout JS, Miller RA.
Medication administration discrepancies persist despite electronic ordering. J
Am Med Inform Assoc 2007;14(6):756–64.
[51] Koppel R, Wetterneck T, Telles JL, Karsh B-T. Workarounds to barcode
medication administration systems: their occurrences, causes, and threats to
patient safety. J Am Med Inform Assoc 2008;15(4):408–23.
[52] McDonald CJ. Computerization can create safety hazards: a bar-coding near
miss. Ann Intern Med 2006;144(7):510–6.
[53] Nebeker JR, Hoffman JM, Weir CR, Bennett CL, Hurdle JF. High rates of adverse
drug events in a highly computerized hospital. Arch Intern Med
2005;165(10):1111–6.
[54] Eslami S, Abu-Hanna A, de Keizer NF, de Jonge E. Errors associated with
applying decision support by suggesting default doses for aminoglycosides.
Drug Saf 2006;29(9):803–9.
[55] Shulman R, Singer M, Goldstone J, Bellingan G. Medication errors: a
prospective cohort study of hand-written and computerised physician
order entry in the intensive care unit. Crit Care 2005;9(5):R516–21.
[56] Koppel R, Metlay JP, Cohen A, Abaluck B, Localio AR, Kimmel SE, et al. Role of
computerized physician order entry systems in facilitating medication errors.
J Am Med Assoc 2005;293(10):1197–203.
[57] van der Sijs H, van Gelder T, Vulto A, Berg M, Aarts J. Understanding handling
of drug safety alerts: a simulation study. Int J Med Inf 2010;79(5):361–9.
[58] Wetterneck TB, Walker JM, Blosky MA, Cartmill SR, Hoonakker P, Johnson MA,
et al. Factors contributing to an increase in duplicate medication order
errors after CPOE implementation. J Am Med Inform Assoc
2011;18(6):774–82.
[59] US Army. Manpower and personnel integration (MANPRINT) in the materiel
acquisition process. Washington (DC): Department of Defense; 2001 [Army
Regulation 602-2].
[60] O’Hara JM, Higgins JC, Persensky JJ, Lewis PM, Bongarra JP. Human factors
engineering program review model. Washington (DC): Office of Nuclear
Regulatory Research; 2004 [NUREG-0711, Rev. 2].
[61] Ahlstrom V, Longo K. Human factors design standard: human interface
guidelines. Washington (DC): Federal Aviation Administration; 2003 [HF-STD-001].
[62] Wickens CD, Gordon SE, Liu Y. An introduction to human factors engineering.
2nd ed. Upper Saddle River (NJ): Pearson Prentice Hall; 2004.
[63] HIMSS. Interoperability definition and background; 2005.
[64] Nielsen J. Usability engineering. Boston: Academic Press; 1993.
[65] Wiklund ME. Software user interfaces. In: Weinger MB, Gardner-Bonneau D,
Wiklund ME, Kelly LM, editors. Handbook of human factors in medical device
design. Boca Raton (FL): CRC Press; 2011. p. 425–70.
[66] Human Factors Research and Engineering Group. Baseline requirements for
color use in air traffic control displays. Washington (DC): US Department of
Transportation; 2007 [HF-STD-002].
[67] Nielsen J, Loranger H. Prioritizing web usability. Berkeley (CA): New Riders;
2006.
[68] Bobb A, Gleason K, Husch M, Feinglass J, Yarnold PR, Noskin GA. The
epidemiology of prescribing errors: the potential impact of computerized
prescriber order entry. Arch Intern Med 2004;164(7):785–92.
[69] Varonen H, Kortteisto T, Kaila M. What may help or hinder the
implementation of computerized decision support systems (CDSSs): a focus
group study with physicians. Fam Pract 2008;25(3):162–7.
[70] Trafton J, Martins S, Michel M, Wang D, Tu S, Clark D, et al. Designing
an automated clinical decision support system to match clinical practice
guidelines for opioid therapy for chronic pain. Implement Sci 2010;5(1):26.
[71] Alexander GL. Issues of trust and ethics in computerized clinical decision
support systems. Nurs Adm Q 2006;30(1):21–9.
[72] Ahearn MD, Kerr SJ. General practitioners’ perceptions of the pharmaceutical
decision-support tools in their prescribing software. Med J Aust
2003;179(1):34–7.
[73] Sittig DF, Wright A, Osheroff JA, Middleton B, Teich JM, Ash JS, et al. Grand
challenges in clinical decision support. J Biomed Inform 2008;41(2):387–92.
[74] Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS.
Physicians’ decisions to override computerized drug alerts in primary care.
Arch Intern Med 2003;163(21):2625–31.
[75] Sequist TD, Gandhi TK, Karson AS, Fiskio JM, Bugbee D, Sperling M, et al. A
randomized trial of electronic clinical reminders to improve quality of care
for diabetes and coronary artery disease. J Am Med Inform Assoc
2005;12(4):431–7.
[76] Walter Z, Lopez MS. Physician acceptance of information technologies: role of
perceived threat to professional autonomy. Decis Support Syst
2008;46(1):206–15.
[77] Vashitz G, Meyer J, Parmet Y, Peleg R, Goldfarb D, Porath A, et al. Defining and
measuring physicians’ responses to clinical reminders. J Biomed Inform
2009;42(2):317–26.
[78] Morris AH. Developing and implementing computerized protocols for
standardization of clinical decisions. Ann Intern Med 2000;132(5):373–83.
[79] van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in
computerized physician order entry. J Am Med Inform Assoc
2006;13(2):138–47.
[80] Stead WW, Lin H; National Research Council (US) Committee on Engaging the
Computer Science Research Community in Health Care Informatics.
Computational technology for effective health care: immediate steps and
strategic directions. Washington (DC): National Academies Press; 2009.
[81] Sittig DF, Wright A, Simonaitis L, Carpenter JD, Allen GO, Doebbeling BN, et al.
The state of the art in clinical knowledge management: an inventory of tools
and techniques. Int J Med Inf 2010;79(1):44–57.
[82] Kuperman GJ, Gandhi TK, Bates DW. Effective drug–allergy checking:
methodological and operational issues. J Biomed Inform 2003;36(1–2):70–9.
[83] Kuperman G, Marston E, Paterno MD, Rogala J, Plaks N, Hanson C, et al.
Creating an enterprise-wide allergy repository at Partners HealthCare
System. AMIA Annu Symp Proc 2003:376–80.
[84] Hsieh TC, Kuperman GJ, Jaggi T, Hojnowski-Diaz P, Fiskio JM, Williams DH,
et al. Characteristics and consequences of drug allergy alert overrides in a
computerized physician order entry system. J Am Med Inform Assoc
2004;11(6):482–91.
[85] Zimmerman CR, Chaffee BW, Lazarou J, Gingrich CA, Russell CL, Galbraith M,
et al. Maintaining the enterprisewide continuity and interoperability of
patient allergy data. Am J Health Syst Pharm 2009;66(7):671–9.
[86] Kuperman GJ, Teich JM, Gandhi TK, Bates DW. Patient safety and
computerized medication ordering at Brigham and Women’s Hospital. Jt
Comm J Qual Improv 2001;27(10):509–21.
[87] Goldberg SI, Shubina M, Niemierko A, Turchin A. A weighty problem:
identification, characteristics and risk factors for errors in EMR data. AMIA
Annu Symp Proc 2010;2010:251–5.
[88] Haerian K, McKeeby J, Dipatrizio G, Cimino JJ. Use of clinical alerting to
improve the collection of clinical research data. AMIA Annu Symp Proc
2009;2009:218–22.
[89] Broverman CA, Schlesinger JM, Sperzel WD, Kapusnik-Uner J. The future of
knowledge-based components in the electronic health record. Stud Health
Technol Inform 1998;52(Pt 1):457–61.
[90] Anonymous. Eclipsys introduces open platform to transform health IT and
drive electronic health record adoption. Healthcare Technology; 2010.
[91] Cimino JJ, Patel VL, Kushniruk AW. Studying the human–computer–terminology
interface. J Am Med Inform Assoc 2001;8(2):163–73.
[92] Tinker MA. Bases for effective reading. Minneapolis: University of Minnesota
Press; 1965.
[93] The Joint Commission. List of look-alike, sound-alike drugs; 2010.
[94] The Joint Commission. The official do not use list; 2010.
[95] US Food and Drug Administration. Name differentiation project; 2001.
[96] Filik R, Purdy K, Gale A, Gerrett D. Drug name confusion: evaluating the
effectiveness of capital (‘‘Tall Man’’) letters using eye movement data. Soc Sci
Med 2004;59(12):2597–601.
[97] ISMP. ISMP updates its list of drug name pairs with TALL man letters. ISMP
medication safety alert, vol. 15, no. 23; 2010.
[98] Microsoft Corporation. Windows user experience interaction guidelines;
2010.
[99] The Joint Commission. Control of concentrated electrolyte solutions; 2007.
[100] Khajouei R, de Jongh D, Jaspers MW. Usability evaluation of a computerized
physician order entry for medication ordering. Stud Health Technol Inform
2009;150:532–6.
[101] Horsky J, Kaufman DR, Patel VL. Computer-based drug ordering: evaluation of
interaction with a decision-support system. Medinfo 2004;11(Pt 2):1063–7.
[102] Feldstein A, Simon SR, Schneider J, Krall M, Laferriere D, Smith DH, et al. How
to design computerized alerts to safe prescribing practices. Jt Comm J Qual
Saf 2004;30(11):602–13.
[103] Zhang J, Walji MF. TURF: toward a unified framework of EHR usability. J
Biomed Inform 2011;44(6):1056–67.
[104] Schiff GD, Klass D, Peterson JF, Shah G, Bates DW. Linking laboratory and
pharmacy: opportunities for reducing errors and improving care. Arch Intern
Med 2003;163(8):893–900.
[105] Miller RA, Waitman LR, Chen S, Rosenbloom ST. The anatomy of decision
support during inpatient care provider order entry (CPOE): empirical
observations from a decade of CPOE experience at Vanderbilt. J Biomed
Inform 2005;38(6):469–85.
[106] Payne TH, Hoey PJ, Nichol P, Lovis C. Preparation and use of preconstructed
orders, order sets, and order menus in a computerized provider order entry
system. J Am Med Inform Assoc 2003;10(4):322–9.
[107] Gardner RM. Computerized clinical decision-support in respiratory care.
Respir Care 2004;49(4):378–86.
[108] Magid SK, Pancoast PE, Fields T, Bradley DG, Williams RB. Employing clinical
decision support to attain our strategic goal: the safe care of the surgical
patient. J Healthc Inf Manag 2007;21(2):18–25.
[109] Rosenberg SN, Sullivan M, Juster IA, Jacques J. Overrides of medication alerts
in ambulatory care. Arch Intern Med 2009;169(14):1337.
[110] Shah NR, Seger AC, Seger DL, Fiskio JM, Kuperman GJ, Blumenfeld B, et al.
Improving acceptance of computerized prescribing alerts in ambulatory care.
J Am Med Inform Assoc 2006;13(1):5–11.
[111] Chertow GM, Lee J, Kuperman GJ, Burdick E, Horsky J, Seger DL, et al. Guided
medication dosing for inpatients with renal insufficiency. J Am Med Assoc
2001;286(22):2839–44.
[112] Wipfli R, Lovis C. Alerts in clinical information systems: building frameworks
and prototypes. Stud Health Technol Inform 2010;155:163–9.
[113] McFall RM, Treat TA. Quantifying the information value of clinical
assessments with signal detection theory. Annu Rev Psychol
1999;50:215–41.
[114] Ong M-S, Coiera EW. Evaluating the effectiveness of clinical alerts: a signal
detection approach. AMIA Annu Symp Proc 2011:1036–44.
[115] Graham KC, Cvach M. Monitor alarm fatigue: standardizing use of
physiological monitors and decreasing nuisance alarms. Am J Crit Care
2010;19(1):28–34 [quiz 35].
[116] Classen DC, Phansalkar S, Bates DW. Critical drug–drug interactions for use in
electronic health records systems with computerized physician order entry:
review of leading approaches. J Patient Saf 2011;7(2):61–5.
[117] Ko Y, Abarca J, Malone DC, Dare DC, Geraets D, Houranieh A, et al.
Practitioners’ views on computerized drug–drug interaction alerts in the
VA system. J Am Med Inform Assoc 2007;14(1):56–64.
[118] NASA. NASA color usage web page; 2004 [March 22].
[119] Sears A, Jacko JA. Human–computer interaction. Designing for diverse users
and domains. Boca Raton: CRC Press; 2009.
[120] Spalding JA. Colour vision deficiency in the medical profession. Br J Gen Pract
1999;49(443):469–75.
[121] Rigden C. Now you see it, now you don’t. IEEE Comput 2002;35(7):104–5.
[122] McCoy AB, Waitman LR, Lewis JB, Wright JA, Choma DP, Miller RA, et al. A
framework for evaluating the appropriateness of clinical decision support
alerts and responses. J Am Med Inform Assoc 2012;19(3):346–52.
[123] van den Bemt PM, Egberts AC, Lenderink AW, Verzijl JM, Simons KA, van der
Pol WS, et al. Risk factors for the development of adverse drug events in
hospitalized patients. Pharm World Sci 2000;22(2):62–6.
[124] McDonald CJ, Wilson GA, McCabe Jr GP. Physician response to computer
reminders. J Am Med Assoc 1980;244(14):1579–81.
[125] Dean B, Schachter M, Vincent C, Barber N. Causes of prescribing errors in
hospital inpatients: a prospective study. Lancet 2002;359(9315):1373–8.
[126] Miller RA, Gardner RM, Johnson KB, Hripcsak G. Clinical decision support and
electronic prescribing systems: a time for responsible thought and action. J
Am Med Inform Assoc 2005;12(4):403–9.
[127] Gardner RM, Evans RS. Using computer technology to detect, measure, and
prevent adverse drug events. J Am Med Inform Assoc 2004;11(6):535–6.
[128] Tamblyn R, Huang A, Taylor L, Kawasumi Y, Bartlett G, Grad R, et al. A
randomized trial of the effectiveness of on-demand versus computer-triggered
drug decision support in primary care. J Am Med Inform Assoc
2008;15(4):430–8.
[129] Raschke RA, Gollihare B, Wunderlich TA, Guidry JR, Leibowitz AI, Peirce JC,
et al. A computer alert system to prevent injury from adverse drug events:
development and evaluation in a community teaching hospital. J Am Med
Assoc 1998;280(15):1317–20.
[130] Sengstack PP. CPOE configuration to reduce medication errors: a literature
review on the safety of CPOE systems and design recommendations. J Healthc
Inf Manag 2010;24(4):26–34.
[131] Patel VL, Zhang J. Patient safety in health care. In: Durso FT, Nickerson RS,
editors. Handbook of applied cognition. Hoboken (NJ): J. Wiley; 2007. p.
307–31.
[132] Schumacher RM, Lowry SZ. Customized common industry format template
for electronic health record usability testing. Washington (DC): National
Institute of Standards and Technology; 2010 [NISTIR 7742].
[133] Zhang J, Patel VL, Johnson TR. Medical error: is the solution medical or
cognitive? J Am Med Inform Assoc 2002;9(6 Suppl.):S75–7.