Appl Clin Inform 2023; 14(05): 843-854
DOI: 10.1055/a-2150-8523
Research Article

Governance of Electronic Health Record Modification at U.S. Academic Medical Centers

Akshay Ravi
1   Department of Medicine, University of California, San Francisco, San Francisco, California, United States
,
Simone Arvisais-Anhalt
2   Department of Laboratory Medicine, University of California, San Francisco, San Francisco, California, United States
,
Benjamin Weia
1   Department of Medicine, University of California, San Francisco, San Francisco, California, United States
,
Raman Khanna
1   Department of Medicine, University of California, San Francisco, San Francisco, California, United States
,
Julia Adler-Milstein
1   Department of Medicine, University of California, San Francisco, San Francisco, California, United States
,
Andrew Auerbach
1   Department of Medicine, University of California, San Francisco, San Francisco, California, United States
Funding This work was the product of an unfunded trainee project.
 

Abstract

Objectives A key aspect of electronic health record (EHR) governance involves the approach to EHR modification. We report a descriptive study to characterize EHR governance at academic medical centers (AMCs) across the United States.

Methods We conducted interviews with the Chief Medical Information Officers of 18 AMCs about the process of EHR modification for standard requests. Recordings of the interviews were analyzed to identify categories within prespecified domains. Responses were then assigned to categories for each domain.

Results At our AMCs, EHR requests were governed variably, with a similar number of sites using quantitative scoring systems (7, 38.9%), qualitative systems (5, 27.8%), or no scoring system (6, 33.3%). Two (11%) organizations formally reviewed all requests for their impact on health equity. Although 14 (78%) organizations had trained physician builders/architects, their primary role was not EHR build. The most commonly reported governance challenges included request volume (11, 61%), integrating diverse clinician input (3, 17%), and stakeholder buy-in (3, 17%). The slowest step in the process was clarifying end user requests (14, 78%). Few leaders had identified metrics for the success of EHR governance.

Conclusion Governance approaches for managing EHR modification at AMCs are highly variable, which suggests ongoing efforts to balance EHR standardization and maintenance burden, while dealing with a high volume of requests. Developing metrics to capture the performance of governance and quantify problems may be a key step in identifying best practices.



Background and Significance

Governance of information technology (IT) systems is the process by which an enterprise steers effective and efficient use of IT systems to achieve institutional goals.[1] [2] As the U.S. health care system digitized, the first wave of health IT governance efforts and associated research focused on electronic health record (EHR) implementation and on establishing oversight for enterprise-wide information systems project management, including the funding and approval of new integrated hardware or software systems.[3] As EHRs have matured, an emergent, pressing need in governance is the approach to managing requests for modifying EHR systems to meet patient and provider needs as well as overall enterprise goals. Given the complex and interconnected nature of enterprise EHRs, managing modification requests requires balancing the differing priorities of frontline users, analysts and IT teams, leaders at various levels within the organization, and external stakeholders, in the context of finite EHR resources. These priorities are shaped by diverse organizational goals, including patient safety, quality of care, regulatory compliance, and revenue capture, and are constrained by practical limitations due to shortages in the nursing/provider informaticist workforce.

While there is a robust literature examining health IT governance as it relates to implementation,[4] [5] few studies focus on the major current challenge of managing the deluge of requests for modifying complex EHRs.[2] [3] [6] [7] [8] [9] [10] Although organizations like the Healthcare Information and Management Systems Society may individually collect data about health record governance for their Electronic Medical Record Adoption Model staging certifications, these data are not publicly available for review or comparison.[11] Existing work describes single-institution approaches or panel reflections on the EHR modification process; however, this work does not systematically characterize and compare approaches.[12] [13] [14] [15] Indeed, we are not aware of any key performance indicator (KPI) or other metric to compare strategies, which may be key to understanding best practices and, ultimately, streamlining the process of improving patient care through the EHR.



Objectives

We sought to take the first step toward filling these gaps by characterizing the EHR modification process at 18 academic medical centers (AMCs) across the country as well as leaders' impressions of challenges and opportunities in IT governance at their sites.



Methods

Setting and Participants

We selected AMCs because of their relative similarities with regard to size, setting, institutional mission, health IT maturity, and anticipated complexity of EHR governance needs. Other settings like community hospital systems or county hospital systems were not included because they may have a different scope of priorities and scale of resources. AMCs were defined as health centers with affiliated residency training programs. We further restricted our cohort to 25 AMCs with Clinical Informatics fellowship programs for two reasons: first, AMCs with an informatics fellowship are more likely to have a mature EHR system and IT resources; and second, clinical informatics fellows at these programs could facilitate connections with Chief Medical Information Officers (CMIOs) who might otherwise be inaccessible. We excluded county health systems, private for-profit health systems, community hospitals, and Veterans Affairs medical centers.

We targeted CMIOs for interviews because this role would have the greatest overall insight into the EHR modification process. To mitigate the possibility that CMIOs might be unfamiliar with the minutiae of governance, we instructed CMIOs to delegate the interview to another staff member with more knowledge of the subject if appropriate, provided the interview questions in advance so they could prepare for any specific details they might not be familiar with, and allowed CMIOs to gather necessary information and follow up after the interview via email if there were any details they felt unequipped to answer in the moment. Interview outreach was conducted by email, either directly or via hand-off through contacts within the American Medical Informatics Association of Clinical Informatics Fellows.



Protocol

Our interview protocol was developed in collaboration with experts in informatics, grouped into themes identified by individuals with experience in EHR governance, and pretested with two physician informaticists unconnected to the project to ensure content validity. Interview questions were organized in the order in which requests might be handled. Each interview began with a standard scenario to focus the scope of the study:

“A physician at your institution has noticed that most other providers in their practice have not been ordering standard surveillance labs for patients admitted for inpatient administration of a particular chemotherapy. They would like to create a new orderset that groups and pre-selects these labs in addition to the drug.”

We then asked structured and open-ended questions organized into the following areas as they pertain to the governance process at their primary/core AMC: (1) the process for requesting a change to the EHR, (2) prioritization and evaluation of each request, (3) building the request and communication with the requester on the status of the build, and (4) postbuild monitoring. We also asked each respondent to provide a summary assessment of the strengths and weaknesses of the organization's EHR modification process, along with supporting documents like charters and organizational charts. Given the growing attention to socioeconomic disparities in health care as well as a recognition of the role that the EHR can play in this area, health equity considerations were specifically evaluated to better understand the ways organizations may or may not consider equity during the EHR modification process.[16] [17]

In total, 16 dimensions of EHR governance were addressed. Our full protocol is included in [Supplemental Table S1] (available in online version). Of note, not all interview questions corresponded directly to a single dimension of governance: some questions were open-ended ([Supplemental Table S1], question 21), some were aimed at soliciting supplementary documentation ([Supplemental Table S1], questions 19–20), and some were ultimately excluded from the final analysis because responses could not be standardized across organizations ([Supplemental Table S1], questions 12–14).

Interviews were conducted and recorded by three separate interviewers via videoconferences between February 2022 and July 2022. Recordings were transcribed and analyzed to identify specific categories of responses within each predefined interview dimension.

Identification of categories of responses followed a three-step process to ensure consistency of interpretation. First, the primary interviewer summarized each dimension into one or more categories based on the interview transcript; a second interviewer (not part of the given interview) then reviewed the transcript to flag any disagreements. Next, any discrepancies between the first two reviewers were reconciled after a review of the original video recording and consensus among all three study interviewers. Finally, after all potential categories of responses for a dimension of governance were identified, each interview transcript was reviewed to assign one or more categories per dimension. We selected quotes for each dimension that illustrated the different categories and integrated them into our results reporting.

For some dimensions (“Method of Intake,” “Type of Scoring System,” “Health Equity Consideration,” etc.), categories of responses were mutually exclusive, and therefore, each AMC could fit into a single category; for other dimensions (“Members of Governance Team,” “Elements of Scoring System,” “Challenges to Governance,” etc.), each AMC could be assigned to multiple categories.

For example, for the dimension Type of Scoring System ([Table 1]), the three-step review process resulted in a categorization of how each AMC approached the governance task: (1) a quantitative scoring system, (2) a qualitative scoring system, or (3) no scoring system. We then assigned each AMC to the category that best represented the description of their approach.

Table 1

Responses to interview questions about the process of electronic health record governance

Section of interview > Dimension of governance > Category of response: No. of AMCs, n (%)

Request intake
 Method of intake
  Online portal: 15 (83.3)
   ServiceNow: 11 (73.3)[a]
  Direct communication (email/phone alone): 3 (16.7)

Request evaluation
 Type of scoring system
  Quantitative: 7 (38.9)
  Qualitative: 5 (27.8)
  No scoring system: 6 (33.3)
 Previously had scoring system
  Yes: 5 (45)[b]
 Health equity consideration
  Formally considered: 2 (11.1)
  Informally considered: 9 (50)
  Not considered: 7 (38.9)
 Top 3 members of governance team
  Physician informaticists: 18 (100)
  Nursing informaticists: 17 (94)
  IT staff: 17 (94)
 Top 3 elements of scoring system
  Implementation time: 9 (75)[c]
  Patient safety: 8 (66.7)[c]
  Scale of providers, hospitals, areas impacted: 8 (66.7)[c]

Build and build communication
 Method of communication with requesters
  Online portal: 14 (77.8)
   ServiceNow: 12 (85.7)[a]
  Email/phone alone: 3 (16.7)
  No standard communication: 1 (5.6)
 Use of SLAs
  Break–fix response time: 7 (38.9)
  Request to build time: 5 (27.8)
  None: 8 (44.4)
 Trained physician builders/architects
  Yes: 14 (77.8)
 Regularly use physician builders/architects for builds
  Yes: 0 (0)

Monitoring and feedback
 Monitoring builds
  Standard monitoring: 11 (61.1)
  Conditional monitoring: 5 (27.8)
  Regular review of ordersets: 5 (27.8)
  No monitoring: 3 (16.7)
 Feedback on builds
  General feedback: 11 (61.1)
   In-process feedback: 2 (11.1)
   Surveys: 8 (44.4)
   EHR demonstration "road shows": 1 (5.6)
  Specific feedback: 10 (55.6)
   In-process feedback: 8 (44.4)
   Individual/group solicitation: 2 (11.1)
  No channels for feedback: 2 (11.1)

Summary
 Top 3 challenges to governance
  Supply/demand: 11 (61.1)
   Not enough staff: 4 (36.3)[d]
   Excess of requests: 6 (54.5)[d]
   Unspecified: 1 (9.1)[d]
  Diverse clinician representation/input: 3 (16.7)
  Stakeholder buy-in to governance process: 3 (16.7)
 Top 3 strengths
  Experience/institutional memory: 8 (44.4)
  Relationships with SMEs/users: 4 (22.2)
  Relationships with leadership and IT: 3 (16.7)
 Top 3 rate-limiting steps
  Clarifying requests: 14 (77.8)
  Negotiation between stakeholders: 3 (16.7)
  Governance process: 2 (11.1)
 Measures of the success of governance
  Outcome metrics: 11 (61.1)
  Process metrics: 13 (72.2)
  Not measured: 4 (30.8)

Abbreviations: AMC, academic medical center; EHR, electronic health record; IT, information technology; SLA, service level agreement.


a Percentage is out of the AMCs that use an online portal.


b Percentage is out of the AMCs that use a qualitative system or no scoring system.


c Percentage is out of the AMCs that use a qualitative or quantitative scoring system.


d Percentage is out of the AMCs that reported supply and demand as a challenge to governance.


We used data from the American Hospital Association Data & Insights survey and IT supplement survey to characterize participating hospitals.[18] All study procedures were approved by the University of California, San Francisco (UCSF) Institutional Review Board. Qualitative data analysis and summarization were conducted using ATLAS.ti version 22 and R version 4.1.2.



Results

Of the 25 CMIOs invited, 18 (72%) responded and completed an interview ([Table 2]). The majority of AMCs were not-for-profit (n = 14, 78%) and used an EHR from Epic Systems (n = 13, 72%). Geographic representation included AMCs from the West, Midwest, Southwest, Southeast, and Northeast of the United States. The median annual hospital admissions and outpatient visits were 33,100 (interquartile range [IQR]: 27,900–43,400) and 973,100 (IQR: 534,800–1,306,700), respectively. Interview results and sample illustrative quotes are summarized by section in [Tables 1] and [3], respectively.

Table 2

Demographic features of academic medical centers interviewed

Feature: Number of AMCs, n = 18

Type of ownership
 Not-for-profit system: 14 (78%)
 State/government hospital system: 4 (22%)
EHR type
 Epic Systems: 13 (72%)
 Cerner Corporation: 3 (17%)
 Other: 2 (11%)
Geographic distribution
 West: 5 (28%)
 Midwest: 5 (28%)
 Northeast: 5 (28%)
 Southwest: 2 (11%)
 Southeast: 1 (5%)
Annual hospital admissions, median (IQR): 33,059.5 (27,920.75–43,355.25)
Annual outpatient visits, median (IQR): 973,100.5 (534,814.8–1,306,689.5)

Abbreviations: AMC, academic medical center; EHR, electronic health record; IQR, interquartile range.


Table 3

Illustrative quotes by area of interview

Request intake/method of intake
 Online portal: "…any end user could do this, generally they would put in a service ticket to our service, which would send an email to our incident management system"
 Direct communication (email/phone alone): "…in a lot of cases, people know who [the SME] is and they'll go through them. In other cases…they'll reach out to one of our provider informaticists"

Request evaluation/type of scoring system
 Quantitative: "…And there's a prioritization rubric to score a given initiative based on things like complexity, cost to build, scalability. We tend not to prioritize things for Doctor so and so, or for a given nursing unit"
 Qualitative: "Scoring systems seem to fail to capture the timeliness of some of the requests that are required…we rank prioritized based on a number of buckets…regulatory compliance…financial saving…enhanced revenue…patient safety or quality indicator…So there's this detailed prioritization that comes within the work groups…Ultimately…if we have too much ask…then [my EHR team manager] and I will further refine it. I'm sort of the endpoint arbitrarily…. I suspect it would be possible to score that out, but we tend to use a much more credibility and trust [based] methodology to try to stay very close to being aligned with what the strategic imperatives of the current state are for the organization"
 No scoring system: "There's no explicit scoring system. All [prioritization decisions are] judgments on the analysts' part"

Request evaluation/previously had scoring system
 Yes: "We have attempted to use multiple different scoring systems, none of which we have found to be broadly and consistently helpful"

Request evaluation/health equity consideration
 Formally considered: "One of the dimensions in our scoring system…is whether it promotes equity…part of the clinical oversight that we perform…[is to] assure that nothing worsens equity or that we are considering the equity dimensions of anything that we do"
 Informally considered: "[Health equity is considered in prioritization] in the sense of our impact…We don't score health equity separately from impact"
 Not considered: "[Health equity is] not typically used to differential. We get requests related to [health equity], but we put them through the same process…"
 Not considered: "I'd be hard pressed to think about ways in which health equity might be either positively or negatively affected by…the workflow. So in many cases, this is not relevant to the work…"

Build and build communication/method of communication with requesters
 Online portal: "We have an explicit ticketing process that generates a trackable entity that is shared back to the requester. And if they want to actually dig in, we make it transparent so they can see the trail of how [the request] is progressing"
 Email/phone alone: "[The requester is] going to be in communication with our analyst team…and [they are] going to let them know when [the request] is going into production and when it is going to be available"
 No standard communication: "…[the requesters] have to go back to someone like myself and say, can you find out where this thing is in the queue"

Build and build communication/use of SLAs
 Break–fix response time: "…there are service level agreements for responding to the customer and for handling the incident. For actual enhancement requests…I don't think they are directly applicable"
 Request to build time: "…When we hit that final prioritization, that puts a target date [on the request]…which is when it's supposed to be in production"
 None: "In terms of communication with customers regarding a build, I'm not sure that we [have SLAs]"

Build and build communication/regularly use physician builders/architects for builds
 Yes: "The majority of the build is by analysts. We do have physician builders and some things that would not otherwise get built, the physician builders do"
 Yes: "physician builders are a small cadre…[most] are certified but do very little build. Physician builders are very helpful to be advisers to the build analysts"

Monitoring and feedback/monitoring builds
 Standard monitoring: "We have high level metrics and we do continuous alert optimization…there's [also] some sort of post move to production evaluation that happens"
 Conditional monitoring: "Sometimes reporting the surveillance is part of the request…if there's an owner who can monitor how things are going, we build that in as part of their [request]"
 Regular review of ordersets: "We keep data on how often ordersets are used and we have a process such that every orderset must be reviewed and revalidated by its clinical leadership owner…no less often than once every two years"
 No monitoring: "We don't just routinely monitor [new requests] after they're built in live, other than testing that they are functioning correctly in production"

Monitoring and feedback/feedback on builds
 General feedback, in-process feedback: "…there is a link in the EHR for you to submit a suggestion or make a request…that brings you to the [ticketing platform submission page]"
 General feedback, surveys: "…every other year we survey all the physicians and we have a set of questions in this biannual physician survey, specifically around EHR"
 General feedback, road shows: "…our team goes to more than 100 meetings, twice a year and we present [upcoming EHR updates]…and we ask for feedback…[and sometimes receive] feedback on EHR builds that might be [unrelated to the updates]…about things that are really not working for end users"
 Specific feedback, in-process feedback: "…on alerts…there's a little button [with a link]…do you have feedback about this BPA? You click on it and there's a [survey software] link….We are going to put that in our ordersets as well"
 Specific feedback, individual/group solicitation: "…we do solicit feedback…to make sure [the new build is] meeting the needs of the clinicians that are using it…[during] one on one or group follow up [sessions]"
 No channels for feedback: "I don't think we really solicit much feedback on the build from end users"

Summary/top 3 challenges to governance
 Supply/demand, not enough staff: "…The biggest challenge is…just having a large enough team, resources, project managers, analysts, which I know no one ever feels like they have enough"
 Supply/demand, excess of requests: "…there is an endless appetite for a list of really good innovations that [could] be done and we just can't do them all"
 Diverse clinician representation/input: "…one of [our biggest challenges] is getting active clinician input and involvement in EHR governance….And that's been a struggle for us because everybody's busy and attending committee meetings is not high on people's [priorities]"
 Stakeholder buy-in to governance process: "We end up often pulled in many directions by different constituencies within the organization and find ourselves challenged to resolve that conflict"

Summary/top 3 strengths
 Experience/institutional memory: "…we have a really great governance group that's well wired into the organization [such that] if they don't have the answer or the guidance, they know who we need to talk to"
 Relationships with SMEs/users: "I think [our strength] is collaboration and working across teams where clinicians are driving the change….It's clinicians saying, this is what we want to do…now this is how we can incorporate quality,…this is how we can meet compliance, not the other way around….So the way we set it up allows [end users] to have their voices heard and have their ideas and requests get solutions"
 Relationships with leadership and IT: "…we have integrated well with the operational side so that we're not siloed…. those good relationships with operational leads [lets us] make decisions together and make the system [work]"

Summary/top 3 rate-limiting steps
 Clarifying requests: "…having a clear understanding of the requirements is actually very difficult because requests come through [that say] 'let's fix this to make this better', but understanding what [the solution] is that would actually make that [problem] better can be very difficult. …we waste a lot of cycles just trying to get clear on the requirements"
 Negotiation between stakeholders: "[It] …can be a problem when your informatics team says this is just too complicated. We have to talk to Cardiology, we have to go back to Pulmonary, we're not sure their request is clearly thought out. So that's how it gets to be very labor intensive"
 Governance process: "…we don't have dedicated resources to manage governance. And so when we do the capital management process, …we have a significant amount of administrative [support] for the capital process that we don't have for the new request process"

Summary/measures of the success of governance
 Outcome metrics: "…we measure this in provider satisfaction through KLAS collaborative score and keeping that trend in an upward direction. But I think that we're now entering some conversations about really trying to capture the smaller things that provide value so that we can justify more funding for the provider informatics program, because we think the provider informatics program is what's driving this greater satisfaction with the EHR"
 Outcome metrics: "If I hear noise and I hear complaints, then we make changes…but when [our governance] meetings are running smoothly and they're getting work done; they're building stuff and their attendance is not reducing, then it's successful"
 Outcome metrics: "I don't think we formally measure [the success of governance]…It's really noise level and it comes in waves. The Innovation Center is frustrated with the governance process because it wants to go fast and break things…the governance process is frustrated with the Innovation Center because it wants to go fast and break things. So then we all sit down and talk about how we're all going to do better…[and] the noise goes away for a bit"
 Process metrics: "…we are looking to see when [new builds] are completed and if…the prioritization structure is working….We are doing a manual tracking of the days to completion, and we're comparing [categories of prioritization] and using the governance state and complete ticket close date to calculate the average"
 Not measured: "We don't actually measure the success of EHR governance, to my knowledge, so we have no metrics. So it's got me thinking about how one would actually do this"

Promising practices/unique request intake
 User stories: "We'll talk to them and then redo their request with them as a user story, which basically says which type of user is it as a certain type of user, and then what they want, and then what the reason is, what value they plan to get from it"

Promising practices/unique scoring systems
 Two-tiered scoring system: "We assign a designation of A (Safety), B (Regulatory), C (Revenue), or O (Other) for the first part of our priority methodology…and then [for] the second part we do an impact score…[which] allows an item to gather up points from all areas [including direct patient care, compliance, safety, financial impact, efficiency, patient satisfaction], even if it's not receiving a primary designation [in that area]"

Promising practices/unique user partnerships
 Requester champions: "One of the things we do require…is that you own up. Not only that you want a modification to a good idea, but you're willing to champion that ongoing…somebody has to be there to answer the questions that come up as they start to think through what is this really asking…[and once the request is built we] then expect that the champion oversee…[and] own [the request]"
 Requester champions: "[We] would typically want this enhancement to be linked to how would we measure it if we were going to approve it, because it's supposed to fix a problem and then expect that the champion…own[s] [the oversight] and assure that the metric improves. Now, what we don't do as well is centrally track all of that"

Abbreviations: BPA, best practice advisory; EHR, electronic health record; IT, information technology; SLA, service level agreement; SME, subject matter expert.


Request Intake

Requests were most often made through an online portal (n = 15, 83.3%), although some sites relied on direct communication with IT or an informaticist via email or phone. Of the sites that used an online portal, the majority used ServiceNow (Santa Clara, California, United States; n = 11, 73%). One unique method of request intake was to work with users to reframe requests into a user story: "We'll talk to them and then redo their request with them as a user story, which basically says which type of user is it as a certain type of user, and then what they want, and then what the reason is, what value they plan to get from it."


#

Request Evaluation

Physician informaticists (n = 18, 100%), nursing informaticists (n = 17, 94%), and IT staff (n = 17, 94%) were most commonly reported as members of the teams evaluating requests. Twelve (67%) reported using standard scoring systems to evaluate their requests. Scoring systems were classified as quantitative (n = 7, 38.9%) if they produced a numerical score and qualitative (n = 5, 27.8%) if they had clear criteria but did not require a numerical score. The most commonly reported categories considered in scoring systems were implementation time (n = 9, 75%), patient safety (n = 8, 66.7%), and the scale of providers/hospitals/areas impacted (n = 8, 66.7%). Of the 11 CMIOs who reported having a qualitative system or no scoring system, 45% (n = 5) further stated that they previously used a numerical scoring system that was ultimately abandoned. One unique scoring system incorporated a two-tiered methodology: "We assign a designation of A (Safety), B (Regulatory), C (Revenue), or O (Other) for the first part of our priority methodology…and then [for] the second part we do an impact score…[which] allows an item to gather up points from all areas [including direct patient care, compliance, safety, financial impact, efficiency, patient satisfaction], even if it's not receiving a primary designation [in that area]."
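
To make this two-tiered approach concrete, a minimal sketch follows. The designation letters and impact areas are taken from the quote above; the ranking order, point scale, and all identifiers are illustrative assumptions, not the site's actual implementation.

```python
from dataclasses import dataclass, field

# Tier 1: primary designation letters from the quote; the ordering
# (Safety > Regulatory > Revenue > Other) is an assumption.
DESIGNATION_RANK = {"A": 0, "B": 1, "C": 2, "O": 3}

# Tier 2: impact areas listed in the quote; the point scale is hypothetical.
IMPACT_AREAS = ("direct_patient_care", "compliance", "safety",
                "financial_impact", "efficiency", "patient_satisfaction")

@dataclass
class ModificationRequest:
    description: str
    designation: str  # "A" (Safety), "B" (Regulatory), "C" (Revenue), "O" (Other)
    impact_points: dict = field(default_factory=dict)  # area -> points

    def impact_score(self) -> int:
        # Per the quote, a request "gathers up points from all areas,"
        # even outside its primary designation.
        return sum(self.impact_points.get(area, 0) for area in IMPACT_AREAS)

def prioritize(requests: list) -> list:
    """Rank by primary designation first, then by descending impact score."""
    return sorted(requests, key=lambda r: (DESIGNATION_RANK[r.designation],
                                           -r.impact_score()))

# Example: the study's opening scenario as a safety-designated request.
queue = prioritize([
    ModificationRequest("Chemotherapy surveillance-lab orderset", "A",
                        {"safety": 3, "direct_patient_care": 2}),
    ModificationRequest("Charge-capture tweak", "C",
                        {"financial_impact": 3, "efficiency": 1}),
])
print([r.description for r in queue])
```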

Two sites (11.1%) reported standards for reviewing each request's health equity impact; nine used informal methods to improve equity; the remaining sites had no established mechanism in place to evaluate health equity. One CMIO whose site did not have a standard process in place mentioned: “I'd be hard pressed to think about ways in which health equity might be either positively or negatively affected by…the workflow. So in many cases, this is not relevant to the work….”



Build and Build Communication

Most CMIOs (n = 14, 77.8%) reported having trained physician builders/architects on staff, but none used them frequently (>5% of builds) for building EHR modifications. One CMIO reported that even if not carrying out EHR build requests, "Physician builders are very helpful to be advisers to the build analysts."

Most CMIOs reported using an online portal (n = 14, 77.8%) to communicate with the requester on the status of approved requests. Nearly half (n = 8, 44%) reported that their system did not have service level agreements (SLAs) specifying timeframes for communications with requesters; SLAs were infrequently used for break–fix response times (n = 7, 38.9%) or for the estimated time from EHR modification request to build completion (n = 5, 27.8%).



Monitoring and Feedback

A majority reported some form of standard monitoring (n = 11, 61.1%), defined as a post hoc standardized evaluation of the usage of, or user feedback about, an EHR modification (e.g., how often the modification was used, user feedback about errors). Five (27.8%) performed ad hoc evaluation of the usage of an EHR modification based on the scenario/situation/requester. One unique approach to monitoring was shared responsibility with requester champions: "[We] would typically want this enhancement to be linked to how we would measure it if we were going to approve it, because it's supposed to fix a problem and then expect that the champion…own[s] [the oversight] and assure[s] that the metric improves." CMIOs used a combination of general feedback (e.g., surveys on broad EHR usefulness) and specific feedback (e.g., a feedback channel built into an EHR task) about individual EHR modification builds.



Strengths and Weaknesses of EHR Governance Systems

Managing the mismatch between demand for EHR modifications and the supply of IT resources to manage requests was a substantial concern (n = 11, 61.1%). Clarifying and understanding new requests was identified as the slowest or most resource-intensive aspect of governance. Commonly reported strengths related to the informatics team's experience and institutional memory, and strong relationships with subject matter experts, end users, and health system leadership.

Leaders reported using both process metrics (e.g., request volumes, time to completion) and outcome metrics (e.g., effects on patients or end users) to track governance success. Outcome metrics were less clearly defined and included several informal measures, like a lack of complaints or voluntary attendance at governance meetings, to indicate engagement and provider satisfaction: "If I hear noise and I hear complaints, then we make changes…but when [our governance] meetings are running smoothly and they're getting work done; they're building stuff and their attendance is not reducing, then it's successful." However, the use of metrics to evaluate the impact of builds or the governance process itself was rare.



Discussion

Our cross-sectional study of large AMCs suggests that the approach to managing EHR modification requests is highly variable. Nonetheless, certain shared challenges exist, including the mismatch between demand for EHR modifications and the supply of informatics time, and the clarification of details needed for new requests. Reasons for this variability in governance approaches remain unclear but may stem from a lack of best practices and KPIs that leaders and health systems might use to define successful governance for these modern challenges. We highlight unique solutions used by some AMCs and propose an EHR governance KPI.

While prior published studies have described governance for implementing EHR systems, standing up IT projects, managing clinical decision support, and evaluating these systems for patient safety concerns, our study fills a key gap by examining the more recent governance need: the intake and management of requests for any EHR customization across organizations.[2] [3] [10] [12] [13] [14] [15] With this broader view, we find that despite discussions of governance for over two decades, CMIOs still view the optimal management of EHR modification requests as a major challenge, as demonstrated by the stark variability in governance practices across organizations.

While this heterogeneity in approaches may be due to differing needs across organizations, our finding that CMIOs still experience similar challenges in governance argues that these variations may more likely exist because there are few data to describe best practices.[12] [19] This is further supported by our finding that nearly half of the CMIOs who reported using a qualitative evaluation system or no scoring system had abandoned a previous quantitative approach, demonstrating that organizations are continuing to modify and experiment with their governance processes. Optimization of the EHR modification process will require a better understanding of the landscape of approaches, styles, and strategies across a larger number of sites and a linking of these programs' features to outcomes of governance. For example, one of our sites reported the use of a unique two-tiered scoring system that first assigns requests into categories based on organizational priority, and then uses a second tier to quantify the impact of requests within each category. Leaders at this system felt that this model could allow for quantitative prioritization while also adapting to changing organizational needs, but further comparison of this system to other models in terms of effective request triage, personnel time, modification request turnaround time, and user satisfaction with EHR operations and governance is needed to identify best practices. Describing these and other governance strategies is the first step toward discovering optimal approaches and stewarding scarce resources for EHR maintenance while permitting health innovation.

EHR customization through clinical decision support, ordersets, and other tools is thought to improve EHR usability and effectiveness[20] [21] [22] [23] but may exacerbate the most common problem reported by CMIOs: the mismatch between high demand for EHR modification requests and the low supply of clinical informaticist or EHR analyst time.[19] On the demand side, we find that requests for customization often involve bottlenecks during request intake, where requesters have high levels of specific domain expertise but limited informatics knowledge/experience. This may lead to inappropriate, infeasible, or unrefined requests and was reflected in a majority of CMIOs reporting that clarifying requests was the slowest step in the governance process. One solution to this issue was modeling the request intake process after "user stories" from Agile software development, in which each request is framed in a narrative format around who the modification is for, what the goal of the modification is, and why this goal is meaningful (see the sketch below).[9] [24] On the supply side, although customization is thought to be key, the value of governance on the road to modification may be opaque to both end users and operational leaders. For example, Tokazewski et al describe an innovation to improve medication refill protocols, in which the end user may see a significant improvement in their experience but not realize the governance effort required to create and monitor such systems.[23] Furthermore, a lack of metrics around the effects of governance on patient or provider outcomes makes it difficult for health systems to justify allocating additional resources to overcome these bottlenecks. One system empowers end users submitting EHR modification requests to be champions throughout the governance process and requires these champions to define and evaluate metrics of success for each such request. Defining, monitoring, and comparing these KPIs may be one solution to justifying the value of this work and addressing the supply side of this mismatch.
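
As an illustration of the user-story intake format, the sketch below recasts the study's opening scenario into the who/what/why structure the interviewed CMIO described; the record layout and field names are our assumptions, not any site's actual intake form.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """An EHR modification request reframed as an Agile-style user story.

    The three fields mirror the elements described in the interviews:
    which type of user, what they want, and what value they expect.
    The class itself is an illustrative assumption.
    """
    user_role: str   # "As a <type of user>..."
    want: str        # "...I want <capability>..."
    value: str       # "...so that <benefit>."

    def render(self) -> str:
        return f"As a {self.user_role}, I want {self.want}, so that {self.value}."

# The study's opening scenario, rewritten in user-story form.
story = UserStory(
    user_role="inpatient oncology physician",
    want="an orderset that groups and pre-selects standard surveillance labs "
         "with a particular chemotherapy order",
    value="admitted patients reliably receive standard surveillance labs",
)
print(story.render())
```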

Our data cannot yet provide the full picture of what successful EHR governance looks like, but we can provide some initial examples of feasible KPIs that span both process and outcome domains. Although we highlight common challenges, more focused study into subproblems, like the specifics of informatics staffing, team composition, organization by service line, or approaches to informatics literacy among end users, may provide additional granularity and insight into potential solutions. After examining these challenges, an important next step will be the creation of governance performance metrics. Based on our respondents' answers, one option might combine the process measures of governance for a request (time to request clarification, time in governance, and person-hours to build) with a valuation of the change effected by that process. For example, a request for a new clinical decision support alert that took 5 months of governance prior to build handoff, due to significant changes in the build specifications that allowed for a more tailored and effective build, may have a higher governance value than a request that required 3 months to build with minimal changes. Such an approach could not only allow organizations to justify clinical informaticist/IT staff time for EHR modification; it could also allow them to regularly evaluate their governance process, diagnose problems, and localize them to specific steps. Such a KPI could inform leaders about the cost/benefit of including a health equity consideration and other steps within the governance process. Standardizing a KPI across systems would also enable comparisons and insights into best practices and streamline the process of improving patient care through EHR modification.
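
A minimal sketch of such a KPI follows, combining the process measures named above with a valuation of the resulting change. The formula, the person-day aggregation, and the value scale are deliberately simple placeholders, since we propose the concept rather than a specific formula.

```python
def governance_value_kpi(days_to_clarify: float,
                         days_in_governance: float,
                         build_person_hours: float,
                         change_value: float) -> float:
    """Hypothetical governance-value KPI: value delivered per unit of effort.

    change_value is a locally assigned valuation of the resulting change
    (e.g., scored by a requester champion); the denominator aggregates the
    process measures named in the text. The conversion to person-days and
    the absence of weights are simplifying assumptions.
    """
    effort_person_days = (days_to_clarify + days_in_governance
                          + build_person_hours / 8.0)
    return change_value / max(effort_person_days, 1e-9)

# The text's example: a 5-month governance cycle that produced a more
# tailored build can outscore a quicker 3-month cycle with minimal changes,
# provided its assigned change_value is proportionally higher.
tailored = governance_value_kpi(30, 150, 80, change_value=9.0)
minimal = governance_value_kpi(10, 90, 40, change_value=2.0)
print(f"tailored={tailored:.4f}, minimal={minimal:.4f}")
```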

Our study has several limitations. First, given the sample population interviewed, our findings may not be generalizable to non-AMCs or possibly to AMCs without a clinical informatics fellowship program, including community medical centers, county medical centers, and Veterans Affairs medical centers. Such organizations may have more limited informatics resources or different priorities/constraints for use of those resources. Future studies aimed at these organizations will help provide a more complete understanding of EHR governance practices. Second, given the facilitated interview format of the study, responses are subject to recall bias. Third, interviews were conducted with CMIOs of an organization who may be more familiar with the overarching structure of the EHR modification process but may not know about detailed aspects of governance. Future studies delving deeper into the pain points identified above, investigating the current landscape of EHR governance and differences in resource allocation at non-AMCs, and developing governance performance metrics and incentives could help achieve a more successful EHR modification process for patients, end users, and IT staff.



Conclusion

EHR modification processes at large AMCs are marked by substantial variation in the face of common challenges. We highlight several novel solutions to these challenges including a two-tiered scoring system, a more rigorous request intake process, and unique user partnerships. Our study is an important first step in understanding the EHR customization process, a need which will only grow as the digital transformation continues and the breadth of electronic and digital health tools becomes more complex. To meet this need, future studies aimed at further investigating the problems highlighted here, developing governance-related KPIs, and analyzing high-performing AMCs can lead the way to systematic identification of best practices and speed improvements in care.



Clinical Relevance Statement

We describe the results of interviews with CMIOs from AMCs across the United States to provide foundational insights into the landscape of EHR modification approaches, styles, and strategies. Key results include wide variations in governance practice patterns despite common challenges, some unique solutions to these problems, and a lack of formalized EHR governance-related metrics to help organizations compare strategies to streamline the process of improving patient care. We believe our study is an important first step in understanding the EHR modification process, a need that will only grow as the digital transformation in health care continues.



Multiple-Choice Questions

  1. Which of the following was not reported as one of the top 3 challenges to EHR governance?

    a. Bringing all of the necessary stakeholders to the table and aligning around the governance process

    b. Getting representation and input from a diverse array of clinicians into the EHR governance process

    c. Negotiating with EHR vendors for access and the ability to make certain modifications

    d. Low supply of informaticist and analyst time paired with high demand for EHR modification requests

    Correct Answer: The correct answer is option c. According to the interview data summarized in [Table 1], the mismatch between demand for requests and supply of informaticist time was the most commonly mentioned challenge to governance by a large margin. Other commonly mentioned challenges included bringing stakeholders together to buy into the governance process and encouraging a diverse range of clinicians to participate in the governance process. However, working with EHR vendors was NOT a commonly described problem.

  2. Why would governance-related metrics or KPIs help organizations shape their EHR modification process?

    a. Governance metrics could allow for more standardized comparison of EHR modification practices across organizations and could help identify best practices

    b. Metrics for governance could help better track and identify the value of governance to the EHR modification process to justify allocating more resources to overcome bottlenecks

    c. Metrics that are monitored live could help identify problems within the governance process and aim modifications or solutions at the appropriate step in the process

    d. All of the above

    Correct Answer: The correct answer is option d. A governance metric or KPI could serve multiple purposes within an organization and across organizations. Within an organization, governance metrics could help demonstrate and quantify the value of the EHR modification process to the health system at large to equitably allocate resources to the process. They could also help organizations diagnose problems within their process and identify key steps where bottlenecks arise relative to their value or addition to the process. Finally, having common metrics across organizations would allow for more appropriate comparisons to identify best practices. As all of these are potential uses for governance metrics, "all of the above" is correct.



Conflict of Interest

A.A. reports that he is the founder of Kuretic Inc, which has no relationship to this work. S.A. reports receiving consulting fees from AstraZeneca, Diazyme, and Agilent Biotechnologies; none of which have any relationship to the contents of this work. R.K. reports receiving royalties from HillRom, which has no relationship to the contents of this work. The remainder of the authors declare that they have no conflict of interest in the research.

Protection of Human and Animal Subjects

The study was performed in compliance with the World Medical Association Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects and was reviewed by the UCSF Institutional Review Board.


Supplementary Material


Address for correspondence

Akshay Ravi, MD
Department of Medicine, University of California
San Francisco, 521 Parnassus Ave., Box 0131, San Francisco, CA 94143
United States   

Publication History

Received: 06 April 2023

Accepted: 07 August 2023

Accepted Manuscript online: 08 August 2023

Article published online: 25 October 2023

© 2023. Thieme. All rights reserved.

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany