Most people think that the role of the Project Management Office (PMO) is similar to that of an airplane pilot. It is their job to make sure that the plane gets to its destination without crashing into the ground.
While that’s true, and a critical role of a Project Management Office, it may not be their most important function. The PMO also plays the role of air traffic controller. That is, they need to prevent projects from colliding with other projects.
At Ministry Health Care we typically have 100 IT projects in flight at any time. All of these projects are competing for the same IT resources. If each of these projects is planned in a vacuum, inevitably there will be midair collisions as different projects try to grab the same resources during the same time period. By tracking resource assignments on each project, the PMO can stack all of the projects on top of each other and make sure that no single resource is over-allocated during any time period. This allows us to launch more projects, more quickly, in the same period of time, just as air traffic controllers make it possible to land planes more frequently using the same number of runways.
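The "stacking" check can be sketched in a few lines of code. This is a minimal illustration, not our actual PMO tooling; the project names, resource names, and 40-hour weekly capacity are hypothetical, and real assignments would come from a project management system rather than a hard-coded list:

```python
from collections import defaultdict

# Hypothetical assignments: (project, resource, start_week, end_week, hours_per_week)
assignments = [
    ("EHR Upgrade", "DBA Team", 1, 8, 20),
    ("ERP Rollout", "DBA Team", 5, 12, 30),
    ("Portal Launch", "Web Team", 1, 6, 25),
]

CAPACITY = 40  # assumed hours available per resource per week

def find_overallocations(assignments, capacity=CAPACITY):
    """Stack every project's weekly demand per resource and flag weeks over capacity."""
    load = defaultdict(lambda: defaultdict(int))  # resource -> week -> total hours
    for _project, resource, start, end, hours in assignments:
        for week in range(start, end + 1):
            load[resource][week] += hours
    return [
        (resource, week, hours)
        for resource, weeks in load.items()
        for week, hours in sorted(weeks.items())
        if hours > capacity
    ]

for resource, week, hours in find_overallocations(assignments):
    print(f"{resource} is booked for {hours}h in week {week} (capacity {CAPACITY}h)")
```

In this toy data the "DBA Team" is the midair collision: two projects overlap in weeks 5 through 8, demanding 50 hours against a 40-hour week, which is exactly the kind of conflict the PMO resolves by re-sequencing one of the projects.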
Recently, somebody challenged me to describe what it means to run IT like a business. This is what I came up with:
- Businesses have customers (not users).
- Businesses thrive by providing their customers with goods and services that their customers want at a cost those customers consider a value.
Sometimes we find ourselves providing services that our customers don’t want. That could mean we got ahead of ourselves and started providing a solution without first providing the consulting service that creates the desire to receive the service.
Misunderstandings, even small ones, can result in thousands of hours of wasted work. It is really important in our field that we communicate clearly. One common source of misunderstanding is the pronoun. I would encourage you to listen to how often people use “it”, “that”, and “them”. I have made it a habit to ask my direct reports not to use pronouns, but to explicitly state by name the people, places, and things to which or whom they are referring.
Pronouns recently got the President in trouble. Trying to borrow a page from the Elizabeth Warren playbook, the President recently said:
“Somebody invested in roads and bridges. If you’ve got a business—you didn’t build that. Somebody else made that happen.”
Many conservatives believe that “that” refers to “your business”, as in “You didn’t build your business. Somebody else made your business happen.”
Supporters of the President say that “that” refers to the roads and bridges that the business benefits from, as in “You didn’t build the roads and bridges that support your business. Somebody else made the roads and bridges happen.”
I am NOT going to engage in a political debate, so don’t even bother leaving a comment about the political context. I am just saying that this was an ambiguous statement. The type of ambiguous statement that can result in a failed project or political campaign. Clarity is important. It is important to craft messages carefully and clearly to get the desired result.
Ministry has been championing “real-time documentation”, that is, the practice of entering patient information into the EHR at the time it is collected. Historically, caregivers have clung to the old process of writing on paper and then re-entering it into the EHR later. Our Nurse Informaticians are doing the hard work of changing that practice. In the areas where we have seen the change, the nurses are reporting that it has given them more time to spend with their patients. The elimination of transcription also means real-time documentation is a more accurate practice.
The following headline caught my eye as I was reading through my RSS feed:
How to deploy ERP in 120 days
As soon as I read this headline I knew I was going to unleash a rant.
Caron Carlson wrote this piece, and it was a good story about Johnson & Johnson’s acquisition of a new business unit and how that business unit was transitioned to J&J’s ERP system (and other technologies) in 3 months. I am sure that this was a phenomenal accomplishment by J&J that required a lot of bright and talented people. I would bet that they have prepared for acquisitions like this and have a plan in place to quickly incorporate new business units (something I need to develop for Ministry).
I always enjoy reading Caron’s stuff. But, I have to pick a bone with her. This headline is inaccurate. J&J did not implement an ERP in 120 days. They added a new facility to an existing ERP (which probably took years to develop).
That may seem like a nuance, but it is frustrating to CIOs. Healthcare executives read these headlines (but not the articles) and then develop the false impression that a company can deploy an ERP in 120 days. For any company that even thinks it needs an ERP, a 3-month implementation is not possible. Most companies can’t negotiate the contract in 3 months.
The software vendors are already feeding unrealistic time frames to business unit leaders because they know long projects need a different level of review and decision making that could interfere with their desire to close a deal quickly. It is the bane of my existence. Add the unrealistic time frames to these other gems I hear passed on from my non-IT coworkers who are talking to the software vendors:
- “None of our clients have ever had any problems with their implementations”
- “Our solution takes no IT time”
- “We already have interfaces off-the-shelf that will work in our environment (without knowing anything about our IT environment)”
- “We do all the work”
- “This software is so simple you don’t need to worry about project planning and management”
Most of these software sales people are good and decent people. They are valuable resources and I enjoy working with them. But they are not the best resource for information about the actual implementation. We should rely on the history we have implementing nearly 100 software projects a year. That is the unbiased data. The software sales person is not present at the implementations and has too great an incentive to be a source of unbiased information. Just because they believe it doesn’t make it true.
So, if you are in the technology press (especially if you serve IT leaders), give us a little help. Don’t reinforce inaccuracies told in the software sales cycle.
Remember, the HITECH act (aka Meaningful Use) is an incentive program, not a mandate. As we look at Stage 2 we will be evaluating the increasing effort against the decreasing financial incentive – remember, Stage 2 is worth less than half of Stage 1.
Sure there is a supposed penalty, and we will need to take that into account too. But that penalty, starting in 2015 (or later), will be based on the amount of Medicare increase. Medicare may not be increasing by 2015.
Before I pitch a multi-million dollar effort to the senior management team we have to evaluate the ROI.
The other consideration is how well the Stage 2 objectives are in sync with our patient care executives’ vision for clinical IT.
Our first hospital to attest for EHR Incentives is expected to receive $3,173,094 for Stage 1. To qualify for that incentive we spent $381,133. This includes the cost for 5,219 hours of IT time to complete the work.
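Using the figures above, the return on this first attestation is easy to work out. The arithmetic below is only a back-of-the-envelope sketch using the two numbers from this post; it ignores the prior $100M infrastructure investment mentioned later, which is why the hospital-by-hospital picture differs:

```python
# Figures from our first hospital's Stage 1 attestation
incentive = 3_173_094  # expected Stage 1 EHR incentive payment ($)
cost = 381_133         # spend to qualify ($), including 5,219 hours of IT time

net_return = incentive - cost  # $2,791,961
roi = net_return / cost        # roughly 7.3x, i.e. about 730%

print(f"Net return: ${net_return:,}")
print(f"ROI: {roi:.1%}")
```

That ratio is what makes this a best-case site; subsequent hospitals, with more remediation work to fund, will see this multiple shrink toward break-even.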
So, it surprised me when I was listening to a CIO discuss Meaningful Use on one of the hscio.com podcasts. He stated that Meaningful Use was an underfunded mandate. That is far from our early experience at Ministry.
I don’t think either of us is incorrect. We just appear to have started from different positions and taken different paths to attest for Stage 1.
In our pursuit of the EHR incentives provided under the stimulus bill we piloted one hospital to create a standard approach for the remaining 14. Our pilot site was our most technically sophisticated hospital, so the work to be done was less than typical. In fact, this hospital (Ministry Saint Clare’s Hospital in Weston, WI) is an all-digital hospital that has had virtually all orders entered by physicians since 2006. We have invested over $100M in IT at this hospital, so it is rewarding to know that we made decisions that positioned us well to achieve Meaningful Use. This incentive money offsets a small portion of that investment.
I believe that the effort to get this hospital positioned to attest for Stage 1 was as close to minimal as any hospital in the country. In my mind this is a best case for return on investment. Our remaining hospitals will be closer to break-even.
One thing that is not significantly different between my experience and the CIO on the podcast is the software. We both use GE Centricity Enterprise as our core HIS system. However, we did self-certify Centricity (and a collection of other EHR technologies) rather than upgrade to GE’s certified version. This also saved us money and allowed us to move quickly.
In January I wrote about the importance of using Root Cause Analysis at Ministry Health Care as a way to learn from our mistakes. This process is so important to us that we have an employee (Fred) that oversees Root Cause Analysis and facilitates the meetings. Those meetings are generally calm meetings that take place after the IT service interruption is addressed. That is not the case when we are in actual firefighting mode.
We have learned a couple of things about fighting fires, that is, addressing customer-impacting service interruptions. We have learned that the best way to respond to service interruptions is counter-intuitive and kind of complicated. So, we have done what we usually do when we want to improve something: we created written guidance on how to respond to IT Service Interruptions, and we are constantly improving that written guidance.
The primary way we address an IT Service interruption is through the use of a Critical Response Team. The Critical Response Team has two primary goals:
- Cure the service interruption as quickly and completely as possible
- Communicate to our impacted customers in a timely manner, providing the information they need
Prior to developing our Critical Response Team methodology we seemed to fall into the trap of thinking we should not bother the technical resources so they could fix the problem as quickly as possible. This is a huge mistake. Even if the duration of a critical application outage is extended by a great deal of time, it is critical to communicate the relevant facts about the outage to the customer. Time and time again we see that when we handle the communication well, the customers empathize with our plight and thank us for our efforts. If we go dark, we receive a lot of criticism, even if the efforts to resolve the problem were heroic. In essence, we buy ourselves time when we are good communicators.
When we form a Critical Response Team the meetings have three primary agenda items:
- Define the problem.
- Develop an action plan, with clearly defined assignments, to research the problem or resolve it.
- Develop the communications including the message and the audience.
By nature, people want to get off the call after the second item and assume someone else will handle the communication. But we find that the communication must be written during that call, while the technical experts are still on the line. This is the only way we get it right, and it reinforces the importance of communication.
There are some keys to communicating with customers regarding outages:
- Communication coming from a named individual is critical in how the customer perceives the authenticity of the message. Critical Response Team messages should come from a person, not a generic mailbox.
- Tell the customers that addressing the interruption is our top priority and our team is dropping everything.
- Tell the customers that we know that this is impacting their ability to be efficient and effective and that we feel their pain.
- Tell them everything we know about the effects of the problem on them. Avoid the technical details, write the message from their perspective.
- Let them know that we are sharing everything we know, but things may change as we learn more.
- Provide an estimate about the duration of the outage. IT generally doesn’t like to do this because they think they will be held responsible for estimates given with incomplete information. But the customers need this because this will determine if they go to downtime procedures, if they should arrange overtime or if they should plan to bring in additional staff.
Let me know if you would like a copy of our Critical Response Team approach. As with everything, it is a work in progress. Just like our Root Cause Analysis changes the way we operate in IT, we perform Root Cause Analysis on our response to service interruptions and improve our Critical Response Team approach.
This bit of brilliance comes from Ministry’s Northwoods region (yes, we have a Northwoods region – how cool is that?). The supervisor of our desktop support team has three simple goals for every project his team works on:
- Happy Customers
- A bored Project Manager
- A tech released to work on IT Operations because no hardware is breaking and everything was executed to plan
I wish I would have come up with that. Simple, memorable, powerful.
I used to think about the day when I would have fixed everything and we would stop having IT outages. Of course that is silly. Like other healthcare organizations, we are adding applications to the portfolio every year as new solutions address previously under-automated areas. Most of these are not core parts of the IT architecture; they are supplemental, such as documentation systems for clinical departments (e.g., rehab) and contract modeling systems.
With the increase in the number of applications in the portfolio comes complexity. In addition, our infrastructure is becoming much more complicated, including a more sophisticated network, changing virtualization technologies, and complex storage.
So, our IT Operations philosophy is to perform a Root Cause Analysis on every critical service interruption. Our Root Cause Analysis asks three things:
- How can we prevent this type of outage in the future?
- How can we detect this type of outage in the future?
- How can we respond to this type of outage more quickly?
The second two questions are important. Even if the cause of the service interruption is a simple fix, sooner or later stuff is going to hit the fan. We want our IT folks to see it when it happens and already be communicating to our customers how we are fixing the problem before they call us.