Annually I work with Ministry’s IT Customer Advisory Board (our IT Steering Committee) to identify the IT projects for the coming year. Like all capital budgeting processes, we have an IT capital target based on a number of factors, such as recent financial performance and competing capital projects (usually new imaging equipment and construction projects).
At Ministry we really have two targets: money and time. As I have posted previously, we estimate how much time each IT employee has to work on projects (as opposed to support). We add up all of that time to determine the total project time for the year. I am simplifying things, but you get the idea.
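The arithmetic behind that estimate is simple enough to sketch. The staff names and hour figures below are hypothetical, not Ministry's actual data:

```python
# A minimal sketch of the capacity estimate described above.
# Each value is the hours an employee has left for project work
# after subtracting time reserved for support duties.
project_hours = {
    "analyst_a": 800,
    "analyst_b": 650,
    "engineer_c": 1_200,
}

# Total project capacity for the year is simply the sum.
total_capacity = sum(project_hours.values())
print(total_capacity)  # 2650
```

Projects approved for the year are then weighed against that total, just as capital requests are weighed against the dollar target.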
If we don’t spend as much capital as we had planned, we can save that money to spend in the future. Time is different, though: every hour we had reserved for projects is lost forever if we do not use it that way.
Demand for IT projects is so great that it is important to make sure we do not let that time go unused. In past years we approved projects, then waited for those championing each project to bring it forward. The problem with that approach is that our managers are so busy they tend to wait until the latter half of the year to get things going. In the meantime, the time set aside for projects goes unused.
This year we are encouraging our business leaders to get things moving sooner, telling them the resources are available now. This should make better use of scarce IT time and reduce the number of projects that carry over into the next year (which ultimately reduces our capacity for that year).
I will let you know how that works.
8 thoughts on “Managing The Project Pipeline”
It might also happen that the technologies or processes planned for your projects become obsolete. This will put your plans even further behind 😦
But won’t spending less money than you originally planned lead to a reduced budget for next year? I’m just asking because almost all IT managers out there try to milk the year’s budget to the last drop to increase (or at least keep) their budget for next year.
That is not the case. Historical spending is not a factor in determining investments. Frankly, I am usually in the position of asking leadership to reduce their expectations for the amount of IT we can implement in any given year.
A manager who spends money to assure themselves of a certain funding level is not a good manager.
In order to improve processes you need to review current processes to determine what will be needed to move an organization to the next performance level. One method is to use legacy systems as the foundation, moving data elements to the right place at the right time to improve the knowledge available to caregivers. This can be accomplished by mining legacy databases and sending the mined data to a web browser with a “need to know” security feature for each user of the system. Using this method gets data owners to use their allocated time more efficiently, since they are aiding the entire organization by improving processes through innovation.
PM Hut, not true. The days of milking historical budget levels are long gone. Where have you been?
Gene Kim and I have done a lot of research on this subject, and it turns out that measuring and controlling unplanned work is key. In our empirical research project at the IT Process Institute we found that high-performing IT orgs get 8x more projects done. They spend less than 10% of their opex labor budget on unplanned work (firefighting, audit and security drive-bys, etc.).
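For readers who want to track the metric the comment describes, here is a hypothetical illustration: the share of operational labor hours spent on unplanned work. The hour figures are made up for the example.

```python
# Hypothetical figures for one year of operational labor.
opex_labor_hours = 10_000
unplanned_hours = 900  # firefighting, audit and security drive-bys, etc.

# Share of opex labor consumed by unplanned work; the research cited
# above found high performers keep this under 10%.
unplanned_share = unplanned_hours / opex_labor_hours
print(f"{unplanned_share:.0%}")  # 9%
```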
I would love to talk more with you about this! I am @kevinbehr on twitter.
In my experience the trick is keeping the time allocated to capital work from being shifted to operational work without adequate consideration. Now, some fires do have to be fought immediately, sure, but there’s more quid than pro quo.
Does your research show any governance trends that might help on that front?
Gene and I actually set out to find the Pareto subset of the controls and processes in ITIL and COBIT. We knew that high performers used fewer controls, had better uptime, and managed more with less staff.
The big surprise was how much better the high performers were than the rest of the pack. They were able to get up to 8x more projects done and execute 14-16x more changes with half the change failure rate. In the rare event they did experience an outage, their MTTR was as little as one-tenth that of the low performers!
There are some major trends and lessons learned from this particular study. I have been helping IT orgs benefit from it by building an on-ramp to organizational performance improvement based on this and other scientific studies we have conducted. If you want to talk more about this, feel free to contact me on my blog!